Created 6 new core system functions to group all CORE service operations:
- setup_core_users(): Creates the rclone, radarr, sonarr, prowlarr, decypharr, zilean, and zurg service users
- setup_core_directories(): Creates all core config and data directories
- setup_core_permissions(): Sets permissions for core directories
- setup_core_files(): Copies/creates rclone.conf, zurg config, decypharr config
- start_core_services(): Starts 10 core Docker containers
- configure_core_services(): Configures radarr, sonarr, prowlarr, decypharr
These functions group ALL operations for core services (minimum functional system).
Next step: Create optional service functions and refactor PHASE 3/4.
Part of reorganization plan to prepare for dynamic service selection.
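As an illustration of the grouping approach, a minimal sketch of one such wrapper (the directory layout is an assumption; create_folder is the helper referenced later in these notes):

```bash
setup_core_directories() {
    local base_dir="$1"    # e.g. /mediacenter
    local svc
    # Directory names are illustrative, not the actual layout.
    for svc in rclone radarr sonarr prowlarr decypharr zilean zurg; do
        create_folder "${base_dir}/config/${svc}"
    done
    create_folder "${base_dir}/data"
}
```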
Added configure_decypharr_authentication() atomic function to configure
Decypharr authentication when user enables auth during setup.
Implementation:
- New atomic function: configure_decypharr_authentication(port, username, password)
- Uses Decypharr's specific endpoint: POST /api/update-auth
- Payload format: {username, password, confirm_password}
- Called after Sonarr authentication when AUTH_ENABLED=true
This ensures Decypharr is protected with the same credentials as
Radarr, Sonarr, and Prowlarr when authentication is enabled.
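A minimal sketch of the function under the notes above: the endpoint and payload fields come from this change, while reaching Decypharr via localhost and the published port is an assumption, and the project's log_* helpers are assumed available.

```bash
configure_decypharr_authentication() {
    local port="$1" username="$2" password="$3"
    local payload
    # Payload format from the notes: {username, password, confirm_password}
    payload=$(printf '{"username":"%s","password":"%s","confirm_password":"%s"}' \
        "$username" "$password" "$password")
    if curl -sf -X POST "http://localhost:${port}/api/update-auth" \
            -H "Content-Type: application/json" \
            -d "$payload" > /dev/null; then
        log_success "Decypharr authentication configured"
    else
        log_error "Failed to configure Decypharr authentication"
        return 1
    fi
}
```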
The debug logging was added temporarily to diagnose JSON syntax errors.
Issue is now fixed (DECYPHARR_CONTAINER_PORT was missing from .env.defaults).
Removing debug output to keep installation logs clean.
Problem:
- configure_arr_authentication was hardcoded to use /api/v3/ for all services
- Prowlarr uses /api/v1/ while Radarr/Sonarr use /api/v3/
- This caused 'Failed to get Prowlarr config for authentication setup' error
Solution:
- Added optional 6th parameter 'api_version' to configure_arr_authentication
- Defaults to 'v3' for Radarr/Sonarr (backward compatible)
- Pass 'v1' explicitly when calling for Prowlarr
This fixes Prowlarr authentication configuration during setup.
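Sketch of the optional api_version parameter; the function body is abbreviated, the config URL is an assumption, and 7878 below is Radarr's default port shown for illustration.

```bash
configure_arr_authentication() {
    local service="$1" port="$2" api_key="$3" user="$4" pass="$5"
    local api_version="${6:-v3}"   # Radarr/Sonarr default; pass "v1" for Prowlarr
    local base_url="http://localhost:${port}/api/${api_version}"
    # ... GET ${base_url}/config/host, set the credentials, PUT it back ...
}

# Call sites (ports illustrative):
configure_arr_authentication "Radarr"   7878 "$RADARR_API_KEY"   "$user" "$pass"
configure_arr_authentication "Prowlarr" 9696 "$PROWLARR_API_KEY" "$user" "$pass" "v1"
```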
Problem:
- DECYPHARR_CONTAINER_PORT was referenced in setup.sh but not defined in .env.defaults
- This caused $DECYPHARR_CONTAINER_PORT to be empty, generating invalid JSON
- Invalid JSON error: "'}' is an invalid start of a value" in fields[1].value
Solution:
- Added DECYPHARR_CONTAINER_PORT=8282 to .env.defaults
- This is the internal container port (8282) vs the host port (8283)
- Docker containers communicate using internal ports within the network
This fixes the JSON parse error when adding Decypharr as download client.
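A quick demonstration of how the unset variable produced the malformed payload (the field name is illustrative, not the real download client JSON):

```bash
DECYPHARR_CONTAINER_PORT=""     # previously undefined, expands to nothing
printf '{"port": %s}\n' "$DECYPHARR_CONTAINER_PORT"
# -> {"port": }     invalid JSON: "'}' is an invalid start of a value"

DECYPHARR_CONTAINER_PORT=8282   # now defined in .env.defaults
printf '{"port": %s}\n' "$DECYPHARR_CONTAINER_PORT"
# -> {"port": 8282}
```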
Problem:
- configure_arr_service was passing $api_key (Radarr/Sonarr API key) twice
- Should pass $download_api_key (parameter 6) as the client API key
- This caused the wrong API key to be used for Decypharr configuration
Solution:
- Changed line 38 in setup-services.sh to use $download_api_key instead of $api_key
- Now correctly passes the download client's API key to add_download_client
Problem:
- Line 1290 had 'local status' inside a for loop in the main script scope
- 'local' can only be used inside functions in Bash
- This caused 17 errors: 'local: can only be used in a function'
Solution:
- Changed 'local status' to 'status=""' (regular variable initialization)
- Without 'local' it is an ordinary script-level variable, reset at the start of each iteration
This fixes the error seen during service validation phase.
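A minimal reproduction of the issue and the fix (the service list is illustrative):

```bash
#!/usr/bin/env bash
services=(radarr sonarr prowlarr)

# Broken at script scope: 'local' is only valid inside a function.
# for service in "${services[@]}"; do
#     local status      # -> "local: can only be used in a function"
# done

# Fixed: plain assignment, reset on each iteration.
for service in "${services[@]}"; do
    status=""
    echo "checking ${service}: status='${status}'"
done
```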
Problem:
- Radarr and Sonarr were trying to connect to Decypharr using the host port (8283)
- Docker containers must use internal container ports when communicating within the network
- Decypharr listens on port 8282 internally, mapped to 8283 on the host
Solution:
- Added DECYPHARR_CONTAINER_PORT=8282 to .env.defaults
- Updated configure_arr_service calls to use $DECYPHARR_CONTAINER_PORT (8282)
instead of $DECYPHARR_PORT (8283)
This fixes the "Connection refused (decypharr:8283)" error when adding
Decypharr as download client to Radarr and Sonarr.
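The distinction in .env terms (the compose port mapping referenced in the comment is assumed to be "8283:8282"):

```bash
# .env.defaults
DECYPHARR_PORT=8283            # host-side port (compose mapping assumed: "8283:8282")
DECYPHARR_CONTAINER_PORT=8282  # what other containers must use: http://decypharr:8282
```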
Changed get_prowlarr_app_id() and trigger_prowlarr_sync() to accept port as parameter:
- get_prowlarr_app_id(port, api_key, app_name, output_var)
- trigger_prowlarr_sync(port, api_key, app_id)
This makes functions truly reusable for any Prowlarr instance on any port.
Updated all call sites to pass port 9696 explicitly.
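A hedged sketch of the parameterised helpers; the Prowlarr v1 endpoint paths, the sync command payload, and the use of jq are assumptions for illustration, not taken from the script.

```bash
get_prowlarr_app_id() {
    local port="$1" api_key="$2" app_name="$3" output_var="$4"
    local id
    id=$(curl -sf "http://localhost:${port}/api/v1/applications" \
            -H "X-Api-Key: ${api_key}" |
         jq -r --arg name "$app_name" '.[] | select(.name == $name) | .id')
    printf -v "$output_var" '%s' "$id"
}

trigger_prowlarr_sync() {
    local port="$1" api_key="$2" app_id="$3"
    # Command name and payload are assumptions about the /api/v1/command endpoint.
    curl -sf -X POST "http://localhost:${port}/api/v1/command" \
        -H "X-Api-Key: ${api_key}" -H "Content-Type: application/json" \
        -d "{\"name\":\"ApplicationIndexerSync\",\"applicationIds\":[${app_id}]}" \
        > /dev/null
}

# Call sites now pass the port explicitly:
get_prowlarr_app_id 9696 "$PROWLARR_API_KEY" "Radarr" radarr_app_id
trigger_prowlarr_sync 9696 "$PROWLARR_API_KEY" "$radarr_app_id"
```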
Refactored remaining sections:
- Prowlarr indexers: Use api_post_request() for 4 indexers (saves repetitive curl calls)
- Prowlarr sync: Use get_prowlarr_app_id() + trigger_prowlarr_sync() (2 apps)
- Recyclarr: Use run_recyclarr_sync() (replaces 20+ lines of awk/docker run)
- Prowlarr auth: Use configure_arr_authentication()
- API keys save: Use append_to_file()
- Final restart: Use run_docker_compose_up()
Total refactoring summary (iterations 10-13):
- Created 13 atomic functions
- Reduced ~200 lines of repetitive code
- All functions called N times as designed
- Maintained 100% original behavior
- No new functions needed for this iteration
Refactored lines ~1214-1395:
- Use run_docker_compose_up() for Docker startup
- Use validate_docker_service() in loop for service validation
- Use get_docker_health_status() in loop for health checking
- Use wait_for_http_service() for HTTP service readiness
- Use wait_for_docker_health() for Decypharr
- Use configure_arr_authentication() for Radarr/Sonarr auth (replaces 60+ lines)
Removed inline wait_for_service() function (now using wait_for_http_service).
All functions called N times as designed. No new functions created.
Added atomic functions (4-13):
- wait_for_http_service(name, url, max_attempts, sleep_sec): Wait for HTTP service
- wait_for_docker_health(container, max_attempts, sleep_sec): Wait for healthy container
- api_get_request(url, api_key, output_var): Generic GET with API key
- api_put_request(url, api_key, json_data): Generic PUT with HTTP code validation
- api_post_request(url, api_key, json_data): Generic POST request
- configure_arr_authentication(service, port, api_key, user, pass): Configure *arr auth
- get_prowlarr_app_id(api_key, app_name, output_var): Get Prowlarr app ID
- trigger_prowlarr_sync(api_key, app_id): Trigger sync to ONE app
- run_recyclarr_sync(config_file, radarr_key, sonarr_key): Run recyclarr with injected keys
- append_to_file(file_path, content): Append to file
All functions are atomic and designed to be called N times.
Next: Refactor lines 935-1401 using these atomic functions.
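For illustration, a sketch of two of the helpers listed above; curl options and logging details are illustrative, and the project's log_* helpers are assumed available.

```bash
wait_for_http_service() {
    local name="$1" url="$2" max_attempts="${3:-30}" sleep_sec="${4:-2}"
    local attempt=1
    while [ "$attempt" -le "$max_attempts" ]; do
        if curl -sf --connect-timeout 2 "$url" > /dev/null; then
            log_success "${name} is ready"
            return 0
        fi
        sleep "$sleep_sec"
        attempt=$((attempt + 1))   # avoids the ((attempt++)) / set -e pitfall
    done
    log_error "${name} did not become ready after ${max_attempts} attempts"
    return 1
}

api_get_request() {
    local url="$1" api_key="$2" output_var="$3"
    local response
    response=$(curl -sf -H "X-Api-Key: ${api_key}" "$url") || return 1
    printf -v "$output_var" '%s' "$response"
}
```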
Added atomic functions for Docker operations:
- run_docker_compose_up(compose_dir): Execute docker compose with validation
- validate_docker_service(service_name): Check if ONE service is running (call N times)
- get_docker_health_status(container_name, output_var): Get health status of ONE container
These functions will be used to refactor lines 935-1401 (service startup and configuration).
Next: Create remaining 10 atomic functions for API operations, healthchecks, and Prowlarr.
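A sketch of what these three Docker helpers might look like (error handling and output parsing are illustrative):

```bash
run_docker_compose_up() {
    local compose_dir="$1"
    (cd "$compose_dir" && docker compose up -d) || {
        log_error "docker compose up failed in ${compose_dir}"
        return 1
    }
}

validate_docker_service() {
    local service_name="$1"
    # Exact-match the running container names.
    docker ps --format '{{.Names}}' | grep -qx "$service_name"
}

get_docker_health_status() {
    local container_name="$1" output_var="$2"
    local status
    status=$(docker inspect -f '{{.State.Health.Status}}' "$container_name" 2>/dev/null) || status="unknown"
    printf -v "$output_var" '%s' "$status"
}
```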
- Use create_file_from_content() for healthcheck test file
- Use create_folder() for healthcheck directory
- Convert cleanup prompt to ask_user_input()
- No new functions created, only reused existing validated functions
- Replaced mount healthcheck prompt with ask_user_input()
- Replaced cron job prompt with ask_user_input()
- Replaced auto-configuration prompt with ask_user_input()
- Used copy_file() for healthcheck script installation
- Used create_folder() for logs directory
- Used copy_file() for .env.local generation
- All prompts and file operations now use atomic functions
Eighth iteration - healthcheck and phase 4 prompts.
- Created set_permissions(path, perms, owner) - reusable for any path
- Created copy_file(source, dest, owner, perms) - reusable for file copying
- Created download_file(url, dest, owner, perms) - reusable for downloads
- Replaced permission setting with multiple set_permissions() calls
- Replaced recyclarr/rclone file copies with copy_file()
- Replaced indexer downloads with download_file()
- All functions are 100% atomic and reusable
Sixth iteration - file operations atomic functions.
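A sketch of the three file-operation helpers with the signatures listed above (error handling is illustrative):

```bash
set_permissions() {
    local path="$1" perms="$2" owner="$3"
    chown "$owner" "$path" && chmod "$perms" "$path"
}

copy_file() {
    local source="$1" dest="$2" owner="$3" perms="$4"
    cp "$source" "$dest" && set_permissions "$dest" "$perms" "$owner"
}

download_file() {
    local url="$1" dest="$2" owner="$3" perms="$4"
    curl -sfL -o "$dest" "$url" && set_permissions "$dest" "$perms" "$owner"
}
```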
Restored docker-compose.yml from main branch to maintain full functionality
while we continue step-by-step refactoring. The file was previously modified
by compose-generator.sh which caused service validation to fail.
Removed duplicate echo at end of ask_password and reorganized blank lines
in authentication section to prevent double spacing between prompts.
- Created check_uid_conflict() to verify and assign available UIDs
- Created check_gid_conflict() to verify and assign available GIDs
- Created create_env_install() to generate .env.install file
- Refactored UID/GID checking to iterate over users using atomic functions
- Simplified conflict detection logic with reusable components
Fourth iteration - atomic function composition for user management.
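A sketch of the UID conflict check; the increment-until-free strategy and the starting UID are assumptions about the implementation.

```bash
check_uid_conflict() {
    local desired_uid="$1" output_var="$2"
    local uid="$desired_uid"
    while getent passwd "$uid" > /dev/null; do
        uid=$((uid + 1))       # try the next UID until a free one is found
    done
    printf -v "$output_var" '%s' "$uid"
}

# Usage: resolve a free UID for one service user (13001 is illustrative)
check_uid_conflict 13001 rclone_uid
echo "rclone will use UID ${rclone_uid}"
```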
Prevents extra blank lines and separators when using ask_user_input
with empty title for follow-up prompts (like username after auth choice).
- Created ask_password() reusable function for hidden password input
- Converted Service Authentication using: ask_user_input + ask_password
- Converted Traefik configuration using: ask_user_input (with conditional domain)
- All functions are reusable atomic components
Third iteration - atomic function composition approach.
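A sketch of ask_password; the confirmation loop and prompt wording are assumptions.

```bash
ask_password() {
    local prompt="$1" output_var="$2"
    local pass1 pass2
    while true; do
        read -r -s -p "${prompt}: " pass1; echo
        read -r -s -p "Confirm ${prompt,,}: " pass2; echo
        [ "$pass1" = "$pass2" ] && [ -n "$pass1" ] && break
        echo "Passwords do not match or are empty, please try again."
    done
    printf -v "$output_var" '%s' "$pass1"
}
```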
- Replaced Timezone Configuration with ask_user_input()
- Replaced Real-Debrid API Token with ask_user_input() (required=true)
- Replaced Plex Claim Token with ask_user_input() (optional)
Second iteration - step by step validation approach.
- Created check_root() function for root verification
- Created check_existing_config() for .env.install detection
- Created ask_user_input() as standard method for user prompts
- Replaced Installation Directory prompt with ask_user_input()
First iteration of modular refactoring - rest of code remains unchanged.
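A sketch of ask_user_input with a signature inferred from how it is used in these notes; the real parameters may differ, and the project's log_section helper is assumed available.

```bash
ask_user_input() {
    local title="$1" prompt="$2" default="$3" required="$4" output_var="$5"
    local value
    [ -n "$title" ] && log_section "$title"   # empty title: no extra separator
    while true; do
        read -r -p "${prompt} [${default}]: " value
        value="${value:-$default}"
        if [ "$required" = "true" ] && [ -z "$value" ]; then
            echo "A value is required."
            continue
        fi
        break
    done
    printf -v "$output_var" '%s' "$value"
}

# Example: ask_user_input "Installation Directory" "Install path" "/mediacenter" true INSTALL_DIR
```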
Added sleep action to setup-executor.sh to allow waiting for services
to fully initialize after Docker healthcheck passes. Applied 10s delay
after Decypharr healthcheck in both radarr.json and sonarr.json to
ensure qBittorrent API on port 8283 is fully ready before attempting
to add download client.
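A sketch of how the sleep action might be dispatched in setup-executor.sh; the step schema and the action names other than sleep are assumptions for illustration.

```bash
run_step() {
    local action="$1"; shift
    case "$action" in
        sleep)
            local seconds="${1:-10}"
            log_info "Waiting ${seconds}s for the service to finish initialising"
            sleep "$seconds"
            ;;
        # ... other actions (wait_for_health, add_download_client, ...) omitted
        *)
            log_error "Unknown action: ${action}"
            return 1
            ;;
    esac
}

# e.g. a 10s pause after the Decypharr healthcheck step:
run_step sleep 10
```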
Reverted complex curl-based checks back to original logic.
Original setup.sh ONLY checks Docker healthcheck:
- docker inspect -f '{{.State.Health.Status}}'
- Waits until status = 'healthy'
- No additional HTTP verification needed
The previous curl-based checks were over-engineered.
Decypharr's healthcheck is sufficient - it validates that the qBittorrent API is ready.
Matches original working behavior from main branch.
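The resulting wait loop, following the original logic above (attempt count and sleep interval are illustrative):

```bash
wait_for_healthy() {
    local container="$1" max_attempts="${2:-60}"
    local attempt=1 status
    while [ "$attempt" -le "$max_attempts" ]; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$container" 2>/dev/null || echo "unknown")
        [ "$status" = "healthy" ] && return 0
        sleep 2
        attempt=$((attempt + 1))
    done
    return 1
}
```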
Problem:
- docker run curlimages/curl hangs on first execution
- Image download happens during wait loop
- Can exceed timeout or hang indefinitely
Solution:
- Check if curlimages/curl already present
- Pre-pull image BEFORE entering wait loop
- Subsequent docker run commands are instant
- Prevents timeout issues during service waits
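A sketch of the pre-pull guard described above (log_info is the project helper):

```bash
# Pre-pull the test image before entering the wait loop so the download does
# not eat into the timeout.
if ! docker image inspect curlimages/curl > /dev/null 2>&1; then
    log_info "Pulling curlimages/curl for network checks..."
    docker pull curlimages/curl > /dev/null
fi
```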
Increased the service wait timeout from 60s to 120s:
- 60s was insufficient for the first run
- The curlimages/curl image needs to download (adds ~20-30s)
- Decypharr's qBittorrent needs time to start
- 120s provides an adequate buffer for both
Affects radarr.json and sonarr.json
CRITICAL: Previous curl test ran on HOST, not in Docker network
Problem:
- wait_for_service curl ran on ubuntu host
- Reported port 8283 accessible from host perspective
- But Radarr connects from INSIDE Docker network (mediacenter)
- Port may be accessible from host but NOT from Docker network
Solution:
- Use 'docker run --rm --network mediacenter curlimages/curl'
- This spawns ephemeral container in same network as services
- Tests connectivity exactly as Radarr/Sonarr will experience it
- Only reports ready when port accessible from Docker network perspective
This ensures wait_for_service validates the actual network path services use.
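Example of the network-scoped check (URL and timeout are illustrative; curlimages/curl passes its arguments straight to curl):

```bash
# Test connectivity from inside the same Docker network the *arr services use.
docker run --rm --network mediacenter curlimages/curl \
    -sf --connect-timeout 2 "http://decypharr:8283/" > /dev/null \
    && echo "decypharr reachable from the mediacenter network"
```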
CRITICAL FIX: Docker healthy != port accessible
Problem:
- wait_for_service only checked Docker healthcheck status
- Decypharr healthcheck passes before qBittorrent API port 8283 is ready
- Resulted in 'Connection refused' when trying to add download client
Solution:
- After container reports healthy, verify port with curl request
- curl -sf checks if HTTP port actually responds
- 2 second connect timeout to fail fast
- Only returns success when BOTH healthy AND port accessible
This ensures services are truly ready before attempting API configuration.
- Decypharr was not fully ready when radarr/sonarr tried to connect
- Added wait_for_service step before add_download_client in both services
- This ensures Decypharr port 8283 is accessible before connection attempt
- Fixes HTTP 400 'Connection refused (decypharr:8283)' error
CRITICAL FIX: ((i++)) causes script termination with set -e
Problem:
- The exit status of Bash arithmetic ((...)) is non-zero when the expression evaluates to 0
- ((i++)) is a post-increment: with i=0 it evaluates to the OLD value 0, so the command fails
- With 'set -e', any command returning non-zero terminates the script
- This caused setup-executor to exit after first step completion
Solution:
- Use i=$((i + 1)) instead of ((i++))
- This always returns exit code 0
- Script can continue to subsequent steps
This was the final blocker preventing auto-configuration from working.
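A minimal reproduction of the pitfall and the fix:

```bash
#!/usr/bin/env bash
set -e
i=0

# ((i++)) is a post-increment: it evaluates to the old value (0), and an
# arithmetic result of 0 gives a non-zero exit status, so set -e would abort:
# ((i++))

# Safe: an assignment always exits with status 0.
i=$((i + 1))
echo "i=${i}"   # prints i=1 and the script continues
```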
CRITICAL FIX: All log functions now write to stderr (>&2) instead of stdout.
Problem:
- When using command substitution like $(extract_api_key service)
- Log messages from extract_api_key were captured along with the API key
- This caused the variable to contain logs + API key instead of just API key
- Subsequent commands failed silently with 'set -e'
Solution:
- Redirect all echo output in log functions to stderr
- Only actual return values (via echo without >&2) go to stdout
- This allows clean command substitution without interference from logs
Affects: log_info, log_success, log_warning, log_error, log_debug, log_section, log_operation
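A minimal demonstration of why stderr logging keeps command substitution clean (function bodies are illustrative):

```bash
log_info() { echo "[INFO] $*" >&2; }        # message goes to stderr

extract_api_key() {
    log_info "Extracting API key for $1"    # not captured by $(...)
    echo "abc123"                           # only the return value hits stdout
}

api_key=$(extract_api_key radarr)
echo "captured: '${api_key}'"               # -> captured: 'abc123'
```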
- setup-executor.sh was failing silently when ROOT_DIR was undefined
- ROOT_DIR is used in recyclarr configuration but was never initialized
- Now defaults to /mediacenter if not set via environment variable
- Allows setup-executor.sh to run both from setup.sh and standalone
- Prevents setup-executor.sh from creating a new log directory
- Only initializes SETUP_LOG_DIR if not already set
- Allows parent script (setup.sh) log directory to be inherited
- Fixes 'No such file or directory' error in setup-executor.sh
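The two default-if-unset patterns in shorthand (the SETUP_LOG_DIR default path shown is an assumption):

```bash
ROOT_DIR="${ROOT_DIR:-/mediacenter}"            # fall back when undefined

if [ -z "${SETUP_LOG_DIR:-}" ]; then            # keep the value inherited from setup.sh
    SETUP_LOG_DIR="/var/log/mediacenter-setup"  # assumed default location
    mkdir -p "$SETUP_LOG_DIR"
fi
```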
Fixes an error when writing the selected template list to .env.install.
Without quotes, a value like 'core mediaplayers/plex extras/overseerr' splits
at the first space when the file is sourced, and the remainder is treated as
a command instead of part of the string value.
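Illustration of the quoting fix (the variable name SELECTED_TEMPLATES is an assumption):

```bash
# Broken when .env.install is later sourced: everything after the first space
# is parsed as a command.
#   SELECTED_TEMPLATES=core mediaplayers/plex extras/overseerr
# Fixed: quote the value so it stays a single string.
echo "SELECTED_TEMPLATES=\"core mediaplayers/plex extras/overseerr\"" >> .env.install
```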
Created comprehensive template configuration system with dependency management:
**Template Configuration Files:**
- Created template.conf for all extras (8 total)
- Dependencies properly configured:
* Plex-dependent: overseerr, tautulli, plextraktsync
* Standalone: homarr, traefik, dashdot, pinchflat, watchtower
**Template Selector Script:**
- scripts/template-selector.sh: Interactive template selection
- Validates dependencies automatically
- Filters available options based on media server choice
- Shows proper descriptions from template.conf
- Returns space-separated list of templates
**Key Features:**
- Media server selection (Plex or None)
- Dynamic extras filtering based on dependencies
- If Plex not selected: Hides Overseerr, Tautulli, PlexTraktSync
- If Plex selected: Shows all compatible extras
- Summary confirmation before proceeding
**Testing:**
✓ With Plex: Shows 8 optional services
✓ Without Plex: Shows only 5 standalone services
✓ Dependency validation works correctly
✓ Output format correct for compose-generator
**Files Added:**
- scripts/template-selector.sh (executable)
- 8x templates/extras/*/template.conf
- 4x templates/extras/*/services.list
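A hedged sketch of the dependency filtering described above; the template.conf keys (NAME, REQUIRES) are assumptions about the format, not taken from the repository.

```bash
media_server="plex"    # or "none"
available=()

for conf in templates/extras/*/template.conf; do
    NAME=""; REQUIRES=""
    . "$conf"                       # template.conf provides NAME/REQUIRES (assumed keys)
    if [ "$REQUIRES" = "plex" ] && [ "$media_server" != "plex" ]; then
        continue                    # hide Overseerr/Tautulli/PlexTraktSync without Plex
    fi
    available+=("$NAME")
done

echo "Available extras: ${available[*]}"
```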