Reverts the incorrect change from Overseerr to Seerr. The project should
use Overseerr (sctx/overseerr) as the request management system.
Changes:
- Renamed docker/compose-services/seerr.yml to overseerr.yml
- Updated Docker image from ghcr.io/seerr-team/seerr to sctx/overseerr
- Changed all SEERR_* environment variables to OVERSEERR_*
- Updated container name from seerr to overseerr
- Updated system user from seerr to overseerr
- Updated config directory references (seerr-config → overseerr-config)
- Updated all documentation (README, INSTALLATION, CLAUDE.md, POST-INSTALL.md)
- Updated service dependencies in plex.yml
- Updated backup scripts to reference overseerr instead of seerr
Affected files: 14 modified, 1 deleted (seerr.yml), 1 added (overseerr.yml)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The debug logging was added temporarily to diagnose JSON syntax errors.
Issue is now fixed (DECYPHARR_CONTAINER_PORT was missing from .env.defaults).
Removing debug output to keep installation logs clean.
Problem:
- configure_arr_service was passing $api_key (the Radarr/Sonarr API key) in both API key positions
- It should pass $download_api_key (parameter 6) as the download client API key
- This caused the wrong API key to be used for Decypharr configuration
Solution:
- Changed line 38 in setup-services.sh to use $download_api_key instead of $api_key
- Now correctly passes the download client's API key to add_download_client
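A minimal sketch of the corrected call site (the parameter layout and the add_download_client argument order are illustrative, not the exact signatures):
```bash
configure_arr_service() {
    local service="$1" port="$2" api_key="$3"
    local root_folder="$4" download_host="$5" download_api_key="$6"

    # Before: the arr API key was passed again as the client key
    # After: pass the download client's own API key (parameter 6)
    add_download_client "$service" "$port" "$api_key" \
        "$download_host" "$download_api_key"
}
```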
Reverted complex curl-based checks back to original logic.
Original setup.sh ONLY checks Docker healthcheck:
- docker inspect -f '{{.State.Health.Status}}'
- Waits until status = 'healthy'
- No additional HTTP verification needed
Previous attempts with curl were over-engineered.
The Decypharr healthcheck is sufficient: it validates that the qBittorrent API is ready.
Matches original working behavior from main branch.
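A sketch of the healthcheck-only wait described above (timeout and poll interval are illustrative):
```bash
wait_for_service() {
    local container="$1" timeout="${2:-120}" elapsed=0
    while (( elapsed < timeout )); do
        local status
        status=$(docker inspect -f '{{.State.Health.Status}}' "$container" 2>/dev/null)
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        sleep 5
        (( elapsed += 5 ))
    done
    return 1
}
```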
Problem:
- docker run curlimages/curl hangs on first execution
- Image download happens during wait loop
- Can exceed timeout or hang indefinitely
Solution:
- Check if curlimages/curl already present
- Pre-pull image BEFORE entering wait loop
- Subsequent docker run commands are instant
- Prevents timeout issues during service waits
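A sketch of the pre-pull guard (image tag is the default, adjust as needed):
```bash
# Pull the curl image once, before the wait loop, so later
# 'docker run' calls start instantly instead of blocking on a download
if ! docker image inspect curlimages/curl >/dev/null 2>&1; then
    docker pull curlimages/curl
fi
```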
CRITICAL: Previous curl test ran on HOST, not in Docker network
Problem:
- wait_for_service's curl ran on the Ubuntu host
- Reported port 8283 accessible from host perspective
- But Radarr connects from INSIDE Docker network (mediacenter)
- Port may be accessible from host but NOT from Docker network
Solution:
- Use 'docker run --rm --network mediacenter curlimages/curl'
- This spawns ephemeral container in same network as services
- Tests connectivity exactly as Radarr/Sonarr will experience it
- Only reports ready when port accessible from Docker network perspective
This ensures wait_for_service validates the actual network path services use.
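A sketch of the network-scoped probe (the container hostname and URL are illustrative; only the network name and port come from the setup above):
```bash
# Probe the port from inside the mediacenter network - the same path
# Radarr/Sonarr use - rather than from the host
docker run --rm --network mediacenter curlimages/curl \
    -sf --connect-timeout 2 "http://decypharr:8283" >/dev/null
```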
CRITICAL FIX: Docker healthy != port accessible
Problem:
- wait_for_service only checked Docker healthcheck status
- Decypharr healthcheck passes before qBittorrent API port 8283 is ready
- Resulted in 'Connection refused' when trying to add download client
Solution:
- After container reports healthy, verify port with curl request
- curl -sf checks if HTTP port actually responds
- 2 second connect timeout to fail fast
- Only returns success when BOTH healthy AND port accessible
This ensures services are truly ready before attempting API configuration.
CRITICAL FIX: All log functions now write to stderr (>&2) instead of stdout.
Problem:
- When using command substitution like $(extract_api_key service)
- Log messages from extract_api_key were captured along with the API key
- This caused the variable to contain logs + API key instead of just API key
- Subsequent commands failed silently with 'set -e'
Solution:
- Redirect all echo output in log functions to stderr
- Only actual return values (via echo without >&2) go to stdout
- This allows clean command substitution without interference from logs
Affects: log_info, log_success, log_warning, log_error, log_debug, log_section, log_operation
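A minimal sketch of the pattern (function names from the list above, message formats illustrative):
```bash
log_info()  { echo "[INFO] $*"  >&2; }   # logs go to stderr
log_error() { echo "[ERROR] $*" >&2; }

get_value() {
    log_info "Fetching value"    # stderr - not captured by $(...)
    echo "the-actual-value"      # stdout - the only thing $(...) captures
}

value=$(get_value)               # value contains just "the-actual-value"
```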
- Prevents setup-executor.sh from creating a new log directory
- Only initializes SETUP_LOG_DIR if not already set
- Allows parent script (setup.sh) log directory to be inherited
- Fixes 'No such file or directory' error in setup-executor.sh
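A sketch of the guarded initialization (timestamp format is illustrative):
```bash
# Inherit the log directory exported by setup.sh when present;
# only create a new one when running standalone
if [[ -z "${SETUP_LOG_DIR:-}" ]]; then
    SETUP_LOG_DIR="/tmp/sailarr-install-$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$SETUP_LOG_DIR"
fi
```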
**Problem:**
JSON responses were still being printed to stdout when calling:
- add_arr_to_prowlarr
- add_root_folder
- add_download_client
- add_remote_path_mapping
- delete_quality_profile
This caused dozens of lines of JSON to clutter the installation output.
**Root Cause:**
api_call() does "echo $response" to return the response body.
When functions call api_call but don't capture the output, it prints
to stdout instead of being suppressed.
**Solution:**
Redirect stdout to /dev/null for all api_call invocations that:
- Only check exit code (if api_call...)
- Don't need the response body
Functions that DO need the response (like get_quality_profiles) keep
their output since they explicitly capture it.
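A sketch of the two cases (the api_call argument order and endpoints are illustrative):
```bash
# Only the exit code matters - discard the JSON body
if api_call POST "$port" "/api/v3/downloadclient" "$payload" "$api_key" >/dev/null; then
    log_success "Download client added"
else
    log_error "Failed to add download client"
fi

# The response body is needed here, so capture it instead of discarding it
profiles=$(api_call GET "$port" "/api/v3/qualityprofile" "" "$api_key")
```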
**Result:**
Clean installation output showing only success/error messages.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Problem:**
API calls were printing excessive debug information:
- Full JSON payloads (dozens of lines)
- DEBUG service port information
- Grep operation details
**Changes:**
1. Removed log_operation() call that printed full API URLs
2. Changed to log_trace() which only shows in verbose mode
3. Removed DEBUG logs showing ports and API key lengths
4. Removed GREP operation log from extract_api_key
**Result:**
Cleaner installation output. API operations now show only:
- "Adding radarr as application in Prowlarr"
- Success/error messages
Full details still available with verbose logging if needed.
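A sketch of the verbose gate on log_trace (the VERBOSE flag name is an assumption):
```bash
log_trace() {
    # Emit trace output only when verbose logging is enabled
    if [[ "${VERBOSE:-false}" != "true" ]]; then
        return 0
    fi
    echo "[TRACE] $*" >&2
}
```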
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Root Cause:**
tail -1 takes the last LINE, not the last output. When api_call returns
multi-line JSON, tail -1 only captures the last line ("]"), not the
entire JSON response.
**Why This Happened:**
1. api_call does: echo "$response" where response is multi-line JSON
2. api_call also calls log_* functions that output to stdout
3. Attempting to filter with | tail -1 only got the last line: "]"
4. jq failed to parse and returned empty
**Test Results:**
```
profiles=$(get_quality_profiles radarr 7878 "$API_KEY" | tail -1)
echo "$profiles"
# Output: ] <-- WRONG! Only last line of JSON
```
**Solution:**
Rewrite get_quality_profiles() to call curl DIRECTLY without going
through api_call() to avoid log pollution entirely:
```bash
get_quality_profiles() {
    # Direct curl call - no logging contamination
    local response=$(curl -s -w '\n%{http_code}' ...)
    local http_code=$(echo "$response" | tail -n1)
    local body=$(echo "$response" | sed '$d')
    if [[ "$http_code" =~ ^2 ]]; then
        echo "$body"  # Clean JSON, no logs
        return 0
    fi
}
```
Now the function returns ONLY the JSON body, no log messages, no
line-by-line issues. The JSON can be properly parsed by jq.
Also removed | tail -1 from remove_default_profiles since it's no
longer needed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Problem:**
get_quality_profiles() calls api_call() which outputs log messages
to stdout along with the JSON response:
- log_function_enter
- log_operation
- log_trace (multiple)
- log_function_exit
- echo \"\$response\" (the actual JSON)
When remove_default_profiles did:
local profiles=$(get_quality_profiles ...)
It captured ALL that output, not just the JSON. Then jq failed to
parse it and returned empty, so the function thought there were no
profiles to delete.
**Evidence from trace log:**
- HTTP 200 success
- Response length: 48569 bytes (has data!)
- But jq returned empty array
- Log: \"No quality profiles found\" (FALSE - they exist!)
**Solution:**
Add | tail -1 to capture only the last line (the JSON):
local profiles=$(get_quality_profiles ... | tail -1)
This is the same pattern used everywhere else for API key extraction
and other function returns that have logging mixed with return values.
**Result:**
Now jq gets clean JSON, extracts profile IDs correctly, and actually
deletes the default profiles as intended.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Problems:**
1. Used 'while read' in pipe → creates subshell → logs invisible
2. Redirected errors to /dev/null → failures silent
3. Always showed SUCCESS even if nothing deleted
4. Used 'return 0' on GET failure → continued with empty profiles
**Result:**
Default quality profiles were NEVER deleted, even though log said:
"✓ Default quality profiles removed from radarr"
Users saw both default profiles (Any, SD, HD, etc.) AND Recyclarr profiles.
**Fix:**
1. **Abort on GET failure** - Can't continue without profiles list
2. **Use array instead of while read** - Avoids subshell, logs visible:
```bash
local profile_ids=($(echo "$profiles" | jq -r '.[].id'))
for profile_id in "${profile_ids[@]}"; do
```
3. **Count successes and failures** - Track what actually happened
4. **Removed 2>/dev/null** - Show actual delete errors
5. **Report real status** - "Removed N profile(s)" or warnings
6. **Log each deletion** - log_debug for troubleshooting
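A consolidated sketch of the rewritten function covering points 1-6 above (the argument order of the helper calls is illustrative):
```bash
remove_default_profiles() {
    local service="$1" port="$2" api_key="$3"

    local profiles
    profiles=$(get_quality_profiles "$service" "$port" "$api_key") || {
        log_error "Could not fetch quality profiles from ${service}"
        exit 1   # abort: cannot continue without the profiles list
    }

    # Array instead of 'while read' - no subshell, logs stay visible
    local profile_ids=($(echo "$profiles" | jq -r '.[].id'))
    log_info "Found ${#profile_ids[@]} quality profiles to remove"

    local deleted=0 failed=0
    for profile_id in "${profile_ids[@]}"; do
        log_debug "Deleting profile ID ${profile_id} from ${service}"
        if delete_quality_profile "$service" "$port" "$api_key" "$profile_id" >/dev/null; then
            deleted=$((deleted + 1))
        else
            failed=$((failed + 1))
        fi
    done

    log_success "Removed ${deleted} quality profile(s) from ${service}"
    if (( failed > 0 )); then
        log_warning "${failed} profile(s) could not be deleted (may be assigned to content)"
    fi
}
```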
**Now shows:**
- "Found 6 quality profiles to remove"
- "Deleting profile ID 1 from radarr"
- "Removed 6 quality profile(s) from radarr"
- Or: "3 profile(s) could not be deleted (may be assigned to content)"
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Problems Fixed:**
1. **API keys contained ANSI color codes** (71 chars instead of 32)
- configure_arr_service() called extract_api_key() without | tail -1
- configure_prowlarr() same issue
- Result: API keys had escape sequences, causing JSON parsing errors
2. **Authentication messages were generic**
- "Configuring authentication..." didn't say which service
- "Authentication configured" didn't say which service
- User confused about what was actually configured
**Solutions:**
1. Added | tail -1 to ALL extract_api_key() calls:
- setup/lib/setup-services.sh: configure_arr_service()
- setup/lib/setup-services.sh: configure_prowlarr()
- These were missing the tail filter
2. Updated auth messages to be specific:
- "Configuring Radarr authentication..."
- "✓ Radarr authentication configured"
- Same for Sonarr
3. Changed configure_prowlarr return 1 to exit 1 for consistency
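A sketch of the corrected extraction from point 1 (the extract_api_key argument is illustrative):
```bash
# extract_api_key logs progress before printing the key, so keep only
# the last line of its output - the bare 32-character key
api_key=$(extract_api_key "$service" | tail -1)
```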
**Result:**
All API keys should now be exactly 32 characters with no escape codes,
and authentication messages clearly indicate which service is configured.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**MAJOR BUG:**
The installer was continuing execution after critical errors, leading to:
- Infinite loops waiting for services that will never respond
- Cascading failures as dependent operations try to use invalid state
- User confusion as errors scroll by but script keeps running
**Problems Fixed:**
1. **configure_arr_service()** - Now exits if:
- API key extraction fails (was: return 1, continued)
- Root folder creation fails (was: log error, continued)
- Download client addition fails (was: log error, continued)
- Remote path mapping is non-critical, logs warning only
2. **add_arr_to_prowlarr()** calls in setup.sh - Now exits if:
- Failed to add Radarr to Prowlarr (was: log error, continued)
- Failed to add Sonarr to Prowlarr (was: log error, continued)
3. **API key extraction** in setup.sh - Now exits if:
- Any of the three API keys (Radarr/Sonarr/Prowlarr) are empty
- Shows which specific keys are missing
- Provides docker logs command to troubleshoot
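A sketch of the fail-fast key check from point 3 (variable names are illustrative):
```bash
# Stop immediately if any key failed to extract - nothing downstream can work
if [[ -z "$radarr_api_key" || -z "$sonarr_api_key" || -z "$prowlarr_api_key" ]]; then
    log_error "Failed to extract one or more API keys:"
    [[ -z "$radarr_api_key" ]]   && log_error "  - Radarr API key is empty"
    [[ -z "$sonarr_api_key" ]]   && log_error "  - Sonarr API key is empty"
    [[ -z "$prowlarr_api_key" ]] && log_error "  - Prowlarr API key is empty"
    log_error "Troubleshoot with: docker logs <service>"
    exit 1
fi
```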
**Result:**
- Script now STOPS IMMEDIATELY when a critical operation fails
- Clear error messages explain what failed and how to debug
- Logs are preserved for troubleshooting
- No more infinite loops or cascading failures
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Root Cause:**
The curl command was using single quotes around the X-Api-Key header:
-H 'X-Api-Key: $api_key'
In bash, single quotes prevent variable expansion, so the literal string
"$api_key" was being sent instead of the actual API key value.
**Fix:**
Changed to double quotes to allow variable expansion:
-H "X-Api-Key: $api_key"
This explains the HTTP 401 errors when adding applications to Prowlarr
and potentially other API operations. The API was receiving an invalid
API key (literally the string "$api_key").
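A quick illustration of the quoting difference (the key value is made up):
```bash
api_key="abc123"
echo 'X-Api-Key: $api_key'   # -> X-Api-Key: $api_key  (single quotes: no expansion)
echo "X-Api-Key: $api_key"   # -> X-Api-Key: abc123    (double quotes: expanded)
```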
**Testing:**
This should fix:
- Adding Radarr/Sonarr applications to Prowlarr (HTTP 401)
- Any other API operations that were silently failing
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Investigating HTTP 401 errors when adding Radarr/Sonarr to Prowlarr.
Added debug logging to show:
- Service and port information
- API key lengths (not the keys themselves for security)
- Function entry tracing with sanitized parameters
This will help diagnose whether the issue is:
- Empty/corrupted API keys
- API key formatting problems
- Other authentication issues
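A sketch of the kind of debug line added (format is illustrative):
```bash
# Log lengths and ports only - never the key values themselves
log_debug "${service}: port=${port}, api_key length=${#api_key}"
```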
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Create logs in /tmp/sailarr-install-TIMESTAMP/ directory
- Add install.log: complete installation log with timestamps
- Add trace.log: detailed function tracing for debugging
- Add log_operation() for tracking file operations (COPY, MKDIR, DOWNLOAD, etc)
- Add log_debug() for debug-level messages
- Add function entry/exit tracing with parameters and return codes
- Log all API calls with HTTP status codes and response lengths
- Log file operations (copy, download, grep, etc) before execution
- Display log location at start and end of installation
- All logs include timestamps for precise debugging
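A sketch of the logging setup described above (timestamp and message formats are illustrative):
```bash
SETUP_LOG_DIR="/tmp/sailarr-install-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$SETUP_LOG_DIR"
INSTALL_LOG="$SETUP_LOG_DIR/install.log"
TRACE_LOG="$SETUP_LOG_DIR/trace.log"

log_operation() {
    # e.g. log_operation COPY "source -> destination"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [${1}] ${2}" >> "$INSTALL_LOG"
}

log_debug() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [DEBUG] $*" >> "$INSTALL_LOG"
}

echo "Installation logs: $SETUP_LOG_DIR"
```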