Compare commits

...

23 Commits

Author SHA1 Message Date
github-actions[bot]
d6055f102b chore(main): release 4.29.0 (#1849)
🤖 I have created a release *beep* *boop*
---


## [4.29.0](https://github.com/unraid/api/compare/v4.28.2...v4.29.0)
(2025-12-19)


### Features

* replace docker overview table with web component (7.3+)
([#1764](https://github.com/unraid/api/issues/1764))
([277ac42](277ac42046))


### Bug Fixes

* handle race condition between guid loading and license check
([#1847](https://github.com/unraid/api/issues/1847))
([8b155d1](8b155d1f1c))
* resolve issue with "Continue" button when updating
([#1852](https://github.com/unraid/api/issues/1852))
([d099e75](d099e7521d))
* update myservers config references to connect config references
([#1810](https://github.com/unraid/api/issues/1810))
([e1e3ea7](e1e3ea7eb6))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-19 11:53:48 -05:00
Eli Bosley
d099e7521d fix: resolve issue with "Continue" button when updating (#1852)
- Replaced BrandLoading with BrandButton in UpdateOs component for
better user interaction.
- Updated test cases to reflect changes in rendering logic, ensuring the
account button is displayed when no reboot is pending.
- Added functionality to navigate to account update when the button is
clicked.
- Introduced WEBGUI_REDIRECT URL for handling update installations in
the store logic.
2025-12-19 11:44:19 -05:00
Pujit Mehrotra
bb9b539732 chore: fix local plugin builds & docs (#1851)
Raised by [MitchellThompkins](https://github.com/MitchellThompkins) in
#1848

- Documents how to use Docker to build a local Connect plugin
- Local Plugin flow will now build workspace packages before proceeding
with plugin infra + build
- Removes recommendation to run `pnpm build:watch` from root, as this causes
race conditions and build cache issues.
- Makes `pnpm dev` from root parallel, preventing servers from blocking
each other.

## Summary by CodeRabbit

* **Documentation**
* Updated development workflow documentation to emphasize Docker-based
plugin builds
* Restructured development modes into three workflows: local Docker
builds, direct deployment, and development servers
  * Updated build and deployment instructions

* **Chores**
  * Modified dev script for parallel execution
  * Refactored build scripts with improved dependency handling

2025-12-18 16:33:37 -05:00
Pujit Mehrotra
0e44e73bf7 chore(web): mv predev call to prebuild step (#1850)
Fixes #1848

## Background

The `build:dev` script is used for the `unraid:deploy` workflow, and it
implicitly triggered the `predev` script to build the `unraid-ui`
package as needed.

`web` builds depend on `unraid-ui`. In the past, `unraid-ui` was built
during `pnpm install` via a `prepare` step in its `package.json`.
However, this approach didn't ensure that `web` builds correctly; stale
`unraid-ui` builds could cause false positives.

So, instead of doing that, we call `predev` from `prebuild`, ensuring
that both local builds and the `unraid:deploy` workflow lazily get the
correct build of `unraid-ui`.
2025-12-18 11:50:17 -05:00
Pujit Mehrotra
277ac42046 feat: replace docker overview table with web component (7.3+) (#1764)
## Summary

Introduces a new Vue-based Docker container management interface
replacing the legacy webgui table.

### Container Management
- Start, stop, pause, resume, and remove containers via GraphQL
mutations
- Bulk actions for managing multiple containers at once
- Container update detection with one-click updates
- Real-time container statistics (CPU, memory, I/O)

### Organization & Navigation
- Folder-based container organization with drag-and-drop support
- Accessible reordering via keyboard controls
- Customizable column visibility with persistent preferences
- Column resizing and reordering
- Filtering and search across container properties

### Auto-start Configuration
- Dedicated autostart view with delay configuration
- Drag-and-drop reordering of start/stop sequences

### Logs & Console
- Integrated log viewer with filtering and download
- Persistent console sessions with shell selection
- Slideover panel for quick access

### Networking
- Port conflict detection and alerts
- Tailscale integration for container networking status
- LAN IP and port information display

### Additional Features
- Orphaned container detection and cleanup
- Template mapping management
- Critical notifications system
- WebUI visit links with Tailscale support

<sub>PR Summary by Claude Opus 4.5</sub>
2025-12-18 11:11:05 -05:00
Pujit Mehrotra
e1e3ea7eb6 fix: update myservers config references to connect config references (#1810)
`myservers.cfg` no longer gets written to or read (except for migration
purposes), so it'd be better to read from the new values instead of
continuing to use the old ones @elibosley @Squidly271.

Unless I'm missing something! See #1805.

## Summary by CodeRabbit

* **New Features**
* Switches to a centralized remote-access configuration with a legacy
fallback and richer client-side handling.
* Optional GraphQL submission path for applying remote settings when
available.

* **Bug Fixes**
* Normalized boolean and port handling to prevent incorrect values
reaching the UI.
* Improved error handling and UI state restoration during save/apply
flows.

2025-12-18 10:34:06 -05:00
Pujit Mehrotra
8b155d1f1c fix: handle race condition between guid loading and license check (#1847)
On errors, a `console.error` message tagged `[ReplaceCheck.check]` should
appear in the browser console.

## Summary by CodeRabbit

* **New Features**
* Added retry capability for license eligibility checks with a
contextual "Retry" button that appears in error states.

* **Bug Fixes**
* Fixed license status initialization to correctly default to ready
state.
* Enhanced error messaging with specific messages for different failure
scenarios (missing credentials, access denied, server errors).
* Improved status display handling to prevent potential runtime errors.

* **Localization**
  * Added "Retry" text translation.

* **Tests**
  * Updated and added tests for reset functionality and error handling.

2025-12-18 08:51:01 -05:00
github-actions[bot]
d13a1f6174 chore(main): release 4.28.2 (#1845)
🤖 I have created a release *beep* *boop*
---


## [4.28.2](https://github.com/unraid/api/compare/v4.28.1...v4.28.2)
(2025-12-16)


### Bug Fixes

* **api:** timeout on startup on 7.0 and 6.12
([#1844](https://github.com/unraid/api/issues/1844))
([e243ae8](e243ae836e))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-16 11:47:31 -05:00
Eli Bosley
e243ae836e fix(api): timeout on startup on 7.0 and 6.12 (#1844)
Updated the total startup budget, bootstrap reserved time, and maximum
operation timeout values to enhance API startup reliability. The total
startup budget is now set to 30 seconds, with 20 seconds reserved for
bootstrap and a maximum operation timeout of 5 seconds.
2025-12-16 11:37:42 -05:00
github-actions[bot]
01a63fd86b chore(main): release 4.28.1 (#1843)
🤖 I have created a release *beep* *boop*
---


## [4.28.1](https://github.com/unraid/api/compare/v4.28.0...v4.28.1)
(2025-12-16)


### Bug Fixes

* empty commit to release as 4.28.1
([df78608](df78608457))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-16 11:02:11 -05:00
Eli Bosley
df78608457 fix: empty commit to release as 4.28.1 2025-12-16 10:35:12 -05:00
github-actions[bot]
ca3bee4ad5 chore(main): release 4.28.0 (#1807)
🤖 I have created a release *beep* *boop*
---


## [4.28.0](https://github.com/unraid/api/compare/v4.27.2...v4.28.0)
(2025-12-15)


### Features

* when cancelling OS upgrade, delete any plugin files that were d…
([#1823](https://github.com/unraid/api/issues/1823))
([74df938](74df938e45))


### Bug Fixes

* change keyfile watcher to poll instead of inotify on FAT32
([#1820](https://github.com/unraid/api/issues/1820))
([23a7120](23a71207dd))
* enhance dark mode support in theme handling
([#1808](https://github.com/unraid/api/issues/1808))
([d6e2939](d6e29395c8))
* improve API startup reliability with timeout budget tracking
([#1824](https://github.com/unraid/api/issues/1824))
([51f025b](51f025b105))
* PHP Warnings in Management Settings
([#1805](https://github.com/unraid/api/issues/1805))
([832e9d0](832e9d04f2))
* update @unraid/shared-callbacks to version 3.0.0
([#1831](https://github.com/unraid/api/issues/1831))
([73b2ce3](73b2ce360c))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-15 16:35:33 -05:00
Jandrop
024ae69343 fix(ups): convert estimatedRuntime from minutes to seconds (#1822)
## Summary

Fixes the `estimatedRuntime` field in the UPS GraphQL query to return
values in **seconds** as documented, instead of **minutes**.

## Problem

The `TIMELEFT` value from `apcupsd` is returned in minutes (e.g., `6.0`
for 6 minutes), but the GraphQL schema documentation states:

> Estimated runtime remaining on battery power. **Unit: seconds**.
Example: 3600 means 1 hour of runtime remaining

Currently, the API returns `6` (minutes) instead of `360` (seconds).

## Solution

Convert the `TIMELEFT` value from minutes to seconds by multiplying by
60:

```typescript
// Before
estimatedRuntime: parseInt(upsData.TIMELEFT || '3600', 10),

// After
estimatedRuntime: Math.round(parseFloat(upsData.TIMELEFT || '60') * 60),
```

## Testing

1. Query `upsDevices` before the fix → `estimatedRuntime: 6` (incorrect
- minutes)
2. Query `upsDevices` after the fix → `estimatedRuntime: 360` (correct -
seconds)

Tested on Unraid server with APC UPS connected via apcupsd.

## Related Issues

Fixes #1821

## Summary by CodeRabbit

* **Bug Fixes**
* Corrected UPS battery runtime calculation to interpret provider
TIMELEFT as minutes, convert to seconds, and use a sensible default when
missing—improves displayed battery runtime accuracy.
* **Tests**
* Updated UPS test fixtures to match the minute-based TIMELEFT format
used by the UPS provider.

2025-12-15 16:28:33 -05:00
Pujit Mehrotra
99ce88bfdc fix(plg): explicitly stop an existing api before installation (#1841)
Necessary for "clean" upgrades to api orchestration (eg changing how the
api is daemonized).

Prior to this, `rc.unraid-api start` would also restart a running api,
which sufficed for application updates, but is insufficient for
orchestration updates.

## Summary by CodeRabbit

* **Bug Fixes**
* Improved update reliability by ensuring services are properly stopped
before system modifications occur.

2025-12-15 16:27:51 -05:00
Eli Bosley
73b2ce360c fix: update @unraid/shared-callbacks to version 3.0.0 (#1831)
…on and pnpm-lock.yaml

## Summary by CodeRabbit

* **New Features**
* Added a standalone redirect page that shows "Redirecting..." and
navigates automatically.

* **Improvements**
* Redirect preserves hash callback data, validates targets, and logs the
computed redirect.
  * Purchase callback origin changed to a different account host.
* Date/time formatting now tolerates missing or empty server formats
with safe fallbacks.
  * Redirect page included in backup/restore.

* **Tests**
  * Added tests covering date/time formatting fallbacks.

* **Chores**
  * Dependency @unraid/shared-callbacks upgraded.
  * Removed multiple demo/debug pages and related test UIs.

2025-12-15 16:20:18 -05:00
Eli Bosley
d6e29395c8 fix: enhance dark mode support in theme handling (#1808)
- Added PHP logic to determine if the current theme is dark and set a
CSS variable accordingly.
- Introduced a new function to retrieve the dark mode state from the CSS
variable in JavaScript.
- Updated the theme store to initialize dark mode based on the CSS
variable, ensuring consistent theme application across the application.

This improves user experience by ensuring the correct theme is applied
based on user preferences.
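
As a minimal sketch of the JavaScript side, assuming the PHP logic exposes the
dark flag as a CSS custom property (the variable name `--dark-mode` below is
illustrative, not necessarily the one used):

```typescript
// Read the dark-mode state written into a CSS custom property on the document
// root, so the theme store can initialize consistently with the server side.
export function isDarkModeFromCss(): boolean {
    const value = getComputedStyle(document.documentElement)
        .getPropertyValue('--dark-mode')
        .trim();
    return value === 'true' || value === '1';
}

// e.g. during theme store initialization:
// themeStore.setDarkMode(isDarkModeFromCss());
```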

## Summary by CodeRabbit

* **New Features**
* Server-persisted theme mutation and client action to fetch/apply
themes

* **Improvements**
* Safer theme parsing and multi-source initialization (CSS var, storage,
cookie, server)
* Robust dark-mode detection and propagation across document, modals and
teleport containers
* Responsive banner/header gradient handling with tunable CSS variables
and fallbacks

* **Tests**
* Expanded tests for theme flows, dark-mode detection, banner gradients
and manifest robustness

2025-12-15 12:52:47 -05:00
Eli Bosley
317e0fa307 Revert "feat!(api): swap daemonizer to nodemon instead of PM2" (#1836)
Reverts unraid/api#1798
2025-12-12 18:32:35 -05:00
renovate[bot]
331c913329 chore(deps): update actions/checkout action to v6 (#1832)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [actions/checkout](https://redirect.github.com/actions/checkout) | action | major | `v5` -> `v6` |

---

### Release Notes

<details>
<summary>actions/checkout (actions/checkout)</summary>

### [`v6`](https://redirect.github.com/actions/checkout/compare/v5...v6)

[Compare
Source](https://redirect.github.com/actions/checkout/compare/v5...v6)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/unraid/api).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-12 15:26:20 -05:00
renovate[bot]
abf3461348 chore(deps): update actions/setup-node action to v6 (#1833)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [actions/setup-node](https://redirect.github.com/actions/setup-node) | action | major | `v5` -> `v6` |
| [actions/setup-node](https://redirect.github.com/actions/setup-node) | action | major | `v4` -> `v6` |

---

### Release Notes

<details>
<summary>actions/setup-node (actions/setup-node)</summary>

###
[`v6`](https://redirect.github.com/actions/setup-node/compare/v5...v6)

[Compare
Source](https://redirect.github.com/actions/setup-node/compare/v5...v6)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/unraid/api).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-12 15:26:09 -05:00
renovate[bot]
079a09ec90 chore(deps): update github artifact actions (major) (#1834)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [actions/download-artifact](https://redirect.github.com/actions/download-artifact) | action | major | `v5` -> `v7` |
| [actions/upload-artifact](https://redirect.github.com/actions/upload-artifact) | action | major | `v4` -> `v6` |

---

### Release Notes

<details>
<summary>actions/download-artifact (actions/download-artifact)</summary>

###
[`v7`](https://redirect.github.com/actions/download-artifact/compare/v6...v7)

[Compare
Source](https://redirect.github.com/actions/download-artifact/compare/v6...v7)

###
[`v6`](https://redirect.github.com/actions/download-artifact/compare/v5...v6)

[Compare
Source](https://redirect.github.com/actions/download-artifact/compare/v5...v6)

</details>

<details>
<summary>actions/upload-artifact (actions/upload-artifact)</summary>

###
[`v6`](https://redirect.github.com/actions/upload-artifact/compare/v5...v6)

[Compare
Source](https://redirect.github.com/actions/upload-artifact/compare/v5...v6)

###
[`v5`](https://redirect.github.com/actions/upload-artifact/compare/v4...v5)

[Compare
Source](https://redirect.github.com/actions/upload-artifact/compare/v4...v5)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config
help](https://redirect.github.com/renovatebot/renovate/discussions) if
that's undesired.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/unraid/api).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-12 15:25:51 -05:00
renovate[bot]
e4223ab5a1 chore(deps): update github/codeql-action action to v4 (#1835)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github/codeql-action](https://redirect.github.com/github/codeql-action) | action | major | `v3` -> `v4` |

---

### Release Notes

<details>
<summary>github/codeql-action (github/codeql-action)</summary>

###
[`v4`](https://redirect.github.com/github/codeql-action/compare/v3...v4)

[Compare
Source](https://redirect.github.com/github/codeql-action/compare/v3...v4)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/unraid/api).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-12 15:25:41 -05:00
Eli Bosley
6f54206a4a feat!(api): swap daemonizer to nodemon instead of PM2 (#1798)
## Summary
- ensure the API release build copies nodemon.json into the packaged
artifacts so nodemon-managed deployments have the config available

## Testing
- pnpm --filter @unraid/api lint:fix

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_e_691e1f4bde3483238726478f6fb2d52a)

## Summary by CodeRabbit

* **New Features**
  * Switch to Nodemon for process management and updated CLI to use it.
  * Added boot-time diagnostic logging and direct log-file writing.
  * New per-package CPU telemetry and topology exposure.

* **Bug Fixes**
  * More reliable process health detection and lifecycle handling.
  * Improved log handling and startup robustness.

* **Chores**
  * Removed PM2-related components and tests; migrated to Nodemon.
  * Consolidated pub/sub channel usage and bumped internal version.

---------

Co-authored-by: Pujit Mehrotra <pujit@lime-technology.com>
2025-12-11 15:42:05 -05:00
Eli Bosley
e35bcc72f1 chore: Handle build number generation on forks (#1829)
## Summary
- restrict build number generation to the main repository and allow it to
fail without stopping the workflow
- add a fallback build number derived from the GitHub run number when
the tag-based number cannot be created

## Testing
- not run (workflow-only change)


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_e_693894fb808c8323a3ee51e47fe5d772)

## Summary by CodeRabbit

* **Chores**
* Improved build pipeline reliability with enhanced fallback mechanisms
to ensure consistent artifact generation.

2025-12-09 17:34:45 -05:00
249 changed files with 21040 additions and 3519 deletions

View File

@@ -32,13 +32,13 @@ jobs:
name: Build API
runs-on: ubuntu-latest
outputs:
build_number: ${{ steps.buildnumber.outputs.build_number }}
build_number: ${{ steps.buildnumber.outputs.build_number || steps.fallback_buildnumber.outputs.build_number }}
defaults:
run:
working-directory: api
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || github.ref }}
fetch-depth: 0
@@ -49,7 +49,7 @@ jobs:
run_install: false
- name: Install Node
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'
@@ -81,18 +81,25 @@ jobs:
- name: Generate build number
id: buildnumber
if: github.repository == 'unraid/api'
continue-on-error: true
uses: onyxmueller/build-tag-number@v1
with:
token: ${{ secrets.UNRAID_BOT_GITHUB_ADMIN_TOKEN || github.token }}
prefix: ${{ inputs.version_override || steps.vars.outputs.PACKAGE_LOCK_VERSION }}
- name: Generate fallback build number
id: fallback_buildnumber
if: steps.buildnumber.outcome != 'success'
run: echo "build_number=${GITHUB_RUN_NUMBER}" >> $GITHUB_OUTPUT
- name: Build
run: |
pnpm run build:release
tar -czf deploy/unraid-api.tgz -C deploy/pack/ .
- name: Upload tgz to Github artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: unraid-api
path: ${{ github.workspace }}/api/deploy/unraid-api.tgz
@@ -105,7 +112,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || github.ref }}
@@ -115,7 +122,7 @@ jobs:
run_install: false
- name: Install Node
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'
@@ -138,7 +145,7 @@ jobs:
run: pnpm run build:wc
- name: Upload Artifact to Github
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: unraid-wc-ui
path: unraid-ui/dist-wc/
@@ -151,7 +158,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.ref || github.ref }}
@@ -169,7 +176,7 @@ jobs:
run_install: false
- name: Install Node
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'
@@ -194,7 +201,7 @@ jobs:
run: pnpm run build
- name: Upload build to Github artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: unraid-wc-rich
path: web/dist

View File

@@ -56,7 +56,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.ref }}
fetch-depth: 0
@@ -67,7 +67,7 @@ jobs:
run_install: false
- name: Install Node
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'
@@ -101,19 +101,19 @@ jobs:
pnpm install --frozen-lockfile --filter @unraid/connect-plugin
- name: Download Unraid UI Components
uses: actions/download-artifact@v5
uses: actions/download-artifact@v7
with:
name: unraid-wc-ui
path: ${{ github.workspace }}/plugin/source/dynamix.unraid.net/usr/local/emhttp/plugins/dynamix.my.servers/unraid-components/uui
merge-multiple: true
- name: Download Unraid Web Components
uses: actions/download-artifact@v5
uses: actions/download-artifact@v7
with:
pattern: unraid-wc-rich
path: ${{ github.workspace }}/plugin/source/dynamix.unraid.net/usr/local/emhttp/plugins/dynamix.my.servers/unraid-components/standalone
merge-multiple: true
- name: Download Unraid API
uses: actions/download-artifact@v5
uses: actions/download-artifact@v7
with:
name: unraid-api
path: ${{ github.workspace }}/plugin/api/
@@ -142,7 +142,7 @@ jobs:
fi
- name: Upload to GHA
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: unraid-plugin-${{ github.run_id }}-${{ inputs.RELEASE_TAG }}
path: plugin/deploy/

View File

@@ -24,17 +24,17 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
config-file: ./.github/codeql/codeql-config.yml
queries: +security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@v3
uses: github/codeql-action/autobuild@v4
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4

View File

@@ -20,7 +20,7 @@ jobs:
name: Deploy Storybook
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
- uses: pnpm/action-setup@v4
name: Install pnpm
@@ -28,7 +28,7 @@ jobs:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'

View File

@@ -31,14 +31,14 @@ jobs:
release_notes: ${{ steps.generate_notes.outputs.release_notes }}
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.target_commitish || github.ref }}
fetch-depth: 0
token: ${{ secrets.UNRAID_BOT_GITHUB_ADMIN_TOKEN }}
- name: Setup Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: '20'

View File

@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -33,7 +33,7 @@ jobs:
run_install: false
- name: Install Node
uses: actions/setup-node@v5
uses: actions/setup-node@v6
with:
node-version-file: ".nvmrc"
cache: 'pnpm'
@@ -177,7 +177,7 @@ jobs:
pull-requests: write
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0

View File

@@ -31,14 +31,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.target_commitish || github.ref }}
fetch-depth: 0
token: ${{ secrets.UNRAID_BOT_GITHUB_ADMIN_TOKEN }}
- name: Setup Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: '20'
@@ -167,7 +167,7 @@ jobs:
release_notes: ${{ needs.generate-release-notes.outputs.release_notes }}
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
ref: ${{ inputs.target_commitish || github.ref }}
fetch-depth: 0

View File

@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Install Apollo Rover CLI
run: |

View File

@@ -28,7 +28,7 @@ jobs:
with:
latest: true
prerelease: false
- uses: actions/setup-node@v5
- uses: actions/setup-node@v6
with:
node-version: 22.19.0
- run: |

View File

@@ -1 +1 @@
{".":"4.27.2"}
{".":"4.29.0"}

View File

@@ -63,15 +63,6 @@
*/
.unapi {
--color-alpha: #1c1b1b;
--color-beta: #f2f2f2;
--color-gamma: #999999;
--color-gamma-opaque: rgba(153, 153, 153, 0.5);
--color-customgradient-start: rgba(242, 242, 242, 0);
--color-customgradient-end: rgba(242, 242, 242, 0.85);
--shadow-beta: 0 25px 50px -12px rgba(242, 242, 242, 0.15);
--ring-offset-shadow: 0 0 var(--color-beta);
--ring-shadow: 0 0 var(--color-beta);
}
.unapi button:not(:disabled),

View File

@@ -11,6 +11,11 @@
--color-beta: #1c1b1b;
--color-gamma: #ffffff;
--color-gamma-opaque: rgba(255, 255, 255, 0.3);
--color-header-gradient-start: color-mix(in srgb, var(--header-background-color) 0%, transparent);
--color-header-gradient-end: color-mix(in srgb, var(--header-background-color) 100%, transparent);
--shadow-beta: 0 25px 50px -12px color-mix(in srgb, var(--color-beta) 15%, transparent);
--ring-offset-shadow: 0 0 var(--color-beta);
--ring-shadow: 0 0 var(--color-beta);
}
/* Black Theme */
@@ -21,15 +26,26 @@
--color-beta: #f2f2f2;
--color-gamma: #1c1b1b;
--color-gamma-opaque: rgba(28, 27, 27, 0.3);
--color-header-gradient-start: color-mix(in srgb, var(--header-background-color) 0%, transparent);
--color-header-gradient-end: color-mix(in srgb, var(--header-background-color) 100%, transparent);
--shadow-beta: 0 25px 50px -12px color-mix(in srgb, var(--color-beta) 15%, transparent);
--ring-offset-shadow: 0 0 var(--color-beta);
--ring-shadow: 0 0 var(--color-beta);
}
/* Gray Theme */
.Theme--gray {
.Theme--gray,
.Theme--gray.dark {
--color-border: #383735;
--color-alpha: #ff8c2f;
--color-beta: #383735;
--color-gamma: #ffffff;
--color-gamma-opaque: rgba(255, 255, 255, 0.3);
--color-header-gradient-start: color-mix(in srgb, var(--header-background-color) 0%, transparent);
--color-header-gradient-end: color-mix(in srgb, var(--header-background-color) 100%, transparent);
--shadow-beta: 0 25px 50px -12px color-mix(in srgb, var(--color-beta) 15%, transparent);
--ring-offset-shadow: 0 0 var(--color-beta);
--ring-shadow: 0 0 var(--color-beta);
}
/* Azure Theme */
@@ -39,6 +55,11 @@
--color-beta: #e7f2f8;
--color-gamma: #336699;
--color-gamma-opaque: rgba(51, 102, 153, 0.3);
--color-header-gradient-start: color-mix(in srgb, var(--header-background-color) 0%, transparent);
--color-header-gradient-end: color-mix(in srgb, var(--header-background-color) 100%, transparent);
--shadow-beta: 0 25px 50px -12px color-mix(in srgb, var(--color-beta) 15%, transparent);
--ring-offset-shadow: 0 0 var(--color-beta);
--ring-shadow: 0 0 var(--color-beta);
}
/* Dark Mode Overrides */

View File

@@ -19,6 +19,7 @@ PATHS_LOGS_FILE=./dev/log/graphql-api.log
PATHS_CONNECT_STATUS_FILE_PATH=./dev/connectStatus.json # Connect plugin status file
PATHS_OIDC_JSON=./dev/configs/oidc.local.json
PATHS_LOCAL_SESSION_FILE=./dev/local-session
PATHS_DOCKER_TEMPLATES=./dev/docker-templates
ENVIRONMENT="development"
NODE_ENV="development"
PORT="3001"

View File

@@ -3,3 +3,4 @@ NODE_ENV="production"
PORT="/var/run/unraid-api.sock"
MOTHERSHIP_GRAPHQL_LINK="https://mothership.unraid.net/ws"
PATHS_CONFIG_MODULES="/boot/config/plugins/dynamix.my.servers/configs"
ENABLE_NEXT_DOCKER_RELEASE=true

View File

@@ -3,3 +3,4 @@ NODE_ENV="production"
PORT="/var/run/unraid-api.sock"
MOTHERSHIP_GRAPHQL_LINK="https://staging.mothership.unraid.net/ws"
PATHS_CONFIG_MODULES="/boot/config/plugins/dynamix.my.servers/configs"
ENABLE_NEXT_DOCKER_RELEASE=true

View File

@@ -8,7 +8,7 @@ export default tseslint.config(
eslint.configs.recommended,
...tseslint.configs.recommended,
{
ignores: ['src/graphql/generated/client/**/*', 'src/**/**/dummy-process.js'],
ignores: ['src/graphql/generated/client/**/*', 'src/**/**/dummy-process.js', 'dist/**/*'],
},
{
plugins: {

api/.gitignore
View File

@@ -83,6 +83,8 @@ deploy/*
!**/*.login.*
# Local Development Artifacts
# local api configs - don't need project-wide tracking
dev/connectStatus.json
dev/configs/*
@@ -96,3 +98,7 @@ dev/configs/oidc.local.json
# local api keys
dev/keys/*
# mock docker templates
dev/docker-templates
# ie unraid notifications
dev/notifications

View File

@@ -5,3 +5,4 @@ src/unraid-api/unraid-file-modifier/modifications/__fixtures__/downloaded/*
# Generated Types
src/graphql/generated/client/*.ts
dist/

View File

@@ -1,5 +1,51 @@
# Changelog
## [4.29.0](https://github.com/unraid/api/compare/v4.28.2...v4.29.0) (2025-12-19)
### Features
* replace docker overview table with web component (7.3+) ([#1764](https://github.com/unraid/api/issues/1764)) ([277ac42](https://github.com/unraid/api/commit/277ac420464379e7ee6739c4530271caf7717503))
### Bug Fixes
* handle race condition between guid loading and license check ([#1847](https://github.com/unraid/api/issues/1847)) ([8b155d1](https://github.com/unraid/api/commit/8b155d1f1c99bb19efbc9614e000d852e9f0c12d))
* resolve issue with "Continue" button when updating ([#1852](https://github.com/unraid/api/issues/1852)) ([d099e75](https://github.com/unraid/api/commit/d099e7521d2062bb9cf84f340e46b169dd2492c5))
* update myservers config references to connect config references ([#1810](https://github.com/unraid/api/issues/1810)) ([e1e3ea7](https://github.com/unraid/api/commit/e1e3ea7eb68cc6840f67a8aec937fd3740e75b28))
## [4.28.2](https://github.com/unraid/api/compare/v4.28.1...v4.28.2) (2025-12-16)
### Bug Fixes
* **api:** timeout on startup on 7.0 and 6.12 ([#1844](https://github.com/unraid/api/issues/1844)) ([e243ae8](https://github.com/unraid/api/commit/e243ae836ec1a7fde37dceeb106cc693b20ec82b))
## [4.28.1](https://github.com/unraid/api/compare/v4.28.0...v4.28.1) (2025-12-16)
### Bug Fixes
* empty commit to release as 4.28.1 ([df78608](https://github.com/unraid/api/commit/df786084572eefb82e086c15939b50cc08b9db10))
## [4.28.0](https://github.com/unraid/api/compare/v4.27.2...v4.28.0) (2025-12-15)
### Features
* when cancelling OS upgrade, delete any plugin files that were d… ([#1823](https://github.com/unraid/api/issues/1823)) ([74df938](https://github.com/unraid/api/commit/74df938e450def2ee3e2864d4b928f53a68e9eb8))
### Bug Fixes
* change keyfile watcher to poll instead of inotify on FAT32 ([#1820](https://github.com/unraid/api/issues/1820)) ([23a7120](https://github.com/unraid/api/commit/23a71207ddde221867562b722f4e65a5fc4dd744))
* enhance dark mode support in theme handling ([#1808](https://github.com/unraid/api/issues/1808)) ([d6e2939](https://github.com/unraid/api/commit/d6e29395c8a8b0215d4f5945775de7fa358d06ec))
* improve API startup reliability with timeout budget tracking ([#1824](https://github.com/unraid/api/issues/1824)) ([51f025b](https://github.com/unraid/api/commit/51f025b105487b178048afaabf46b260c4a7f9c1))
* PHP Warnings in Management Settings ([#1805](https://github.com/unraid/api/issues/1805)) ([832e9d0](https://github.com/unraid/api/commit/832e9d04f207d3ec612c98500a2ffc86659264e5))
* **plg:** explicitly stop an existing api before installation ([#1841](https://github.com/unraid/api/issues/1841)) ([99ce88b](https://github.com/unraid/api/commit/99ce88bfdc0a7f020c42f2fe0c6a0f4e32ac8f5a))
* update @unraid/shared-callbacks to version 3.0.0 ([#1831](https://github.com/unraid/api/issues/1831)) ([73b2ce3](https://github.com/unraid/api/commit/73b2ce360c66cd9bedc138a5f8306af04b6bde77))
* **ups:** convert estimatedRuntime from minutes to seconds ([#1822](https://github.com/unraid/api/issues/1822)) ([024ae69](https://github.com/unraid/api/commit/024ae69343bad5a3cbc19f80e357082e9b2efc1e))
## [4.27.2](https://github.com/unraid/api/compare/v4.27.1...v4.27.2) (2025-11-21)

View File

@@ -75,6 +75,16 @@ If you found this file you're likely a developer. If you'd like to know more abo
- Run `pnpm --filter @unraid/api i18n:extract` to scan the Nest.js source for translation helper usages and update `src/i18n/en.json` with any new keys. The extractor keeps existing translations intact and appends new keys with their English source text.
## Developer Documentation
For detailed information about specific features:
- [API Plugins](docs/developer/api-plugins.md) - Working with API plugins and workspace packages
- [Docker Feature](docs/developer/docker.md) - Container management, GraphQL API, and WebGUI integration
- [Feature Flags](docs/developer/feature-flags.md) - Conditionally enabling functionality
- [Repository Organization](docs/developer/repo-organization.md) - Codebase structure
- [Development Workflows](docs/developer/workflows.md) - Development processes
## License
Copyright Lime Technology Inc. All rights reserved.

View File

@@ -1,5 +1,5 @@
{
"version": "4.27.2",
"version": "4.28.2",
"extraOrigins": [],
"sandbox": true,
"ssoSubIds": [],

View File

@@ -0,0 +1,555 @@
# Docker Feature
The Docker feature provides complete container management for Unraid through a GraphQL API, including lifecycle operations, real-time monitoring, update detection, and organizational tools.
## Table of Contents
- [Overview](#overview)
- [Architecture](#architecture)
- [Module Structure](#module-structure)
- [Data Flow](#data-flow)
- [Core Services](#core-services)
- [DockerService](#dockerservice)
- [DockerNetworkService](#dockernetworkservice)
- [DockerPortService](#dockerportservice)
- [DockerLogService](#dockerlogservice)
- [DockerStatsService](#dockerstatsservice)
- [DockerAutostartService](#dockerautostartservice)
- [DockerConfigService](#dockerconfigservice)
- [DockerManifestService](#dockermanifestservice)
- [DockerPhpService](#dockerphpservice)
- [DockerTailscaleService](#dockertailscaleservice)
- [DockerTemplateScannerService](#dockertemplatescannerservice)
- [DockerOrganizerService](#dockerorganizerservice)
- [GraphQL API](#graphql-api)
- [Queries](#queries)
- [Mutations](#mutations)
- [Subscriptions](#subscriptions)
- [Data Models](#data-models)
- [DockerContainer](#dockercontainer)
- [ContainerState](#containerstate)
- [ContainerPort](#containerport)
- [DockerPortConflicts](#dockerportconflicts)
- [Caching Strategy](#caching-strategy)
- [WebGUI Integration](#webgui-integration)
- [File Modification](#file-modification)
- [PHP Integration](#php-integration)
- [Permissions](#permissions)
- [Configuration Files](#configuration-files)
- [Development](#development)
- [Adding a New Docker Service](#adding-a-new-docker-service)
- [Testing](#testing)
- [Feature Flag Testing](#feature-flag-testing)
## Overview
**Location:** `src/unraid-api/graph/resolvers/docker/`
**Feature Flag:** Many next-generation features are gated behind `ENABLE_NEXT_DOCKER_RELEASE`. See [Feature Flags](./feature-flags.md) for details on enabling.
**Key Capabilities:**
- Container lifecycle management (start, stop, pause, update, remove)
- Real-time container stats streaming
- Network and port conflict detection
- Container log retrieval
- Automatic update detection via digest comparison
- Tailscale container integration
- Container organization with folders and views
- Template-based metadata resolution
## Architecture
### Module Structure
The Docker module (`docker.module.ts`) serves as the entry point and exports:
- **13 services** for various Docker operations
- **3 resolvers** for GraphQL query/mutation/subscription handling
**Dependencies:**
- `JobModule` - Background job scheduling
- `NotificationsModule` - User notifications
- `ServicesModule` - Shared service utilities
### Data Flow
```text
Docker Daemon (Unix Socket)
        ↓
dockerode library
        ↓
DockerService (transform & cache)
        ↓
GraphQL Resolvers
        ↓
Client Applications
```
The API communicates with the Docker daemon through the `dockerode` library via Unix socket. Container data is transformed from raw Docker API format to GraphQL types, enriched with Unraid-specific metadata (templates, autostart config), and cached for performance.
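A minimal sketch of the first hop in this flow, assuming the default Unix socket path (illustrative only, not the service's actual code):
```typescript
import Docker from 'dockerode';

// Talk to the Docker daemon over its Unix socket. DockerService layers
// transformation, enrichment, and caching on top of listings like this.
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

async function listRawContainers() {
    // Raw Docker API format, before conversion to GraphQL types.
    return docker.listContainers({ all: true });
}
```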
## Core Services
### DockerService
**File:** `docker.service.ts`
Central orchestrator for all container operations.
**Key Methods:**
- `getContainers(skipCache?, includeSize?)` - List containers with caching
- `start(id)`, `stop(id)`, `pause(id)`, `unpause(id)` - Lifecycle operations
- `updateContainer(id)`, `updateContainers(ids)`, `updateAllContainers()` - Image updates
- `removeContainer(id, withImage?)` - Remove container and optionally its image
**Caching:**
- Cache TTL: 60 seconds (60000ms)
- Cache keys: `docker_containers`, `docker_containers_with_size`
- Invalidated automatically on mutations
### DockerNetworkService
**File:** `docker-network.service.ts`
Lists Docker networks with metadata including driver, scope, IPAM settings, and connected containers.
**Caching:** 60 seconds
### DockerPortService
**File:** `docker-port.service.ts`
Detects port conflicts between containers and with the host.
**Features:**
- Deduplicates port mappings from Docker API
- Identifies container-to-container conflicts
- Detects host-level port collisions
- Separates TCP and UDP conflicts
- Calculates LAN-accessible IP:port combinations
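A simplified sketch of the container-to-container check described above (the types and names are illustrative, not the service's actual code): group port claims by public port and protocol, then flag any group claimed by more than one container.
```typescript
interface PortClaim {
    containerName: string;
    publicPort: number;
    type: 'tcp' | 'udp';
}

// Returns groups of claims where two or more containers share a public port.
export function findContainerPortConflicts(claims: PortClaim[]): PortClaim[][] {
    const byKey = new Map<string, PortClaim[]>();
    for (const claim of claims) {
        const key = `${claim.publicPort}/${claim.type}`;
        byKey.set(key, [...(byKey.get(key) ?? []), claim]);
    }
    return [...byKey.values()].filter((group) => group.length > 1);
}
```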
### DockerLogService
**File:** `docker-log.service.ts`
Retrieves container logs with configurable options.
**Parameters:**
- `tail` - Number of lines (default: 200, max: 2000)
- `since` - Timestamp filter for log entries
**Additional Features:**
- Calculates container log file sizes
- Supports timestamp-based filtering
### DockerStatsService
**File:** `docker-stats.service.ts`
Provides real-time container statistics via GraphQL subscription.
**Metrics:**
- CPU percentage
- Memory usage and limit
- Network I/O (received/transmitted bytes)
- Block I/O (read/written bytes)
**Implementation:**
- Spawns `docker stats` process with streaming output
- Publishes to `PUBSUB_CHANNEL.DOCKER_STATS`
- Auto-starts on first subscriber, stops when last disconnects
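An illustrative sketch of this pattern (not the service's actual implementation), assuming the CLI's `--format '{{json .}}'` output and a generic publish callback:
```typescript
import { spawn } from 'node:child_process';

// Stream `docker stats` and forward each parsed sample to a publish callback
// (e.g. a pub/sub channel). Returns a stop function for when the last
// subscriber disconnects.
export function streamDockerStats(publish: (sample: unknown) => void): () => void {
    const proc = spawn('docker', ['stats', '--format', '{{json .}}']);
    proc.stdout.on('data', (chunk: Buffer) => {
        for (const line of chunk.toString().split('\n')) {
            if (!line.trim()) continue;
            try {
                publish(JSON.parse(line));
            } catch {
                // Ignore lines split across chunks in this simplified sketch.
            }
        }
    });
    return () => proc.kill();
}
```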
### DockerAutostartService
**File:** `docker-autostart.service.ts`
Manages container auto-start configuration.
**Features:**
- Parses auto-start file format (name + wait time per line)
- Maintains auto-start order and wait times
- Persists configuration changes
- Tracks container primary names
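A minimal parser sketch for this format, assuming whitespace-separated fields and an optional wait value (illustrative, not the service's actual code):
```typescript
interface AutostartEntry {
    name: string;
    wait?: number; // seconds to wait after starting this container
}

export function parseAutostartFile(contents: string): AutostartEntry[] {
    return contents
        .split('\n')
        .map((line) => line.trim())
        .filter(Boolean)
        .map((line) => {
            const [name, wait] = line.split(/\s+/);
            return wait !== undefined ? { name, wait: Number(wait) } : { name };
        });
}
```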
### DockerConfigService
**File:** `docker-config.service.ts`
Persistent configuration management using `ConfigFilePersister`.
**Configuration Options:**
- `templateMappings` - Container name to template file path mappings
- `skipTemplatePaths` - Containers excluded from template scanning
- `updateCheckCronSchedule` - Cron expression for digest refresh (default: daily at 6am)
### DockerManifestService
**File:** `docker-manifest.service.ts`
Detects available container image updates.
**Implementation:**
- Compares local and remote image SHA256 digests
- Reads cached status from `/var/lib/docker/unraid-update-status.json`
- Triggers refresh via PHP integration
### DockerPhpService
**File:** `docker-php.service.ts`
Integration with legacy Unraid PHP Docker scripts.
**PHP Scripts Used:**
- `DockerUpdate.php` - Refresh container digests
- `DockerContainers.php` - Get update statuses
**Update Statuses:**
- `UP_TO_DATE` - Container is current
- `UPDATE_AVAILABLE` - New image available
- `REBUILD_READY` - Rebuild required
- `UNKNOWN` - Status could not be determined
### DockerTailscaleService
**File:** `docker-tailscale.service.ts`
Detects and monitors Tailscale-enabled containers.
**Detection Methods:**
- Container labels indicating Tailscale
- Tailscale socket mount points
**Status Information:**
- Tailscale version and backend state
- Hostname and DNS name
- Exit node status
- Key expiry dates
**Caching:**
- Status cache: 30 seconds
- DERP map and versions: 24 hours
### DockerTemplateScannerService
**File:** `docker-template-scanner.service.ts`
Maps containers to their template files for metadata resolution.
**Bootstrap Process:**
1. Runs 5 seconds after app startup
2. Scans XML templates from configured paths
3. Parses container/image names from XML
4. Matches against running containers
5. Stores mappings in `docker.config.json`
**Template Metadata Resolved:**
- `projectUrl`, `registryUrl`, `supportUrl`
- `iconUrl`, `webUiUrl`, `shell`
- Template port mappings
**Orphaned Containers:**
Containers without matching templates are marked as "orphaned" in the API response.
### DockerOrganizerService
**File:** `organizer/docker-organizer.service.ts`
Container organization system for UI views.
**Features:**
- Hierarchical folder structure
- Multiple views with different layouts
- Position-based organization
- View-specific preferences (sorting, filtering)
## GraphQL API
### Queries
```graphql
type Query {
docker: Docker!
}
type Docker {
containers(skipCache: Boolean): [DockerContainer!]!
container(id: PrefixedID!): DockerContainer # Feature-flagged
networks(skipCache: Boolean): [DockerNetwork!]!
portConflicts(skipCache: Boolean): DockerPortConflicts!
logs(id: PrefixedID!, since: Int, tail: Int): DockerContainerLogs!
organizer(skipCache: Boolean): DockerOrganizer! # Feature-flagged
containerUpdateStatuses: [ContainerUpdateStatus!]! # Feature-flagged
}
```
### Mutations
**Container Lifecycle:**
```graphql
type Mutation {
start(id: PrefixedID!): DockerContainer!
stop(id: PrefixedID!): DockerContainer!
pause(id: PrefixedID!): DockerContainer!
unpause(id: PrefixedID!): DockerContainer!
removeContainer(id: PrefixedID!, withImage: Boolean): Boolean!
}
```
**Container Updates:**
```graphql
type Mutation {
updateContainer(id: PrefixedID!): DockerContainer!
updateContainers(ids: [PrefixedID!]!): [DockerContainer!]!
updateAllContainers: [DockerContainer!]!
refreshDockerDigests: Boolean!
}
```
**Configuration:**
```graphql
type Mutation {
updateAutostartConfiguration(
entries: [AutostartEntry!]!
persistUserPreferences: Boolean
): Boolean!
syncDockerTemplatePaths: Boolean!
resetDockerTemplateMappings: Boolean!
}
```
**Organizer (Feature-flagged):**
```graphql
type Mutation {
createDockerFolder(name: String!, parentId: ID, childrenIds: [ID!]): DockerFolder!
createDockerFolderWithItems(
name: String!
parentId: ID
sourceEntryIds: [ID!]
position: Int
): DockerFolder!
setDockerFolderChildren(folderId: ID!, childrenIds: [ID!]!): DockerFolder!
deleteDockerEntries(entryIds: [ID!]!): Boolean!
moveDockerEntriesToFolder(sourceEntryIds: [ID!]!, destinationFolderId: ID!): Boolean!
moveDockerItemsToPosition(
sourceEntryIds: [ID!]!
destinationFolderId: ID!
position: Int!
): Boolean!
renameDockerFolder(folderId: ID!, newName: String!): DockerFolder!
updateDockerViewPreferences(viewId: ID!, prefs: ViewPreferencesInput!): Boolean!
}
```
### Subscriptions
```graphql
type Subscription {
dockerContainerStats: DockerContainerStats!
}
```
Real-time container statistics stream. Automatically starts when first client subscribes and stops when last client disconnects.
## Data Models
### DockerContainer
Primary container representation with 24+ fields:
```typescript
{
id: PrefixedID
names: [String!]!
image: String!
imageId: String!
state: ContainerState!
status: String!
created: Float!
// Networking
ports: [ContainerPort!]!
lanIpPorts: [ContainerPort!]!
hostConfig: ContainerHostConfig
networkSettings: DockerNetworkSettings
// Storage
sizeRootFs: Float
sizeRw: Float
sizeLog: Float
mounts: [ContainerMount!]!
// Metadata
labels: JSON
// Auto-start
autoStart: Boolean!
autoStartOrder: Int
autoStartWait: Int
// Template Integration
templatePath: String
isOrphaned: Boolean!
projectUrl: String
registryUrl: String
supportUrl: String
iconUrl: String
webUiUrl: String
shell: String
templatePorts: [ContainerPort!]
// Tailscale
tailscaleEnabled: Boolean!
tailscaleStatus: TailscaleStatus
// Updates
isUpdateAvailable: Boolean
isRebuildReady: Boolean
}
```
### ContainerState
```typescript
enum ContainerState {
RUNNING
PAUSED
EXITED
}
```
### ContainerPort
```typescript
{
ip: String
privatePort: Int!
publicPort: Int
type: String! // "tcp" or "udp"
}
```
### DockerPortConflicts
```typescript
{
containerConflicts: [DockerContainerPortConflict!]!
lanConflicts: [DockerLanPortConflict!]!
}
```
## Caching Strategy
The Docker feature uses `cache-manager` v7 for performance optimization.
**Important:** cache-manager v7 expects TTL values in **milliseconds**, not seconds.
| Cache Key | TTL | Invalidation |
|-----------|-----|--------------|
| `docker_containers` | 60s | On any container mutation |
| `docker_containers_with_size` | 60s | On any container mutation |
| `docker_networks` | 60s | On network changes |
| Tailscale status | 30s | Automatic |
| Tailscale DERP/versions | 24h | Automatic |
**Cache Invalidation Triggers:**
- `start()`, `stop()`, `pause()`, `unpause()`
- `updateContainer()`, `updateContainers()`, `updateAllContainers()`
- `removeContainer()`
- `updateAutostartConfiguration()`
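A minimal get-or-fetch sketch illustrating the millisecond TTLs (the `CacheLike` interface is defined here for the example; in the API this would be the injected cache-manager instance):
```typescript
interface CacheLike {
    get<T>(key: string): Promise<T | undefined>;
    set<T>(key: string, value: T, ttlMs?: number): Promise<void>;
}

const CONTAINER_CACHE_TTL_MS = 60_000; // 60 seconds, expressed in milliseconds

export async function getCachedContainers<T>(
    cache: CacheLike,
    key: 'docker_containers' | 'docker_containers_with_size',
    fetch: () => Promise<T>
): Promise<T> {
    const cached = await cache.get<T>(key);
    if (cached !== undefined) return cached;
    const fresh = await fetch();
    await cache.set(key, fresh, CONTAINER_CACHE_TTL_MS); // TTL in ms, not seconds
    return fresh;
}
```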
## WebGUI Integration
### File Modification
**File:** `unraid-file-modifier/modifications/docker-containers-page.modification.ts`
**Target:** `/usr/local/emhttp/plugins/dynamix.docker.manager/DockerContainers.page`
When `ENABLE_NEXT_DOCKER_RELEASE` is enabled and Unraid version is 7.3.0+, the modification:
1. Replaces the legacy Docker containers page
2. Injects the Vue web component: `<unraid-docker-container-overview>`
3. Retains the `Nchan="docker_load"` page attribute (an emhttp/WebGUI feature for real-time updates, not controlled by the API)
### PHP Integration
The API integrates with legacy Unraid PHP scripts for certain operations:
- **Digest refresh:** Calls `DockerUpdate.php` to refresh container image digests
- **Update status:** Reads from `DockerContainers.php` output
## Permissions
All Docker operations are protected with permission checks:
| Operation | Resource | Action |
|-----------|----------|--------|
| Read containers/networks | `Resource.DOCKER` | `AuthAction.READ_ANY` |
| Start/stop/pause/update | `Resource.DOCKER` | `AuthAction.UPDATE_ANY` |
| Remove containers | `Resource.DOCKER` | `AuthAction.DELETE_ANY` |
## Configuration Files
| File | Purpose |
|------|---------|
| `docker.config.json` | Template mappings, skip paths, cron schedule |
| `docker.organizer.json` | Container organization tree and views |
| `/var/lib/docker/unraid-update-status.json` | Cached container update statuses |
## Development
### Adding a New Docker Service
1. Create service file in `src/unraid-api/graph/resolvers/docker/`
2. Add to `docker.module.ts` providers and exports
3. Inject into resolvers as needed
4. Add GraphQL types to `docker.model.ts` if needed
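A minimal skeleton for step 1 (the class and method names are placeholders); register it in `docker.module.ts` (step 2) and inject it into a resolver (step 3):
```typescript
import { Injectable } from '@nestjs/common';

@Injectable()
export class DockerExampleService {
    // Real services typically call dockerode or Unraid tooling here.
    async getExampleData(): Promise<string[]> {
        return [];
    }
}
```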
### Testing
```bash
# Run Docker-related tests
pnpm --filter ./api test -- src/unraid-api/graph/resolvers/docker/
# Run specific test file
pnpm --filter ./api test -- src/unraid-api/graph/resolvers/docker/docker.service.spec.ts
```
### Feature Flag Testing
To test next-generation Docker features locally:
```bash
ENABLE_NEXT_DOCKER_RELEASE=true unraid-api start
```
Or add to `.env`:
```env
ENABLE_NEXT_DOCKER_RELEASE=true
```

View File

@@ -62,15 +62,18 @@ To build all packages in the monorepo:
pnpm build
```
### Watch Mode Building
### Plugin Building (Docker Required)
For continuous building during development:
The plugin build requires Docker. This command automatically builds all dependencies (API, web) before starting Docker:
```bash
pnpm build:watch
cd plugin
pnpm run docker:build-and-run
# Then inside the container:
pnpm build
```
This is useful when you want to see your changes reflected without manually rebuilding. This will also allow you to install a local plugin to test your changes.
This serves the plugin at `http://YOUR_IP:5858/` for installation on your Unraid server.
### Package-Specific Building

View File

@@ -7,7 +7,7 @@
"cwd": "/usr/local/unraid-api",
"exec_mode": "fork",
"wait_ready": true,
"listen_timeout": 15000,
"listen_timeout": 30000,
"max_restarts": 10,
"min_uptime": 10000,
"watch": false,

View File

@@ -862,6 +862,38 @@ type DockerMutations {
"""Stop a container"""
stop(id: PrefixedID!): DockerContainer!
"""Pause (Suspend) a container"""
pause(id: PrefixedID!): DockerContainer!
"""Unpause (Resume) a container"""
unpause(id: PrefixedID!): DockerContainer!
"""Remove a container"""
removeContainer(id: PrefixedID!, withImage: Boolean): Boolean!
"""Update auto-start configuration for Docker containers"""
updateAutostartConfiguration(entries: [DockerAutostartEntryInput!]!, persistUserPreferences: Boolean): Boolean!
"""Update a container to the latest image"""
updateContainer(id: PrefixedID!): DockerContainer!
"""Update multiple containers to the latest images"""
updateContainers(ids: [PrefixedID!]!): [DockerContainer!]!
"""Update all containers that have available updates"""
updateAllContainers: [DockerContainer!]!
}
input DockerAutostartEntryInput {
"""Docker container identifier"""
id: PrefixedID!
"""Whether the container should auto-start"""
autoStart: Boolean!
"""Number of seconds to wait after starting the container"""
wait: Int
}
type VmMutations {
@@ -944,6 +976,23 @@ input UpdateApiKeyInput {
permissions: [AddPermissionInput!]
}
"""Customization related mutations"""
type CustomizationMutations {
"""Update the UI theme (writes dynamix.cfg)"""
setTheme(
"""Theme to apply"""
theme: ThemeName!
): Theme!
}
"""The theme name"""
enum ThemeName {
azure
black
gray
white
}
"""
Parity check related mutations, WIP, response types and functionality will change
"""
@@ -1042,14 +1091,6 @@ type Theme {
headerSecondaryTextColor: String
}
"""The theme name"""
enum ThemeName {
azure
black
gray
white
}
type ExplicitStatusItem {
name: String!
updateStatus: UpdateStatus!
@@ -1080,6 +1121,29 @@ enum ContainerPortType {
UDP
}
type DockerPortConflictContainer {
id: PrefixedID!
name: String!
}
type DockerContainerPortConflict {
privatePort: Port!
type: ContainerPortType!
containers: [DockerPortConflictContainer!]!
}
type DockerLanPortConflict {
lanIpPort: String!
publicPort: Port
type: ContainerPortType!
containers: [DockerPortConflictContainer!]!
}
type DockerPortConflicts {
containerPorts: [DockerContainerPortConflict!]!
lanPorts: [DockerLanPortConflict!]!
}
type ContainerHostConfig {
networkMode: String!
}
@@ -1093,8 +1157,17 @@ type DockerContainer implements Node {
created: Int!
ports: [ContainerPort!]!
"""List of LAN-accessible host:port values"""
lanIpPorts: [String!]
"""Total size of all files in the container (in bytes)"""
sizeRootFs: BigInt
"""Size of writable layer (in bytes)"""
sizeRw: BigInt
"""Size of container logs (in bytes)"""
sizeLog: BigInt
labels: JSON
state: ContainerState!
status: String!
@@ -1102,12 +1175,50 @@ type DockerContainer implements Node {
networkSettings: JSON
mounts: [JSON!]
autoStart: Boolean!
"""Zero-based order in the auto-start list"""
autoStartOrder: Int
"""Wait time in seconds applied after start"""
autoStartWait: Int
templatePath: String
"""Project/Product homepage URL"""
projectUrl: String
"""Registry/Docker Hub URL"""
registryUrl: String
"""Support page/thread URL"""
supportUrl: String
"""Icon URL"""
iconUrl: String
"""Resolved WebUI URL from template"""
webUiUrl: String
"""Shell to use for console access (from template)"""
shell: String
"""Port mappings from template (used when container is not running)"""
templatePorts: [ContainerPort!]
"""Whether the container is orphaned (no template found)"""
isOrphaned: Boolean!
isUpdateAvailable: Boolean
isRebuildReady: Boolean
"""Whether Tailscale is enabled for this container"""
tailscaleEnabled: Boolean!
"""Tailscale status for this container (fetched via docker exec)"""
tailscaleStatus(forceRefresh: Boolean = false): TailscaleStatus
}
enum ContainerState {
RUNNING
PAUSED
EXITED
}
@@ -1129,49 +1240,213 @@ type DockerNetwork implements Node {
labels: JSON!
}
type DockerContainerLogLine {
timestamp: DateTime!
message: String!
}
type DockerContainerLogs {
containerId: PrefixedID!
lines: [DockerContainerLogLine!]!
"""
Cursor that can be passed back through the since argument to continue streaming logs.
"""
cursor: DateTime
}
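Illustrative only (not in this diff): a sketch of the cursor-based log polling pattern described by the `cursor` field above, assuming `Docker` is reachable via a root `docker` query field and the same assumed endpoint/auth conventions as the earlier example.

```ts
// Minimal polling sketch, not from this PR. Endpoint path and auth header are assumptions.
const CONTAINER_LOGS = /* GraphQL */ `
  query ContainerLogs($id: PrefixedID!, $since: DateTime, $tail: Int) {
    docker {
      logs(id: $id, since: $since, tail: $tail) {
        cursor
        lines { timestamp message }
      }
    }
  }
`;

type LogPage = { cursor: string | null; lines: { timestamp: string; message: string }[] };

async function followLogs(baseUrl: string, apiKey: string, id: string): Promise<void> {
  let since: string | null = null;
  for (;;) {
    const res = await fetch(`${baseUrl}/graphql`, {
      method: 'POST',
      headers: { 'content-type': 'application/json', 'x-api-key': apiKey },
      body: JSON.stringify({ query: CONTAINER_LOGS, variables: { id, since, tail: 200 } }),
    });
    const { data } = await res.json();
    const page: LogPage = data.docker.logs;
    page.lines.forEach((line) => console.log(line.timestamp, line.message));
    // Feed the returned cursor back through `since` so the next request only
    // returns lines newer than the ones already printed.
    since = page.cursor ?? since;
    await new Promise((resolve) => setTimeout(resolve, 2_000));
  }
}
```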
type DockerContainerStats {
id: PrefixedID!
"""CPU Usage Percentage"""
cpuPercent: Float!
"""Memory Usage String (e.g. 100MB / 1GB)"""
memUsage: String!
"""Memory Usage Percentage"""
memPercent: Float!
"""Network I/O String (e.g. 100MB / 1GB)"""
netIO: String!
"""Block I/O String (e.g. 100MB / 1GB)"""
blockIO: String!
}
"""Tailscale exit node connection status"""
type TailscaleExitNodeStatus {
"""Whether the exit node is online"""
online: Boolean!
"""Tailscale IPs of the exit node"""
tailscaleIps: [String!]
}
"""Tailscale status for a Docker container"""
type TailscaleStatus {
"""Whether Tailscale is online in the container"""
online: Boolean!
"""Current Tailscale version"""
version: String
"""Latest available Tailscale version"""
latestVersion: String
"""Whether a Tailscale update is available"""
updateAvailable: Boolean!
"""Configured Tailscale hostname"""
hostname: String
"""Actual Tailscale DNS name"""
dnsName: String
"""DERP relay code"""
relay: String
"""DERP relay region name"""
relayName: String
"""Tailscale IPv4 and IPv6 addresses"""
tailscaleIps: [String!]
"""Advertised subnet routes"""
primaryRoutes: [String!]
"""Whether this container is an exit node"""
isExitNode: Boolean!
"""Status of the connected exit node (if using one)"""
exitNodeStatus: TailscaleExitNodeStatus
"""Tailscale Serve/Funnel WebUI URL"""
webUiUrl: String
"""Tailscale key expiry date"""
keyExpiry: DateTime
"""Days until key expires"""
keyExpiryDays: Int
"""Whether the Tailscale key has expired"""
keyExpired: Boolean!
"""Tailscale backend state (Running, NeedsLogin, Stopped, etc.)"""
backendState: String
"""Authentication URL if Tailscale needs login"""
authUrl: String
}
type Docker implements Node {
id: PrefixedID!
containers(skipCache: Boolean! = false): [DockerContainer!]!
networks(skipCache: Boolean! = false): [DockerNetwork!]!
organizer: ResolvedOrganizerV1!
portConflicts(skipCache: Boolean! = false): DockerPortConflicts!
"""
Access container logs. Requires specifying a target container id through resolver arguments.
"""
logs(id: PrefixedID!, since: DateTime, tail: Int): DockerContainerLogs!
container(id: PrefixedID!): DockerContainer
organizer(skipCache: Boolean! = false): ResolvedOrganizerV1!
containerUpdateStatuses: [ExplicitStatusItem!]!
}
type DockerTemplateSyncResult {
scanned: Int!
matched: Int!
skipped: Int!
errors: [String!]!
}
type ResolvedOrganizerView {
id: String!
name: String!
root: ResolvedOrganizerEntry!
rootId: String!
flatEntries: [FlatOrganizerEntry!]!
prefs: JSON
}
union ResolvedOrganizerEntry = ResolvedOrganizerFolder | OrganizerContainerResource | OrganizerResource
type ResolvedOrganizerFolder {
id: String!
type: String!
name: String!
children: [ResolvedOrganizerEntry!]!
}
type OrganizerContainerResource {
id: String!
type: String!
name: String!
meta: DockerContainer
}
type OrganizerResource {
id: String!
type: String!
name: String!
meta: JSON
}
type ResolvedOrganizerV1 {
version: Float!
views: [ResolvedOrganizerView!]!
}
type FlatOrganizerEntry {
id: String!
type: String!
name: String!
parentId: String
depth: Float!
position: Float!
path: [String!]!
hasChildren: Boolean!
childrenIds: [String!]!
meta: DockerContainer
}
type NotificationCounts {
info: Int!
warning: Int!
alert: Int!
total: Int!
}
type NotificationOverview {
unread: NotificationCounts!
archive: NotificationCounts!
}
type Notification implements Node {
id: PrefixedID!
"""Also known as 'event'"""
title: String!
subject: String!
description: String!
importance: NotificationImportance!
link: String
type: NotificationType!
"""ISO Timestamp for when the notification occurred"""
timestamp: String
formattedTimestamp: String
}
enum NotificationImportance {
ALERT
INFO
WARNING
}
enum NotificationType {
UNREAD
ARCHIVE
}
type Notifications implements Node {
id: PrefixedID!
"""A cached overview of the notifications in the system & their severity."""
overview: NotificationOverview!
list(filter: NotificationFilter!): [Notification!]!
"""
Deduplicated list of unread warning and alert notifications, sorted latest first.
"""
warningsAndAlerts: [Notification!]!
}
input NotificationFilter {
importance: NotificationImportance
type: NotificationType!
offset: Int!
limit: Int!
}
type FlashBackupStatus {
"""Status message indicating the outcome of the backup initiation."""
status: String!
@@ -1772,60 +2047,6 @@ type Metrics implements Node {
memory: MemoryUtilization
}
type NotificationCounts {
info: Int!
warning: Int!
alert: Int!
total: Int!
}
type NotificationOverview {
unread: NotificationCounts!
archive: NotificationCounts!
}
type Notification implements Node {
id: PrefixedID!
"""Also known as 'event'"""
title: String!
subject: String!
description: String!
importance: NotificationImportance!
link: String
type: NotificationType!
"""ISO Timestamp for when the notification occurred"""
timestamp: String
formattedTimestamp: String
}
enum NotificationImportance {
ALERT
INFO
WARNING
}
enum NotificationType {
UNREAD
ARCHIVE
}
type Notifications implements Node {
id: PrefixedID!
"""A cached overview of the notifications in the system & their severity."""
overview: NotificationOverview!
list(filter: NotificationFilter!): [Notification!]!
}
input NotificationFilter {
importance: NotificationImportance
type: NotificationType!
offset: Int!
limit: Int!
}
type Owner {
username: String!
url: String!
@@ -2435,6 +2656,11 @@ type Mutation {
"""Marks a notification as archived."""
archiveNotification(id: PrefixedID!): Notification!
archiveNotifications(ids: [PrefixedID!]!): NotificationOverview!
"""
Creates a notification if an equivalent unread notification does not already exist.
"""
notifyIfUnique(input: NotificationData!): Notification
archiveAll(importance: NotificationImportance): NotificationOverview!
"""Marks a notification as unread."""
@@ -2449,11 +2675,22 @@ type Mutation {
vm: VmMutations!
parityCheck: ParityCheckMutations!
apiKey: ApiKeyMutations!
customization: CustomizationMutations!
rclone: RCloneMutations!
createDockerFolder(name: String!, parentId: String, childrenIds: [String!]): ResolvedOrganizerV1!
setDockerFolderChildren(folderId: String, childrenIds: [String!]!): ResolvedOrganizerV1!
deleteDockerEntries(entryIds: [String!]!): ResolvedOrganizerV1!
moveDockerEntriesToFolder(sourceEntryIds: [String!]!, destinationFolderId: String!): ResolvedOrganizerV1!
moveDockerItemsToPosition(sourceEntryIds: [String!]!, destinationFolderId: String!, position: Float!): ResolvedOrganizerV1!
renameDockerFolder(folderId: String!, newName: String!): ResolvedOrganizerV1!
createDockerFolderWithItems(name: String!, parentId: String, sourceEntryIds: [String!], position: Float): ResolvedOrganizerV1!
updateDockerViewPreferences(viewId: String = "default", prefs: JSON!): ResolvedOrganizerV1!
syncDockerTemplatePaths: DockerTemplateSyncResult!
"""
Reset Docker template mappings to defaults. Use this to recover from corrupted state.
"""
resetDockerTemplateMappings: Boolean!
refreshDockerDigests: Boolean!
"""Initiates a flash drive backup using a configured remote."""
@@ -2655,10 +2892,12 @@ input AccessUrlInput {
type Subscription {
notificationAdded: Notification!
notificationsOverview: NotificationOverview!
notificationsWarningsAndAlerts: [Notification!]!
ownerSubscription: Owner!
serversSubscription: Server!
parityHistorySubscription: ParityCheck!
arraySubscription: UnraidArray!
dockerContainerStats: DockerContainerStats!
logFile(path: String!): LogFileContent!
systemMetricsCpu: CpuUtilization!
systemMetricsCpuTelemetry: CpuPackages!

View File

@@ -12,8 +12,13 @@ default:
@deploy remote:
./scripts/deploy-dev.sh {{remote}}
# watches typescript files and restarts dev server on changes
@watch:
watchexec -e ts -r -- pnpm dev
alias b := build
alias d := deploy
alias w := watch
sync-env server:
rsync -avz --progress --stats -e ssh .env* root@{{server}}:/usr/local/unraid-api

View File

@@ -1,6 +1,6 @@
{
"name": "@unraid/api",
"version": "4.27.2",
"version": "4.29.0",
"main": "src/cli/index.ts",
"type": "module",
"corepack": {
@@ -104,6 +104,7 @@
"escape-html": "1.0.3",
"execa": "9.6.0",
"exit-hook": "4.0.0",
"fast-xml-parser": "^5.3.0",
"fastify": "5.5.0",
"filenamify": "7.0.0",
"fs-extra": "11.3.1",

View File

@@ -7,7 +7,7 @@ import { exit } from 'process';
import type { PackageJson } from 'type-fest';
import { $, cd } from 'zx';
import { getDeploymentVersion } from './get-deployment-version.js';
import { getDeploymentVersion } from '@app/../scripts/get-deployment-version.js';
type ApiPackageJson = PackageJson & {
version: string;
@@ -83,6 +83,10 @@ try {
if (parsedPackageJson.dependencies?.[dep]) {
delete parsedPackageJson.dependencies[dep];
}
// Also strip from peerDependencies (npm doesn't understand workspace: protocol)
if (parsedPackageJson.peerDependencies?.[dep]) {
delete parsedPackageJson.peerDependencies[dep];
}
});
}

View File

@@ -6,6 +6,7 @@ exports[`Returns paths 1`] = `
"unraid-api-base",
"unraid-data",
"docker-autostart",
"docker-userprefs",
"docker-socket",
"rclone-socket",
"parity-checks",

View File

@@ -11,6 +11,7 @@ test('Returns paths', async () => {
'unraid-api-base': '/usr/local/unraid-api/',
'unraid-data': expect.stringContaining('api/dev/data'),
'docker-autostart': '/var/lib/docker/unraid-autostart',
'docker-userprefs': '/boot/config/plugins/dockerMan/userprefs.cfg',
'docker-socket': '/var/run/docker.sock',
'parity-checks': expect.stringContaining('api/dev/states/parity-checks.log'),
htpasswd: '/etc/nginx/htpasswd',

View File

@@ -0,0 +1,234 @@
import { eq, gt, gte, lt, lte, parse } from 'semver';
import { describe, expect, it } from 'vitest';
import { compareVersions } from '@app/common/compare-semver-version.js';
describe('compareVersions', () => {
describe('basic comparisons', () => {
it('should return true when current version is greater than compared (gte)', () => {
const current = parse('7.3.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should return true when current version equals compared (gte)', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should return false when current version is less than compared (gte)', () => {
const current = parse('7.1.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(false);
});
it('should return true when current version is less than compared (lte)', () => {
const current = parse('7.1.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lte)).toBe(true);
});
it('should return true when current version equals compared (lte)', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lte)).toBe(true);
});
it('should return false when current version is greater than compared (lte)', () => {
const current = parse('7.3.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lte)).toBe(false);
});
it('should return true when current version is greater than compared (gt)', () => {
const current = parse('7.3.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gt)).toBe(true);
});
it('should return false when current version equals compared (gt)', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gt)).toBe(false);
});
it('should return true when current version is less than compared (lt)', () => {
const current = parse('7.1.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lt)).toBe(true);
});
it('should return false when current version equals compared (lt)', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lt)).toBe(false);
});
it('should return true when versions are equal (eq)', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, eq)).toBe(true);
});
it('should return false when versions are not equal (eq)', () => {
const current = parse('7.3.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, eq)).toBe(false);
});
});
describe('prerelease handling - current has prerelease, compared is stable', () => {
it('should return true for gte when current prerelease > stable (same base)', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should return true for gt when current prerelease > stable (same base)', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gt)).toBe(true);
});
it('should return false for lte when current prerelease < stable (same base)', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lte)).toBe(false);
});
it('should return false for lt when current prerelease < stable (same base)', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, lt)).toBe(false);
});
it('should return false for eq when current prerelease != stable (same base)', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, eq)).toBe(false);
});
});
describe('prerelease handling - current is stable, compared has prerelease', () => {
it('should use normal comparison when current is stable and compared has prerelease', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0-beta.1')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should use normal comparison for lte when current is stable and compared has prerelease', () => {
const current = parse('7.2.0')!;
const compared = parse('7.2.0-beta.1')!;
expect(compareVersions(current, compared, lte)).toBe(false);
});
});
describe('prerelease handling - both have prerelease', () => {
it('should use normal comparison when both versions have prerelease', () => {
const current = parse('7.2.0-beta.2')!;
const compared = parse('7.2.0-beta.1')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should use normal comparison for lte when both have prerelease', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0-beta.2')!;
expect(compareVersions(current, compared, lte)).toBe(true);
});
it('should use normal comparison when prerelease versions are equal', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0-beta.1')!;
expect(compareVersions(current, compared, eq)).toBe(true);
});
});
describe('prerelease handling - different base versions', () => {
it('should use normal comparison when base versions differ (current prerelease)', () => {
const current = parse('7.3.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should use normal comparison when base versions differ (current prerelease, less)', () => {
const current = parse('7.1.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(false);
});
});
describe('includePrerelease flag', () => {
it('should apply special prerelease handling when includePrerelease is true', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte, { includePrerelease: true })).toBe(true);
});
it('should skip special prerelease handling when includePrerelease is false', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte, { includePrerelease: false })).toBe(false);
});
it('should default to includePrerelease true', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
});
describe('edge cases', () => {
it('should handle patch version differences', () => {
const current = parse('7.2.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should handle minor version differences', () => {
const current = parse('7.3.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should handle major version differences', () => {
const current = parse('8.0.0')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should handle complex prerelease tags', () => {
const current = parse('7.2.0-beta.2.4')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should handle alpha prerelease tags', () => {
const current = parse('7.2.0-alpha.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
it('should handle rc prerelease tags', () => {
const current = parse('7.2.0-rc.1')!;
const compared = parse('7.2.0')!;
expect(compareVersions(current, compared, gte)).toBe(true);
});
});
describe('comparison function edge cases', () => {
it('should handle custom comparison functions that are not gte/lte/gt/lt', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
const customCompare = (a: typeof current, b: typeof compared) => a.compare(b) === 1;
expect(compareVersions(current, compared, customCompare)).toBe(false);
});
it('should fall through to normal comparison for unknown functions with prerelease', () => {
const current = parse('7.2.0-beta.1')!;
const compared = parse('7.2.0')!;
const customCompare = () => false;
expect(compareVersions(current, compared, customCompare)).toBe(false);
});
});
});

View File

@@ -0,0 +1,44 @@
import type { SemVer } from 'semver';
import { gt, gte, lt, lte } from 'semver';
/**
* Shared version comparison logic with special handling for prerelease versions.
*
* When base versions are equal and current version has a prerelease tag while compared doesn't:
* - For gte/gt: prerelease is considered greater than stable (returns true)
* - For lte/lt: prerelease is considered less than stable (returns false)
* - For eq: prerelease is not equal to stable (returns false)
*
* @param currentVersion - The current Unraid version (SemVer object)
* @param comparedVersion - The version to compare against (SemVer object)
* @param compareFn - The comparison function (e.g., gte, lte, lt, gt, eq)
* @param includePrerelease - Whether to include special prerelease handling
* @returns The result of the comparison
*/
export const compareVersions = (
currentVersion: SemVer,
comparedVersion: SemVer,
compareFn: (a: SemVer, b: SemVer) => boolean,
{ includePrerelease = true }: { includePrerelease?: boolean } = {}
): boolean => {
if (includePrerelease) {
const baseCurrent = `${currentVersion.major}.${currentVersion.minor}.${currentVersion.patch}`;
const baseCompared = `${comparedVersion.major}.${comparedVersion.minor}.${comparedVersion.patch}`;
if (baseCurrent === baseCompared) {
const currentHasPrerelease = currentVersion.prerelease.length > 0;
const comparedHasPrerelease = comparedVersion.prerelease.length > 0;
if (currentHasPrerelease && !comparedHasPrerelease) {
if (compareFn === gte || compareFn === gt) {
return true;
}
if (compareFn === lte || compareFn === lt) {
return false;
}
}
}
}
return compareFn(currentVersion, comparedVersion);
};
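A small usage sketch (not part of the diff) that restates the prerelease rule from the doc comment above, mirroring the test cases:

```ts
// Not from this PR: demonstrates the documented prerelease handling.
import { gte, lte, parse } from 'semver';

import { compareVersions } from '@app/common/compare-semver-version.js';

const current = parse('7.2.0-beta.1')!; // prerelease of the same base version
const stable = parse('7.2.0')!;

// With the default includePrerelease: true, a prerelease of the same base
// version satisfies gte/gt against the stable release...
console.log(compareVersions(current, stable, gte)); // true
// ...but never lte/lt.
console.log(compareVersions(current, stable, lte)); // false
// Turning the special handling off falls back to plain semver ordering,
// where 7.2.0-beta.1 < 7.2.0.
console.log(compareVersions(current, stable, gte, { includePrerelease: false })); // false
```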

View File

@@ -0,0 +1,60 @@
import type { SemVer } from 'semver';
import { coerce } from 'semver';
import { compareVersions } from '@app/common/compare-semver-version.js';
import { fileExistsSync } from '@app/core/utils/files/file-exists.js';
import { parseConfig } from '@app/core/utils/misc/parse-config.js';
type UnraidVersionIni = {
version?: string;
};
/**
* Synchronously reads the Unraid version from /etc/unraid-version
* @returns The Unraid version string, or 'unknown' if the file cannot be read
*/
export const getUnraidVersionSync = (): string => {
const versionPath = '/etc/unraid-version';
if (!fileExistsSync(versionPath)) {
return 'unknown';
}
try {
const versionIni = parseConfig<UnraidVersionIni>({ filePath: versionPath, type: 'ini' });
return versionIni.version || 'unknown';
} catch {
return 'unknown';
}
};
/**
* Compares the Unraid version against a specified version using a comparison function
* @param compareFn - The comparison function from semver (e.g., lt, gte, lte, gt, eq)
* @param version - The version to compare against (e.g., '7.3.0')
* @param options - Options for the comparison
* @returns The result of the comparison, or false if the version cannot be determined
*/
export const compareUnraidVersionSync = (
compareFn: (a: SemVer, b: SemVer) => boolean,
version: string,
{ includePrerelease = true }: { includePrerelease?: boolean } = {}
): boolean => {
const currentVersion = getUnraidVersionSync();
if (currentVersion === 'unknown') {
return false;
}
try {
const current = coerce(currentVersion, { includePrerelease });
const compared = coerce(version, { includePrerelease });
if (!current || !compared) {
return false;
}
return compareVersions(current, compared, compareFn, { includePrerelease });
} catch {
return false;
}
};
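Illustrative only: how a caller might gate behavior on the host's Unraid version with the helper above. The import path is an assumption for illustration.

```ts
// Not from this PR: a hypothetical call site for compareUnraidVersionSync.
import { gte } from 'semver';

// Import path is assumed; use wherever the helper is actually exported from.
import { compareUnraidVersionSync } from '@app/common/unraid-version.js';

// True when /etc/unraid-version reports 7.3.0 or newer (7.3.0 prereleases count,
// per the shared prerelease handling); false when the version file is missing
// or cannot be parsed.
const isAtLeast73 = compareUnraidVersionSync(gte, '7.3.0');
```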

View File

@@ -1,7 +1,7 @@
import pino from 'pino';
import pretty from 'pino-pretty';
import { API_VERSION, LOG_LEVEL, LOG_TYPE, SUPPRESS_LOGS } from '@app/environment.js';
import { API_VERSION, LOG_LEVEL, LOG_TYPE, PATHS_LOGS_FILE, SUPPRESS_LOGS } from '@app/environment.js';
export const levels = ['trace', 'debug', 'info', 'warn', 'error', 'fatal'] as const;
@@ -15,18 +15,24 @@ const nullDestination = pino.destination({
},
});
const LOG_TRANSPORT = process.env.LOG_TRANSPORT ?? 'file';
const useConsole = LOG_TRANSPORT === 'console';
export const logDestination =
process.env.SUPPRESS_LOGS === 'true' ? nullDestination : pino.destination();
// Since PM2 captures stdout and writes to the log file, we should not colorize stdout
// to avoid ANSI escape codes in the log file
process.env.SUPPRESS_LOGS === 'true'
? nullDestination
: useConsole
? pino.destination(1) // stdout
: pino.destination({ dest: PATHS_LOGS_FILE, mkdir: true });
const stream = SUPPRESS_LOGS
? nullDestination
: LOG_TYPE === 'pretty'
? pretty({
singleLine: true,
hideObject: false,
colorize: false, // No colors since PM2 writes stdout to file
colorizeObjects: false,
colorize: useConsole, // Enable colors when outputting to console
colorizeObjects: useConsole,
levelFirst: false,
ignore: 'hostname,pid',
destination: logDestination,
@@ -34,10 +40,10 @@ const stream = SUPPRESS_LOGS
customPrettifiers: {
time: (timestamp: string | object) => `[${timestamp}`,
level: (_logLevel: string | object, _key: string, log: any, extras: any) => {
// Use label instead of labelColorized for non-colored output
const { label } = extras;
const { label, labelColorized } = extras;
const context = log.context || log.logger || 'app';
return `${label} ${context}]`;
// Use colorized label when outputting to console
return `${useConsole ? labelColorized : label} ${context}]`;
},
},
messageFormat: (log: any, messageKey: string) => {

View File

@@ -2,7 +2,7 @@ import { AppError } from '@app/core/errors/app-error.js';
import { getters } from '@app/store/index.js';
interface DockerError extends NodeJS.ErrnoException {
address: string;
address?: string;
}
/**

View File

@@ -0,0 +1,19 @@
import { getters } from '@app/store/index.js';
/**
* Returns the LAN IPv4 address reported by emhttp, if available.
*/
export function getLanIp(): string {
const emhttp = getters.emhttp();
const lanFromNetworks = emhttp?.networks?.[0]?.ipaddr?.[0];
if (lanFromNetworks) {
return lanFromNetworks;
}
const lanFromNginx = emhttp?.nginx?.lanIp;
if (lanFromNginx) {
return lanFromNginx;
}
return '';
}

View File

@@ -111,5 +111,10 @@ export const PATHS_CONFIG_MODULES =
export const PATHS_LOCAL_SESSION_FILE =
process.env.PATHS_LOCAL_SESSION_FILE ?? '/var/run/unraid-api/local-session';
export const PATHS_DOCKER_TEMPLATES = process.env.PATHS_DOCKER_TEMPLATES?.split(',') ?? [
'/boot/config/plugins/dockerMan/templates-user',
'/boot/config/plugins/dockerMan/templates',
];
/** feature flag for the upcoming docker release */
export const ENABLE_NEXT_DOCKER_RELEASE = process.env.ENABLE_NEXT_DOCKER_RELEASE === 'true';

View File

@@ -32,11 +32,11 @@ let server: NestFastifyApplication<RawServerDefault> | null = null;
// PM2 listen_timeout is 15 seconds (ecosystem.config.json)
// We use 13 seconds as our total budget to ensure our timeout triggers before PM2 kills us
const TOTAL_STARTUP_BUDGET_MS = 13_000;
const TOTAL_STARTUP_BUDGET_MS = 30_000;
// Reserve time for the NestJS bootstrap (the most critical and time-consuming operation)
const BOOTSTRAP_RESERVED_MS = 8_000;
const BOOTSTRAP_RESERVED_MS = 20_000;
// Maximum time for any single pre-bootstrap operation
const MAX_OPERATION_TIMEOUT_MS = 2_000;
const MAX_OPERATION_TIMEOUT_MS = 5_000;
const unlinkUnixPort = () => {
if (isNaN(parseInt(PORT, 10))) {

View File

@@ -20,6 +20,7 @@ const initialState = {
process.env.PATHS_UNRAID_DATA ?? ('/boot/config/plugins/dynamix.my.servers/data/' as const)
),
'docker-autostart': '/var/lib/docker/unraid-autostart' as const,
'docker-userprefs': '/boot/config/plugins/dockerMan/userprefs.cfg' as const,
'docker-socket': '/var/run/docker.sock' as const,
'rclone-socket': resolvePath(process.env.PATHS_RCLONE_SOCKET ?? ('/var/run/rclone.socket' as const)),
'parity-checks': resolvePath(

View File

@@ -6,102 +6,60 @@ import { AuthZGuard } from 'nest-authz';
import request from 'supertest';
import { afterAll, beforeAll, describe, expect, it, vi } from 'vitest';
import { loadDynamixConfig, store } from '@app/store/index.js';
import { loadStateFiles } from '@app/store/modules/emhttp.js';
import { AppModule } from '@app/unraid-api/app/app.module.js';
import { AuthService } from '@app/unraid-api/auth/auth.service.js';
import { AuthenticationGuard } from '@app/unraid-api/auth/authentication.guard.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
// Mock external system boundaries that we can't control in tests
vi.mock('dockerode', () => {
return {
default: vi.fn().mockImplementation(() => ({
listContainers: vi.fn().mockResolvedValue([
{
Id: 'test-container-1',
Names: ['/test-container'],
State: 'running',
Status: 'Up 5 minutes',
Image: 'test:latest',
Command: 'node server.js',
Created: Date.now() / 1000,
Ports: [
{
IP: '0.0.0.0',
PrivatePort: 3000,
PublicPort: 3000,
Type: 'tcp',
},
],
Labels: {},
HostConfig: {
NetworkMode: 'bridge',
},
NetworkSettings: {
Networks: {},
},
Mounts: [],
// Mock the store before importing it
vi.mock('@app/store/index.js', () => ({
store: {
dispatch: vi.fn().mockResolvedValue(undefined),
subscribe: vi.fn().mockImplementation(() => vi.fn()),
getState: vi.fn().mockReturnValue({
emhttp: {
var: {
csrfToken: 'test-csrf-token',
},
]),
getContainer: vi.fn().mockImplementation((id) => ({
inspect: vi.fn().mockResolvedValue({
Id: id,
Name: '/test-container',
State: { Running: true },
Config: { Image: 'test:latest' },
}),
})),
listImages: vi.fn().mockResolvedValue([]),
listNetworks: vi.fn().mockResolvedValue([]),
listVolumes: vi.fn().mockResolvedValue({ Volumes: [] }),
})),
};
});
// Mock external command execution
vi.mock('execa', () => ({
execa: vi.fn().mockImplementation((cmd) => {
if (cmd === 'whoami') {
return Promise.resolve({ stdout: 'testuser' });
}
return Promise.resolve({ stdout: 'mocked output' });
}),
},
docker: {
containers: [],
autostart: [],
},
}),
unsubscribe: vi.fn(),
},
getters: {
emhttp: vi.fn().mockReturnValue({
var: {
csrfToken: 'test-csrf-token',
},
}),
docker: vi.fn().mockReturnValue({
containers: [],
autostart: [],
}),
paths: vi.fn().mockReturnValue({
'docker-autostart': '/tmp/docker-autostart',
'docker-socket': '/var/run/docker.sock',
'var-run': '/var/run',
'auth-keys': '/tmp/auth-keys',
activationBase: '/tmp/activation',
'dynamix-config': ['/tmp/dynamix-config', '/tmp/dynamix-config'],
identConfig: '/tmp/ident.cfg',
}),
dynamix: vi.fn().mockReturnValue({
notify: {
path: '/tmp/notifications',
},
}),
},
loadDynamixConfig: vi.fn(),
loadStateFiles: vi.fn().mockResolvedValue(undefined),
}));
// Mock child_process for services that spawn processes
vi.mock('node:child_process', () => ({
spawn: vi.fn(() => ({
on: vi.fn(),
kill: vi.fn(),
stdout: { on: vi.fn() },
stderr: { on: vi.fn() },
})),
}));
// Mock file system operations that would fail in test environment
vi.mock('node:fs/promises', async (importOriginal) => {
const actual = await importOriginal<typeof import('fs/promises')>();
return {
...actual,
readFile: vi.fn().mockResolvedValue(''),
writeFile: vi.fn().mockResolvedValue(undefined),
mkdir: vi.fn().mockResolvedValue(undefined),
access: vi.fn().mockResolvedValue(undefined),
stat: vi.fn().mockResolvedValue({ isFile: () => true }),
readdir: vi.fn().mockResolvedValue([]),
rename: vi.fn().mockResolvedValue(undefined),
unlink: vi.fn().mockResolvedValue(undefined),
};
});
// Mock fs module for synchronous operations
vi.mock('node:fs', () => ({
existsSync: vi.fn().mockReturnValue(false),
readFileSync: vi.fn().mockReturnValue(''),
writeFileSync: vi.fn(),
mkdirSync: vi.fn(),
readdirSync: vi.fn().mockReturnValue([]),
// Mock fs-extra for directory operations
vi.mock('fs-extra', () => ({
ensureDirSync: vi.fn().mockReturnValue(undefined),
}));
describe('AppModule Integration Tests', () => {
@@ -109,14 +67,6 @@ describe('AppModule Integration Tests', () => {
let moduleRef: TestingModule;
beforeAll(async () => {
// Initialize the dynamix config and state files before creating the module
await store.dispatch(loadStateFiles());
loadDynamixConfig();
// Debug: Log the CSRF token from the store
const { getters } = await import('@app/store/index.js');
console.log('CSRF Token from store:', getters.emhttp().var.csrfToken);
moduleRef = await Test.createTestingModule({
imports: [AppModule],
})
@@ -149,14 +99,6 @@ describe('AppModule Integration Tests', () => {
roles: ['admin'],
}),
})
// Override Redis client
.overrideProvider('REDIS_CLIENT')
.useValue({
get: vi.fn(),
set: vi.fn(),
del: vi.fn(),
connect: vi.fn(),
})
.compile();
app = moduleRef.createNestApplication<NestFastifyApplication>(new FastifyAdapter());
@@ -177,9 +119,9 @@ describe('AppModule Integration Tests', () => {
});
it('should resolve core services', () => {
const dockerService = moduleRef.get(DockerService);
const authService = moduleRef.get(AuthService);
expect(dockerService).toBeDefined();
expect(authService).toBeDefined();
});
});
@@ -238,18 +180,12 @@ describe('AppModule Integration Tests', () => {
});
describe('Service Integration', () => {
it('should have working service-to-service communication', async () => {
const dockerService = moduleRef.get(DockerService);
// Test that the service can be called and returns expected data structure
const containers = await dockerService.getContainers();
expect(containers).toBeInstanceOf(Array);
// The containers might be empty or cached, just verify structure
if (containers.length > 0) {
expect(containers[0]).toHaveProperty('id');
expect(containers[0]).toHaveProperty('names');
}
it('should have working service-to-service communication', () => {
// Test that the module can resolve its services without errors
// This validates that dependency injection is working correctly
const authService = moduleRef.get(AuthService);
expect(authService).toBeDefined();
expect(typeof authService.validateCookiesWithCsrfToken).toBe('function');
});
});
});

View File

@@ -183,6 +183,11 @@ export class ApiKeyService implements OnModuleInit {
async loadAllFromDisk(): Promise<ApiKey[]> {
const files = await readdir(this.basePath).catch((error) => {
if (error.code === 'ENOENT') {
// Directory doesn't exist, which means no API keys have been created yet
this.logger.error(`API key directory does not exist: ${this.basePath}`);
return [];
}
this.logger.error(`Failed to read API key directory: ${error}`);
throw new Error('Failed to list API keys');
});

View File

@@ -525,6 +525,7 @@ export enum ContainerPortType {
export enum ContainerState {
EXITED = 'EXITED',
PAUSED = 'PAUSED',
RUNNING = 'RUNNING'
}
@@ -678,11 +679,20 @@ export enum DiskSmartStatus {
export type Docker = Node & {
__typename?: 'Docker';
container?: Maybe<DockerContainer>;
containerUpdateStatuses: Array<ExplicitStatusItem>;
containers: Array<DockerContainer>;
id: Scalars['PrefixedID']['output'];
/** Access container logs. Requires specifying a target container id through resolver arguments. */
logs: DockerContainerLogs;
networks: Array<DockerNetwork>;
organizer: ResolvedOrganizerV1;
portConflicts: DockerPortConflicts;
};
export type DockerContainerArgs = {
id: Scalars['PrefixedID']['input'];
};
@@ -691,38 +701,169 @@ export type DockerContainersArgs = {
};
export type DockerLogsArgs = {
id: Scalars['PrefixedID']['input'];
since?: InputMaybe<Scalars['DateTime']['input']>;
tail?: InputMaybe<Scalars['Int']['input']>;
};
export type DockerNetworksArgs = {
skipCache?: Scalars['Boolean']['input'];
};
export type DockerOrganizerArgs = {
skipCache?: Scalars['Boolean']['input'];
};
export type DockerPortConflictsArgs = {
skipCache?: Scalars['Boolean']['input'];
};
export type DockerAutostartEntryInput = {
/** Whether the container should auto-start */
autoStart: Scalars['Boolean']['input'];
/** Docker container identifier */
id: Scalars['PrefixedID']['input'];
/** Number of seconds to wait after starting the container */
wait?: InputMaybe<Scalars['Int']['input']>;
};
export type DockerContainer = Node & {
__typename?: 'DockerContainer';
autoStart: Scalars['Boolean']['output'];
/** Zero-based order in the auto-start list */
autoStartOrder?: Maybe<Scalars['Int']['output']>;
/** Wait time in seconds applied after start */
autoStartWait?: Maybe<Scalars['Int']['output']>;
command: Scalars['String']['output'];
created: Scalars['Int']['output'];
hostConfig?: Maybe<ContainerHostConfig>;
/** Icon URL */
iconUrl?: Maybe<Scalars['String']['output']>;
id: Scalars['PrefixedID']['output'];
image: Scalars['String']['output'];
imageId: Scalars['String']['output'];
/** Whether the container is orphaned (no template found) */
isOrphaned: Scalars['Boolean']['output'];
isRebuildReady?: Maybe<Scalars['Boolean']['output']>;
isUpdateAvailable?: Maybe<Scalars['Boolean']['output']>;
labels?: Maybe<Scalars['JSON']['output']>;
/** List of LAN-accessible host:port values */
lanIpPorts?: Maybe<Array<Scalars['String']['output']>>;
mounts?: Maybe<Array<Scalars['JSON']['output']>>;
names: Array<Scalars['String']['output']>;
networkSettings?: Maybe<Scalars['JSON']['output']>;
ports: Array<ContainerPort>;
/** Project/Product homepage URL */
projectUrl?: Maybe<Scalars['String']['output']>;
/** Registry/Docker Hub URL */
registryUrl?: Maybe<Scalars['String']['output']>;
/** Shell to use for console access (from template) */
shell?: Maybe<Scalars['String']['output']>;
/** Size of container logs (in bytes) */
sizeLog?: Maybe<Scalars['BigInt']['output']>;
/** Total size of all files in the container (in bytes) */
sizeRootFs?: Maybe<Scalars['BigInt']['output']>;
/** Size of writable layer (in bytes) */
sizeRw?: Maybe<Scalars['BigInt']['output']>;
state: ContainerState;
status: Scalars['String']['output'];
/** Support page/thread URL */
supportUrl?: Maybe<Scalars['String']['output']>;
/** Whether Tailscale is enabled for this container */
tailscaleEnabled: Scalars['Boolean']['output'];
/** Tailscale status for this container (fetched via docker exec) */
tailscaleStatus?: Maybe<TailscaleStatus>;
templatePath?: Maybe<Scalars['String']['output']>;
/** Port mappings from template (used when container is not running) */
templatePorts?: Maybe<Array<ContainerPort>>;
/** Resolved WebUI URL from template */
webUiUrl?: Maybe<Scalars['String']['output']>;
};
export type DockerContainerTailscaleStatusArgs = {
forceRefresh?: InputMaybe<Scalars['Boolean']['input']>;
};
export type DockerContainerLogLine = {
__typename?: 'DockerContainerLogLine';
message: Scalars['String']['output'];
timestamp: Scalars['DateTime']['output'];
};
export type DockerContainerLogs = {
__typename?: 'DockerContainerLogs';
containerId: Scalars['PrefixedID']['output'];
/** Cursor that can be passed back through the since argument to continue streaming logs. */
cursor?: Maybe<Scalars['DateTime']['output']>;
lines: Array<DockerContainerLogLine>;
};
export type DockerContainerPortConflict = {
__typename?: 'DockerContainerPortConflict';
containers: Array<DockerPortConflictContainer>;
privatePort: Scalars['Port']['output'];
type: ContainerPortType;
};
export type DockerContainerStats = {
__typename?: 'DockerContainerStats';
/** Block I/O String (e.g. 100MB / 1GB) */
blockIO: Scalars['String']['output'];
/** CPU Usage Percentage */
cpuPercent: Scalars['Float']['output'];
id: Scalars['PrefixedID']['output'];
/** Memory Usage Percentage */
memPercent: Scalars['Float']['output'];
/** Memory Usage String (e.g. 100MB / 1GB) */
memUsage: Scalars['String']['output'];
/** Network I/O String (e.g. 100MB / 1GB) */
netIO: Scalars['String']['output'];
};
export type DockerLanPortConflict = {
__typename?: 'DockerLanPortConflict';
containers: Array<DockerPortConflictContainer>;
lanIpPort: Scalars['String']['output'];
publicPort?: Maybe<Scalars['Port']['output']>;
type: ContainerPortType;
};
export type DockerMutations = {
__typename?: 'DockerMutations';
/** Pause (Suspend) a container */
pause: DockerContainer;
/** Remove a container */
removeContainer: Scalars['Boolean']['output'];
/** Start a container */
start: DockerContainer;
/** Stop a container */
stop: DockerContainer;
/** Unpause (Resume) a container */
unpause: DockerContainer;
/** Update all containers that have available updates */
updateAllContainers: Array<DockerContainer>;
/** Update auto-start configuration for Docker containers */
updateAutostartConfiguration: Scalars['Boolean']['output'];
/** Update a container to the latest image */
updateContainer: DockerContainer;
/** Update multiple containers to the latest images */
updateContainers: Array<DockerContainer>;
};
export type DockerMutationsPauseArgs = {
id: Scalars['PrefixedID']['input'];
};
export type DockerMutationsRemoveContainerArgs = {
id: Scalars['PrefixedID']['input'];
withImage?: InputMaybe<Scalars['Boolean']['input']>;
};
@@ -735,6 +876,27 @@ export type DockerMutationsStopArgs = {
id: Scalars['PrefixedID']['input'];
};
export type DockerMutationsUnpauseArgs = {
id: Scalars['PrefixedID']['input'];
};
export type DockerMutationsUpdateAutostartConfigurationArgs = {
entries: Array<DockerAutostartEntryInput>;
persistUserPreferences?: InputMaybe<Scalars['Boolean']['input']>;
};
export type DockerMutationsUpdateContainerArgs = {
id: Scalars['PrefixedID']['input'];
};
export type DockerMutationsUpdateContainersArgs = {
ids: Array<Scalars['PrefixedID']['input']>;
};
export type DockerNetwork = Node & {
__typename?: 'DockerNetwork';
attachable: Scalars['Boolean']['output'];
@@ -754,6 +916,26 @@ export type DockerNetwork = Node & {
scope: Scalars['String']['output'];
};
export type DockerPortConflictContainer = {
__typename?: 'DockerPortConflictContainer';
id: Scalars['PrefixedID']['output'];
name: Scalars['String']['output'];
};
export type DockerPortConflicts = {
__typename?: 'DockerPortConflicts';
containerPorts: Array<DockerContainerPortConflict>;
lanPorts: Array<DockerLanPortConflict>;
};
export type DockerTemplateSyncResult = {
__typename?: 'DockerTemplateSyncResult';
errors: Array<Scalars['String']['output']>;
matched: Scalars['Int']['output'];
scanned: Scalars['Int']['output'];
skipped: Scalars['Int']['output'];
};
export type DynamicRemoteAccessStatus = {
__typename?: 'DynamicRemoteAccessStatus';
/** The type of dynamic remote access that is enabled */
@@ -799,6 +981,20 @@ export type FlashBackupStatus = {
status: Scalars['String']['output'];
};
export type FlatOrganizerEntry = {
__typename?: 'FlatOrganizerEntry';
childrenIds: Array<Scalars['String']['output']>;
depth: Scalars['Float']['output'];
hasChildren: Scalars['Boolean']['output'];
id: Scalars['String']['output'];
meta?: Maybe<DockerContainer>;
name: Scalars['String']['output'];
parentId?: Maybe<Scalars['String']['output']>;
path: Array<Scalars['String']['output']>;
position: Scalars['Float']['output'];
type: Scalars['String']['output'];
};
export type FormSchema = {
/** The data schema for the form */
dataSchema: Scalars['JSON']['output'];
@@ -1223,6 +1419,7 @@ export type Mutation = {
connectSignIn: Scalars['Boolean']['output'];
connectSignOut: Scalars['Boolean']['output'];
createDockerFolder: ResolvedOrganizerV1;
createDockerFolderWithItems: ResolvedOrganizerV1;
/** Creates a new notification record */
createNotification: Notification;
/** Deletes all archived notifications on server. */
@@ -1234,6 +1431,9 @@ export type Mutation = {
/** Initiates a flash drive backup using a configured remote. */
initiateFlashBackup: FlashBackupStatus;
moveDockerEntriesToFolder: ResolvedOrganizerV1;
moveDockerItemsToPosition: ResolvedOrganizerV1;
/** Creates a notification if an equivalent unread notification does not already exist. */
notifyIfUnique?: Maybe<Notification>;
parityCheck: ParityCheckMutations;
rclone: RCloneMutations;
/** Reads each notification to recompute & update the overview. */
@@ -1241,13 +1441,18 @@ export type Mutation = {
refreshDockerDigests: Scalars['Boolean']['output'];
/** Remove one or more plugins from the API. Returns false if restart was triggered automatically, true if manual restart is required. */
removePlugin: Scalars['Boolean']['output'];
renameDockerFolder: ResolvedOrganizerV1;
/** Reset Docker template mappings to defaults. Use this to recover from corrupted state. */
resetDockerTemplateMappings: Scalars['Boolean']['output'];
setDockerFolderChildren: ResolvedOrganizerV1;
setupRemoteAccess: Scalars['Boolean']['output'];
syncDockerTemplatePaths: DockerTemplateSyncResult;
unarchiveAll: NotificationOverview;
unarchiveNotifications: NotificationOverview;
/** Marks a notification as unread. */
unreadNotification: Notification;
updateApiSettings: ConnectSettingsValues;
updateDockerViewPreferences: ResolvedOrganizerV1;
updateSettings: UpdateSettingsResponse;
vm: VmMutations;
};
@@ -1290,6 +1495,14 @@ export type MutationCreateDockerFolderArgs = {
};
export type MutationCreateDockerFolderWithItemsArgs = {
name: Scalars['String']['input'];
parentId?: InputMaybe<Scalars['String']['input']>;
position?: InputMaybe<Scalars['Float']['input']>;
sourceEntryIds?: InputMaybe<Array<Scalars['String']['input']>>;
};
export type MutationCreateNotificationArgs = {
input: NotificationData;
};
@@ -1322,11 +1535,29 @@ export type MutationMoveDockerEntriesToFolderArgs = {
};
export type MutationMoveDockerItemsToPositionArgs = {
destinationFolderId: Scalars['String']['input'];
position: Scalars['Float']['input'];
sourceEntryIds: Array<Scalars['String']['input']>;
};
export type MutationNotifyIfUniqueArgs = {
input: NotificationData;
};
export type MutationRemovePluginArgs = {
input: PluginManagementInput;
};
export type MutationRenameDockerFolderArgs = {
folderId: Scalars['String']['input'];
newName: Scalars['String']['input'];
};
export type MutationSetDockerFolderChildrenArgs = {
childrenIds: Array<Scalars['String']['input']>;
folderId?: InputMaybe<Scalars['String']['input']>;
@@ -1358,6 +1589,12 @@ export type MutationUpdateApiSettingsArgs = {
};
export type MutationUpdateDockerViewPreferencesArgs = {
prefs: Scalars['JSON']['input'];
viewId?: InputMaybe<Scalars['String']['input']>;
};
export type MutationUpdateSettingsArgs = {
input: Scalars['JSON']['input'];
};
@@ -1433,6 +1670,8 @@ export type Notifications = Node & {
list: Array<Notification>;
/** A cached overview of the notifications in the system & their severity. */
overview: NotificationOverview;
/** Deduplicated list of unread warning and alert notifications, sorted latest first. */
warningsAndAlerts: Array<Notification>;
};
@@ -1498,22 +1737,6 @@ export type OidcSessionValidation = {
valid: Scalars['Boolean']['output'];
};
export type OrganizerContainerResource = {
__typename?: 'OrganizerContainerResource';
id: Scalars['String']['output'];
meta?: Maybe<DockerContainer>;
name: Scalars['String']['output'];
type: Scalars['String']['output'];
};
export type OrganizerResource = {
__typename?: 'OrganizerResource';
id: Scalars['String']['output'];
meta?: Maybe<Scalars['JSON']['output']>;
name: Scalars['String']['output'];
type: Scalars['String']['output'];
};
export type Owner = {
__typename?: 'Owner';
avatar: Scalars['String']['output'];
@@ -1882,16 +2105,6 @@ export type RemoveRoleFromApiKeyInput = {
role: Role;
};
export type ResolvedOrganizerEntry = OrganizerContainerResource | OrganizerResource | ResolvedOrganizerFolder;
export type ResolvedOrganizerFolder = {
__typename?: 'ResolvedOrganizerFolder';
children: Array<ResolvedOrganizerEntry>;
id: Scalars['String']['output'];
name: Scalars['String']['output'];
type: Scalars['String']['output'];
};
export type ResolvedOrganizerV1 = {
__typename?: 'ResolvedOrganizerV1';
version: Scalars['Float']['output'];
@@ -1900,10 +2113,11 @@ export type ResolvedOrganizerV1 = {
export type ResolvedOrganizerView = {
__typename?: 'ResolvedOrganizerView';
flatEntries: Array<FlatOrganizerEntry>;
id: Scalars['String']['output'];
name: Scalars['String']['output'];
prefs?: Maybe<Scalars['JSON']['output']>;
root: ResolvedOrganizerEntry;
rootId: Scalars['String']['output'];
};
/** Available resources for permissions */
@@ -2046,9 +2260,11 @@ export type SsoSettings = Node & {
export type Subscription = {
__typename?: 'Subscription';
arraySubscription: UnraidArray;
dockerContainerStats: DockerContainerStats;
logFile: LogFileContent;
notificationAdded: Notification;
notificationsOverview: NotificationOverview;
notificationsWarningsAndAlerts: Array<Notification>;
ownerSubscription: Owner;
parityHistorySubscription: ParityCheck;
serversSubscription: Server;
@@ -2062,6 +2278,56 @@ export type SubscriptionLogFileArgs = {
path: Scalars['String']['input'];
};
/** Tailscale exit node connection status */
export type TailscaleExitNodeStatus = {
__typename?: 'TailscaleExitNodeStatus';
/** Whether the exit node is online */
online: Scalars['Boolean']['output'];
/** Tailscale IPs of the exit node */
tailscaleIps?: Maybe<Array<Scalars['String']['output']>>;
};
/** Tailscale status for a Docker container */
export type TailscaleStatus = {
__typename?: 'TailscaleStatus';
/** Authentication URL if Tailscale needs login */
authUrl?: Maybe<Scalars['String']['output']>;
/** Tailscale backend state (Running, NeedsLogin, Stopped, etc.) */
backendState?: Maybe<Scalars['String']['output']>;
/** Actual Tailscale DNS name */
dnsName?: Maybe<Scalars['String']['output']>;
/** Status of the connected exit node (if using one) */
exitNodeStatus?: Maybe<TailscaleExitNodeStatus>;
/** Configured Tailscale hostname */
hostname?: Maybe<Scalars['String']['output']>;
/** Whether this container is an exit node */
isExitNode: Scalars['Boolean']['output'];
/** Whether the Tailscale key has expired */
keyExpired: Scalars['Boolean']['output'];
/** Tailscale key expiry date */
keyExpiry?: Maybe<Scalars['DateTime']['output']>;
/** Days until key expires */
keyExpiryDays?: Maybe<Scalars['Int']['output']>;
/** Latest available Tailscale version */
latestVersion?: Maybe<Scalars['String']['output']>;
/** Whether Tailscale is online in the container */
online: Scalars['Boolean']['output'];
/** Advertised subnet routes */
primaryRoutes?: Maybe<Array<Scalars['String']['output']>>;
/** DERP relay code */
relay?: Maybe<Scalars['String']['output']>;
/** DERP relay region name */
relayName?: Maybe<Scalars['String']['output']>;
/** Tailscale IPv4 and IPv6 addresses */
tailscaleIps?: Maybe<Array<Scalars['String']['output']>>;
/** Whether a Tailscale update is available */
updateAvailable: Scalars['Boolean']['output'];
/** Current Tailscale version */
version?: Maybe<Scalars['String']['output']>;
/** Tailscale Serve/Funnel WebUI URL */
webUiUrl?: Maybe<Scalars['String']['output']>;
};
/** Temperature unit */
export enum Temperature {
CELSIUS = 'CELSIUS',

View File

@@ -1,9 +1,11 @@
import { Module } from '@nestjs/common';
import { CustomizationMutationsResolver } from '@app/unraid-api/graph/resolvers/customization/customization.mutations.resolver.js';
import { CustomizationResolver } from '@app/unraid-api/graph/resolvers/customization/customization.resolver.js';
import { CustomizationService } from '@app/unraid-api/graph/resolvers/customization/customization.service.js';
@Module({
providers: [CustomizationService, CustomizationResolver],
providers: [CustomizationService, CustomizationResolver, CustomizationMutationsResolver],
exports: [CustomizationService],
})
export class CustomizationModule {}

View File

@@ -0,0 +1,25 @@
import { Args, ResolveField, Resolver } from '@nestjs/graphql';
import { AuthAction, Resource } from '@unraid/shared/graphql.model.js';
import { UsePermissions } from '@unraid/shared/use-permissions.directive.js';
import { CustomizationService } from '@app/unraid-api/graph/resolvers/customization/customization.service.js';
import { Theme, ThemeName } from '@app/unraid-api/graph/resolvers/customization/theme.model.js';
import { CustomizationMutations } from '@app/unraid-api/graph/resolvers/mutation/mutation.model.js';
@Resolver(() => CustomizationMutations)
export class CustomizationMutationsResolver {
constructor(private readonly customizationService: CustomizationService) {}
@ResolveField(() => Theme, { description: 'Update the UI theme (writes dynamix.cfg)' })
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.CUSTOMIZATIONS,
})
async setTheme(
@Args('theme', { type: () => ThemeName, description: 'Theme to apply' })
theme: ThemeName
): Promise<Theme> {
return this.customizationService.setTheme(theme);
}
}

View File

@@ -9,7 +9,9 @@ import * as ini from 'ini';
import { emcmd } from '@app/core/utils/clients/emcmd.js';
import { fileExists } from '@app/core/utils/files/file-exists.js';
import { loadDynamixConfigFromDiskSync } from '@app/store/actions/load-dynamix-config-file.js';
import { getters, store } from '@app/store/index.js';
import { updateDynamixConfig } from '@app/store/modules/dynamix.js';
import {
ActivationCode,
PublicPartnerInfo,
@@ -466,4 +468,16 @@ export class CustomizationService implements OnModuleInit {
showHeaderDescription: descriptionShow === 'yes',
};
}
public async setTheme(theme: ThemeName): Promise<Theme> {
this.logger.log(`Updating theme to ${theme}`);
await this.updateCfgFile(this.configFile, 'display', { theme });
// Refresh in-memory store so subsequent reads get the new theme without a restart
const paths = getters.paths();
const updatedConfig = loadDynamixConfigFromDiskSync(paths['dynamix-config']);
store.dispatch(updateDynamixConfig(updatedConfig));
return this.getTheme();
}
}
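For illustration (not in this diff): the corresponding GraphQL operation for the new `setTheme` field, assuming `CustomizationMutations` is exposed as the `customization` field on `Mutation`. The selection set uses `headerSecondaryTextColor` only because it is a known `Theme` field; select whichever fields your UI needs.

```ts
// Not from this PR: example mutation document for the new setTheme field.
const SET_THEME = /* GraphQL */ `
  mutation SetTheme($theme: ThemeName!) {
    customization {
      setTheme(theme: $theme) {
        headerSecondaryTextColor
      }
    }
  }
`;

// Example variables: ThemeName values are azure, black, gray, or white.
const variables = { theme: 'white' };
```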

View File

@@ -7,7 +7,7 @@ import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/dock
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
@Injectable()
export class ContainerStatusJob implements OnApplicationBootstrap {
export class ContainerStatusJob {
private readonly logger = new Logger(ContainerStatusJob.name);
constructor(
private readonly dockerManifestService: DockerManifestService,
@@ -17,8 +17,10 @@ export class ContainerStatusJob implements OnApplicationBootstrap {
/**
* Initialize cron job for refreshing the update status for all containers on a user-configurable schedule.
*
* Disabled for now to avoid duplication of the webgui's update notifier job (under Notification Settings).
*/
onApplicationBootstrap() {
_disabled_onApplicationBootstrap() {
if (!this.dockerConfigService.enabled()) return;
const cronExpression = this.dockerConfigService.getConfig().updateCheckCronSchedule;
const cronJob = CronJob.from({

View File

@@ -0,0 +1,144 @@
import { Test, TestingModule } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import {
AutoStartEntry,
DockerAutostartService,
} from '@app/unraid-api/graph/resolvers/docker/docker-autostart.service.js';
import { DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
// Mock store getters
const mockPaths = {
'docker-autostart': '/path/to/docker-autostart',
'docker-userprefs': '/path/to/docker-userprefs',
};
vi.mock('@app/store/index.js', () => ({
getters: {
paths: () => mockPaths,
},
}));
// Mock fs/promises
const { readFileMock, writeFileMock, unlinkMock } = vi.hoisted(() => ({
readFileMock: vi.fn().mockResolvedValue(''),
writeFileMock: vi.fn().mockResolvedValue(undefined),
unlinkMock: vi.fn().mockResolvedValue(undefined),
}));
vi.mock('fs/promises', () => ({
readFile: readFileMock,
writeFile: writeFileMock,
unlink: unlinkMock,
}));
describe('DockerAutostartService', () => {
let service: DockerAutostartService;
beforeEach(async () => {
readFileMock.mockReset();
writeFileMock.mockReset();
unlinkMock.mockReset();
readFileMock.mockResolvedValue('');
const module: TestingModule = await Test.createTestingModule({
providers: [DockerAutostartService],
}).compile();
service = module.get<DockerAutostartService>(DockerAutostartService);
});
it('should be defined', () => {
expect(service).toBeDefined();
});
it('should parse autostart entries correctly', () => {
const content = 'container1 10\ncontainer2\ncontainer3 0';
const entries = service.parseAutoStartEntries(content);
expect(entries).toHaveLength(3);
expect(entries[0]).toEqual({ name: 'container1', wait: 10, order: 0 });
expect(entries[1]).toEqual({ name: 'container2', wait: 0, order: 1 });
expect(entries[2]).toEqual({ name: 'container3', wait: 0, order: 2 });
});
it('should refresh autostart entries', async () => {
readFileMock.mockResolvedValue('alpha 5');
await service.refreshAutoStartEntries();
const entry = service.getAutoStartEntry('alpha');
expect(entry).toBeDefined();
expect(entry?.wait).toBe(5);
});
describe('updateAutostartConfiguration', () => {
const mockContainers = [
{ id: 'c1', names: ['/alpha'] },
{ id: 'c2', names: ['/beta'] },
] as DockerContainer[];
it('should update auto-start configuration and persist waits', async () => {
await service.updateAutostartConfiguration(
[
{ id: 'c1', autoStart: true, wait: 15 },
{ id: 'c2', autoStart: true, wait: 0 },
],
mockContainers,
{ persistUserPreferences: true }
);
expect(writeFileMock).toHaveBeenCalledWith(
mockPaths['docker-autostart'],
'alpha 15\nbeta\n',
'utf8'
);
expect(writeFileMock).toHaveBeenCalledWith(
mockPaths['docker-userprefs'],
'0="alpha"\n1="beta"\n',
'utf8'
);
});
it('should skip updating user preferences when persist flag is false', async () => {
await service.updateAutostartConfiguration(
[{ id: 'c1', autoStart: true, wait: 5 }],
mockContainers
);
expect(writeFileMock).toHaveBeenCalledWith(
mockPaths['docker-autostart'],
'alpha 5\n',
'utf8'
);
expect(writeFileMock).not.toHaveBeenCalledWith(
mockPaths['docker-userprefs'],
expect.any(String),
expect.any(String)
);
});
it('should remove auto-start file when no containers are configured', async () => {
await service.updateAutostartConfiguration(
[{ id: 'c1', autoStart: false, wait: 30 }],
mockContainers,
{ persistUserPreferences: true }
);
expect(unlinkMock).toHaveBeenCalledWith(mockPaths['docker-autostart']);
expect(writeFileMock).toHaveBeenCalledWith(
mockPaths['docker-userprefs'],
'0="alpha"\n',
'utf8'
);
});
});
it('should sanitize autostart wait values', () => {
expect(service.sanitizeAutoStartWait(null)).toBe(0);
expect(service.sanitizeAutoStartWait(undefined)).toBe(0);
expect(service.sanitizeAutoStartWait(10)).toBe(10);
expect(service.sanitizeAutoStartWait(-5)).toBe(0);
expect(service.sanitizeAutoStartWait(NaN)).toBe(0);
});
});

View File

@@ -0,0 +1,175 @@
import { Injectable, Logger } from '@nestjs/common';
import { readFile, unlink, writeFile } from 'fs/promises';
import Docker from 'dockerode';
import { getters } from '@app/store/index.js';
import {
DockerAutostartEntryInput,
DockerContainer,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
export interface AutoStartEntry {
name: string;
wait: number;
order: number;
}
@Injectable()
export class DockerAutostartService {
private readonly logger = new Logger(DockerAutostartService.name);
private autoStartEntries: AutoStartEntry[] = [];
private autoStartEntryByName = new Map<string, AutoStartEntry>();
public getAutoStartEntry(name: string): AutoStartEntry | undefined {
return this.autoStartEntryByName.get(name);
}
public setAutoStartEntries(entries: AutoStartEntry[]) {
this.autoStartEntries = entries;
this.autoStartEntryByName = new Map(entries.map((entry) => [entry.name, entry]));
}
public parseAutoStartEntries(rawContent: string): AutoStartEntry[] {
const lines = rawContent
.split('\n')
.map((line) => line.trim())
.filter((line) => line.length > 0);
const seen = new Set<string>();
const entries: AutoStartEntry[] = [];
lines.forEach((line, index) => {
const [name, waitRaw] = line.split(/\s+/);
if (!name || seen.has(name)) {
return;
}
const parsedWait = Number.parseInt(waitRaw ?? '', 10);
const wait = Number.isFinite(parsedWait) && parsedWait > 0 ? parsedWait : 0;
entries.push({
name,
wait,
order: index,
});
seen.add(name);
});
return entries;
}
public async refreshAutoStartEntries(): Promise<void> {
const autoStartPath = getters.paths()['docker-autostart'];
const raw = await readFile(autoStartPath, 'utf8')
.then((file) => file.toString())
.catch(() => '');
const entries = this.parseAutoStartEntries(raw);
this.setAutoStartEntries(entries);
}
public sanitizeAutoStartWait(wait?: number | null): number {
if (wait === null || wait === undefined) return 0;
const coerced = Number.isInteger(wait) ? wait : Number.parseInt(String(wait), 10);
if (!Number.isFinite(coerced) || coerced < 0) {
return 0;
}
return coerced;
}
public getContainerPrimaryName(container: Docker.ContainerInfo | DockerContainer): string | null {
const names =
'Names' in container ? container.Names : 'names' in container ? container.names : undefined;
const firstName = names?.[0] ?? '';
return firstName ? firstName.replace(/^\//, '') : null;
}
private buildUserPreferenceLines(
entries: DockerAutostartEntryInput[],
containerById: Map<string, DockerContainer>
): string[] {
const seenNames = new Set<string>();
const lines: string[] = [];
for (const entry of entries) {
const container = containerById.get(entry.id);
if (!container) {
continue;
}
const primaryName = this.getContainerPrimaryName(container);
if (!primaryName || seenNames.has(primaryName)) {
continue;
}
lines.push(`${lines.length}="${primaryName}"`);
seenNames.add(primaryName);
}
return lines;
}
/**
* Docker auto start file
*
* @note Doesn't exist if array is offline.
* @see https://github.com/limetech/webgui/issues/502#issue-480992547
*/
public async getAutoStarts(): Promise<string[]> {
await this.refreshAutoStartEntries();
return this.autoStartEntries.map((entry) => entry.name);
}
public async updateAutostartConfiguration(
entries: DockerAutostartEntryInput[],
containers: DockerContainer[],
options?: { persistUserPreferences?: boolean }
): Promise<void> {
const containerById = new Map(containers.map((container) => [container.id, container]));
const paths = getters.paths();
const autoStartPath = paths['docker-autostart'];
const userPrefsPath = paths['docker-userprefs'];
const persistUserPreferences = Boolean(options?.persistUserPreferences);
const lines: string[] = [];
const seenNames = new Set<string>();
for (const entry of entries) {
if (!entry.autoStart) {
continue;
}
const container = containerById.get(entry.id);
if (!container) {
continue;
}
const primaryName = this.getContainerPrimaryName(container);
if (!primaryName || seenNames.has(primaryName)) {
continue;
}
const wait = this.sanitizeAutoStartWait(entry.wait);
lines.push(wait > 0 ? `${primaryName} ${wait}` : primaryName);
seenNames.add(primaryName);
}
if (lines.length) {
await writeFile(autoStartPath, `${lines.join('\n')}\n`, 'utf8');
} else {
await unlink(autoStartPath).catch((error: NodeJS.ErrnoException) => {
if (error.code !== 'ENOENT') {
throw error;
}
});
}
if (persistUserPreferences) {
const userPrefsLines = this.buildUserPreferenceLines(entries, containerById);
if (userPrefsLines.length) {
await writeFile(userPrefsPath, `${userPrefsLines.join('\n')}\n`, 'utf8');
} else {
await unlink(userPrefsPath).catch((error: NodeJS.ErrnoException) => {
if (error.code !== 'ENOENT') {
throw error;
}
});
}
}
await this.refreshAutoStartEntries();
}
}
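For reference, the autostart file (whatever path getters.paths()['docker-autostart'] resolves to) is a plain list with one container name per line, optionally followed by a numeric wait value; line order is the start order. A minimal standalone sketch of the same parsing rules, with invented container names:

// Hypothetical contents of the docker-autostart file, for illustration only.
const sample = 'plex 30\nnginx\nhome-assistant 10\n';

// Same rules as parseAutoStartEntries: one entry per non-empty line,
// "<name> [wait]", missing or non-positive waits coerced to 0.
const entries = sample
  .split('\n')
  .map((line) => line.trim())
  .filter(Boolean)
  .map((line, order) => {
    const [name, waitRaw] = line.split(/\s+/);
    const wait = Number.parseInt(waitRaw ?? '', 10);
    return { name, wait: Number.isFinite(wait) && wait > 0 ? wait : 0, order };
  });

console.log(entries);
// [ { name: 'plex', wait: 30, order: 0 },
//   { name: 'nginx', wait: 0, order: 1 },
//   { name: 'home-assistant', wait: 10, order: 2 } ]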

View File

@@ -1,7 +1,22 @@
import { Field, ObjectType } from '@nestjs/graphql';
import { IsArray, IsObject, IsOptional, IsString } from 'class-validator';
import { GraphQLJSON } from 'graphql-scalars';
@ObjectType()
export class DockerConfig {
@Field(() => String)
@IsString()
updateCheckCronSchedule!: string;
@Field(() => GraphQLJSON, { nullable: true })
@IsOptional()
@IsObject()
templateMappings?: Record<string, string | null>;
@Field(() => [String], { nullable: true })
@IsOptional()
@IsArray()
@IsString({ each: true })
skipTemplatePaths?: string[];
}
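For illustration, a value satisfying this shape might look like the sketch below. The keys, paths, and container names are invented; the actual file is managed by DockerConfigService, and the template scanner later in this PR writes templateMappings (container name to template path, or null when a scan found no match) and reads skipTemplatePaths.

const exampleConfig: {
  updateCheckCronSchedule: string;
  templateMappings?: Record<string, string | null>;
  skipTemplatePaths?: string[];
} = {
  updateCheckCronSchedule: '0 6 * * *',
  // Container name -> matched template path; null marks "scanned, no match found".
  templateMappings: {
    redis: '/boot/config/plugins/dockerMan/templates-user/my-redis.xml',
    'some-custom-container': null,
  },
  // Names the template scanner should skip (the scanner specs pass container names here).
  skipTemplatePaths: [],
};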

View File

@@ -31,6 +31,8 @@ export class DockerConfigService extends ConfigFilePersister<DockerConfig> {
defaultConfig(): DockerConfig {
return {
updateCheckCronSchedule: CronExpression.EVERY_DAY_AT_6AM,
templateMappings: {},
skipTemplatePaths: [],
};
}
@@ -40,6 +42,7 @@ export class DockerConfigService extends ConfigFilePersister<DockerConfig> {
if (!cronExpression.valid) {
throw new AppError(`Cron expression not supported: ${dockerConfig.updateCheckCronSchedule}`);
}
return dockerConfig;
}
}

View File

@@ -1,18 +1,31 @@
import { Logger } from '@nestjs/common';
import { Mutation, Parent, ResolveField, Resolver } from '@nestjs/graphql';
import { Args, Mutation, Parent, ResolveField, Resolver } from '@nestjs/graphql';
import { Resource } from '@unraid/shared/graphql.model.js';
import { AuthAction, UsePermissions } from '@unraid/shared/use-permissions.directive.js';
import { AppError } from '@app/core/errors/app-error.js';
import { getLanIp } from '@app/core/utils/network.js';
import { UseFeatureFlag } from '@app/unraid-api/decorators/use-feature-flag.decorator.js';
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
import { DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerTailscaleService } from '@app/unraid-api/graph/resolvers/docker/docker-tailscale.service.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import {
ContainerPort,
ContainerPortType,
ContainerState,
DockerContainer,
TailscaleStatus,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
@Resolver(() => DockerContainer)
export class DockerContainerResolver {
private readonly logger = new Logger(DockerContainerResolver.name);
constructor(private readonly dockerManifestService: DockerManifestService) {}
constructor(
private readonly dockerManifestService: DockerManifestService,
private readonly dockerTemplateScannerService: DockerTemplateScannerService,
private readonly dockerTailscaleService: DockerTailscaleService
) {}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
@@ -39,6 +52,150 @@ export class DockerContainerResolver {
return this.dockerManifestService.isRebuildReady(container.hostConfig?.networkMode);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, { nullable: true })
public async projectUrl(@Parent() container: DockerContainer) {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
return details?.project || null;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, { nullable: true })
public async registryUrl(@Parent() container: DockerContainer) {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
return details?.registry || null;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, { nullable: true })
public async supportUrl(@Parent() container: DockerContainer) {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
return details?.support || null;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, { nullable: true })
public async iconUrl(@Parent() container: DockerContainer) {
if (container.labels?.['net.unraid.docker.icon']) {
return container.labels['net.unraid.docker.icon'];
}
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
return details?.icon || null;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, { nullable: true, description: 'Shell to use for console access' })
public async shell(@Parent() container: DockerContainer): Promise<string | null> {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
return details?.shell || null;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => [ContainerPort], {
nullable: true,
description: 'Port mappings from template (used when container is not running)',
})
public async templatePorts(@Parent() container: DockerContainer): Promise<ContainerPort[] | null> {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
if (!details?.ports?.length) return null;
return details.ports.map((port) => ({
privatePort: port.privatePort,
publicPort: port.publicPort,
type: port.type.toUpperCase() === 'UDP' ? ContainerPortType.UDP : ContainerPortType.TCP,
}));
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => String, {
nullable: true,
description: 'Resolved WebUI URL from template',
})
public async webUiUrl(@Parent() container: DockerContainer): Promise<string | null> {
if (!container.templatePath) return null;
const details = await this.dockerTemplateScannerService.getTemplateDetails(
container.templatePath
);
if (!details?.webUi) return null;
const lanIp = getLanIp();
if (!lanIp) return null;
let resolvedUrl = details.webUi;
// Replace [IP] placeholder with LAN IP
resolvedUrl = resolvedUrl.replace(/\[IP\]/g, lanIp);
// Replace [PORT:XXXX] placeholder
const portMatch = resolvedUrl.match(/\[PORT:(\d+)\]/);
if (portMatch) {
const templatePort = parseInt(portMatch[1], 10);
let resolvedPort = templatePort;
// Check if this port is mapped to a public port
if (container.ports) {
for (const port of container.ports) {
if (port.privatePort === templatePort && port.publicPort) {
resolvedPort = port.publicPort;
break;
}
}
}
resolvedUrl = resolvedUrl.replace(/\[PORT:\d+\]/g, String(resolvedPort));
}
return resolvedUrl;
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
@@ -48,4 +205,65 @@ export class DockerContainerResolver {
public async refreshDockerDigests() {
return this.dockerManifestService.refreshDigests();
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => Boolean, { description: 'Whether Tailscale is enabled for this container' })
public tailscaleEnabled(@Parent() container: DockerContainer): boolean {
// Check for Tailscale hostname label (set when hostname is explicitly configured)
if (container.labels?.['net.unraid.docker.tailscale.hostname']) {
return true;
}
// Check for Tailscale hook mount - look for the source path which is an Unraid system path
// The hook is mounted from /usr/local/share/docker/tailscale_container_hook
const mounts = container.mounts ?? [];
return mounts.some((mount: Record<string, unknown>) => {
const source = (mount?.Source ?? mount?.source) as string | undefined;
return source?.includes('tailscale_container_hook');
});
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => TailscaleStatus, {
nullable: true,
description: 'Tailscale status for this container (fetched via docker exec)',
})
public async tailscaleStatus(
@Parent() container: DockerContainer,
@Args('forceRefresh', { type: () => Boolean, nullable: true, defaultValue: false })
forceRefresh: boolean
): Promise<TailscaleStatus | null> {
// First check if Tailscale is enabled
if (!this.tailscaleEnabled(container)) {
return null;
}
const labels = container.labels ?? {};
const hostname = labels['net.unraid.docker.tailscale.hostname'];
if (container.state !== ContainerState.RUNNING) {
return {
online: false,
hostname: hostname || undefined,
isExitNode: false,
updateAvailable: false,
keyExpired: false,
};
}
const containerName = container.names[0];
if (!containerName) {
return null;
}
return this.dockerTailscaleService.getTailscaleStatus(containerName, labels, forceRefresh);
}
}
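To make the webUiUrl resolution above concrete: the LAN IP is substituted for every [IP] placeholder, and the private port in a [PORT:xxxx] placeholder is swapped for the container's published port when a matching mapping exists. A standalone sketch with made-up values:

// Hypothetical template value and runtime data, for illustration only.
const templateWebUi = 'http://[IP]:[PORT:8080]/admin';
const lanIp = '192.168.1.50';
const ports = [{ privatePort: 8080, publicPort: 18080 }];

let resolved = templateWebUi.replace(/\[IP\]/g, lanIp);

const portMatch = resolved.match(/\[PORT:(\d+)\]/);
if (portMatch) {
  const templatePort = Number.parseInt(portMatch[1], 10);
  // Prefer the host-published port mapped to the template's private port.
  const mapped = ports.find((p) => p.privatePort === templatePort && p.publicPort);
  resolved = resolved.replace(/\[PORT:\d+\]/g, String(mapped?.publicPort ?? templatePort));
}

console.log(resolved); // http://192.168.1.50:18080/admin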

View File

@@ -1,8 +1,6 @@
import { Logger } from '@nestjs/common';
import { Test, TestingModule } from '@nestjs/testing';
import { PassThrough, Readable } from 'stream';
import { PassThrough } from 'stream';
import Docker from 'dockerode';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
// Import pubsub for use in tests
@@ -51,6 +49,14 @@ vi.mock('@app/core/pubsub.js', () => ({
},
}));
// Mock the docker client utility - this is what the service actually uses
const mockDockerClientInstance = {
getEvents: vi.fn(),
};
vi.mock('./utils/docker-client.js', () => ({
getDockerClient: vi.fn(() => mockDockerClientInstance),
}));
// Mock DockerService
vi.mock('./docker.service.js', () => ({
DockerService: vi.fn().mockImplementation(() => ({
@@ -63,20 +69,13 @@ vi.mock('./docker.service.js', () => ({
describe('DockerEventService', () => {
let service: DockerEventService;
let dockerService: DockerService;
let mockDockerClient: Docker;
let mockEventStream: PassThrough;
let mockLogger: Logger;
let module: TestingModule;
beforeEach(async () => {
// Create a mock Docker client
mockDockerClient = {
getEvents: vi.fn(),
} as unknown as Docker;
// Create a mock Docker service *instance*
const mockDockerServiceImpl = {
getDockerClient: vi.fn().mockReturnValue(mockDockerClient),
getDockerClient: vi.fn(),
clearContainerCache: vi.fn(),
getAppInfo: vi.fn().mockResolvedValue({ info: { apps: { installed: 1, running: 1 } } }),
};
@@ -85,12 +84,7 @@ describe('DockerEventService', () => {
mockEventStream = new PassThrough();
// Set up the mock Docker client to return our mock event stream
vi.spyOn(mockDockerClient, 'getEvents').mockResolvedValue(
mockEventStream as unknown as Readable
);
// Create a mock logger
mockLogger = new Logger(DockerEventService.name) as Logger;
mockDockerClientInstance.getEvents = vi.fn().mockResolvedValue(mockEventStream);
// Use the mock implementation in the testing module
module = await Test.createTestingModule({

View File

@@ -7,6 +7,7 @@ import Docker from 'dockerode';
import { pubsub, PUBSUB_CHANNEL } from '@app/core/pubsub.js';
import { getters } from '@app/store/index.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { getDockerClient } from '@app/unraid-api/graph/resolvers/docker/utils/docker-client.js';
enum DockerEventAction {
DIE = 'die',
@@ -66,7 +67,7 @@ export class DockerEventService implements OnModuleDestroy, OnModuleInit {
];
constructor(private readonly dockerService: DockerService) {
this.client = this.dockerService.getDockerClient();
this.client = getDockerClient();
}
async onModuleInit() {

View File

@@ -0,0 +1,144 @@
import { Test, TestingModule } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { AppError } from '@app/core/errors/app-error.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerContainerLogs } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
// Mock dependencies
const mockExeca = vi.fn();
vi.mock('execa', () => ({
execa: (cmd: string, args: string[]) => mockExeca(cmd, args),
}));
const { mockDockerInstance, mockGetContainer, mockContainer } = vi.hoisted(() => {
const mockContainer = {
inspect: vi.fn(),
};
const mockGetContainer = vi.fn().mockReturnValue(mockContainer);
const mockDockerInstance = {
getContainer: mockGetContainer,
};
return { mockDockerInstance, mockGetContainer, mockContainer };
});
vi.mock('@app/unraid-api/graph/resolvers/docker/utils/docker-client.js', () => ({
getDockerClient: vi.fn().mockReturnValue(mockDockerInstance),
}));
const { statMock } = vi.hoisted(() => ({
statMock: vi.fn().mockResolvedValue({ size: 0 }),
}));
vi.mock('fs/promises', () => ({
stat: statMock,
}));
describe('DockerLogService', () => {
let service: DockerLogService;
beforeEach(async () => {
mockExeca.mockReset();
mockGetContainer.mockReset();
mockGetContainer.mockReturnValue(mockContainer);
mockContainer.inspect.mockReset();
statMock.mockReset();
statMock.mockResolvedValue({ size: 0 });
const module: TestingModule = await Test.createTestingModule({
providers: [DockerLogService],
}).compile();
service = module.get<DockerLogService>(DockerLogService);
});
it('should be defined', () => {
expect(service).toBeDefined();
});
describe('getContainerLogSizes', () => {
it('should get container log sizes using dockerode inspect', async () => {
mockContainer.inspect.mockResolvedValue({
LogPath: '/var/lib/docker/containers/id/id-json.log',
});
statMock.mockResolvedValue({ size: 1024 });
const sizes = await service.getContainerLogSizes(['test-container']);
expect(mockGetContainer).toHaveBeenCalledWith('test-container');
expect(mockContainer.inspect).toHaveBeenCalled();
expect(statMock).toHaveBeenCalledWith('/var/lib/docker/containers/id/id-json.log');
expect(sizes.get('test-container')).toBe(1024);
});
it('should return 0 for missing log path', async () => {
mockContainer.inspect.mockResolvedValue({}); // No LogPath
const sizes = await service.getContainerLogSizes(['test-container']);
expect(sizes.get('test-container')).toBe(0);
});
it('should handle inspect errors gracefully', async () => {
mockContainer.inspect.mockRejectedValue(new Error('Inspect failed'));
const sizes = await service.getContainerLogSizes(['test-container']);
expect(sizes.get('test-container')).toBe(0);
});
});
describe('getContainerLogs', () => {
it('should fetch logs via docker CLI', async () => {
mockExeca.mockResolvedValue({ stdout: '2023-01-01T00:00:00Z Log message\n' });
const result = await service.getContainerLogs('test-id');
expect(mockExeca).toHaveBeenCalledWith('docker', [
'logs',
'--timestamps',
'--tail',
'200',
'test-id',
]);
expect(result.lines).toHaveLength(1);
expect(result.lines[0].message).toBe('Log message');
});
it('should respect tail option', async () => {
mockExeca.mockResolvedValue({ stdout: '' });
await service.getContainerLogs('test-id', { tail: 50 });
expect(mockExeca).toHaveBeenCalledWith('docker', [
'logs',
'--timestamps',
'--tail',
'50',
'test-id',
]);
});
it('should respect since option', async () => {
mockExeca.mockResolvedValue({ stdout: '' });
const since = new Date('2023-01-01T00:00:00Z');
await service.getContainerLogs('test-id', { since });
expect(mockExeca).toHaveBeenCalledWith('docker', [
'logs',
'--timestamps',
'--tail',
'200',
'--since',
since.toISOString(),
'test-id',
]);
});
it('should throw AppError on execa failure', async () => {
mockExeca.mockRejectedValue(new Error('Docker error'));
await expect(service.getContainerLogs('test-id')).rejects.toThrow(AppError);
});
});
});

View File

@@ -0,0 +1,149 @@
import { Injectable, Logger } from '@nestjs/common';
import { stat } from 'fs/promises';
import type { ExecaError } from 'execa';
import { execa } from 'execa';
import { AppError } from '@app/core/errors/app-error.js';
import {
DockerContainerLogLine,
DockerContainerLogs,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { getDockerClient } from '@app/unraid-api/graph/resolvers/docker/utils/docker-client.js';
@Injectable()
export class DockerLogService {
private readonly logger = new Logger(DockerLogService.name);
private readonly client = getDockerClient();
private static readonly DEFAULT_LOG_TAIL = 200;
private static readonly MAX_LOG_TAIL = 2000;
public async getContainerLogSizes(containerNames: string[]): Promise<Map<string, number>> {
const logSizes = new Map<string, number>();
if (!Array.isArray(containerNames) || containerNames.length === 0) {
return logSizes;
}
for (const rawName of containerNames) {
const normalized = (rawName ?? '').replace(/^\//, '');
if (!normalized) {
logSizes.set(normalized, 0);
continue;
}
try {
const container = this.client.getContainer(normalized);
const info = await container.inspect();
const logPath = info.LogPath;
if (!logPath || typeof logPath !== 'string' || !logPath.length) {
logSizes.set(normalized, 0);
continue;
}
const stats = await stat(logPath).catch(() => null);
logSizes.set(normalized, stats?.size ?? 0);
} catch (error) {
const message =
error instanceof Error ? error.message : String(error ?? 'unknown error');
this.logger.debug(
`Failed to determine log size for container ${normalized}: ${message}`
);
logSizes.set(normalized, 0);
}
}
return logSizes;
}
public async getContainerLogs(
id: string,
options?: { since?: Date | null; tail?: number | null }
): Promise<DockerContainerLogs> {
const normalizedId = (id ?? '').trim();
if (!normalizedId) {
throw new AppError('Container id is required to fetch logs.', 400);
}
const tail = this.normalizeLogTail(options?.tail);
const args = ['logs', '--timestamps', '--tail', String(tail)];
const sinceIso = options?.since instanceof Date ? options.since.toISOString() : null;
if (sinceIso) {
args.push('--since', sinceIso);
}
args.push(normalizedId);
try {
const { stdout } = await execa('docker', args);
const lines = this.parseDockerLogOutput(stdout);
const cursor =
lines.length > 0 ? lines[lines.length - 1].timestamp : (options?.since ?? null);
return {
containerId: normalizedId,
lines,
cursor: cursor ?? undefined,
};
} catch (error: unknown) {
const execaError = error as ExecaError;
const stderr = typeof execaError?.stderr === 'string' ? execaError.stderr.trim() : '';
const message = stderr || execaError?.message || 'Unknown error';
this.logger.error(
`Failed to fetch logs for container ${normalizedId}: ${message}`,
execaError
);
throw new AppError(`Failed to fetch logs for container ${normalizedId}.`);
}
}
private normalizeLogTail(tail?: number | null): number {
if (typeof tail !== 'number' || Number.isNaN(tail)) {
return DockerLogService.DEFAULT_LOG_TAIL;
}
const coerced = Math.floor(tail);
if (!Number.isFinite(coerced) || coerced <= 0) {
return DockerLogService.DEFAULT_LOG_TAIL;
}
return Math.min(coerced, DockerLogService.MAX_LOG_TAIL);
}
private parseDockerLogOutput(output: string): DockerContainerLogLine[] {
if (!output) {
return [];
}
return output
.split(/\r?\n/g)
.map((line) => line.trim())
.filter((line) => line.length > 0)
.map((line) => this.parseDockerLogLine(line))
.filter((entry): entry is DockerContainerLogLine => Boolean(entry));
}
private parseDockerLogLine(line: string): DockerContainerLogLine | null {
const trimmed = line.trim();
if (!trimmed.length) {
return null;
}
const firstSpaceIndex = trimmed.indexOf(' ');
if (firstSpaceIndex === -1) {
return {
timestamp: new Date(),
message: trimmed,
};
}
const potentialTimestamp = trimmed.slice(0, firstSpaceIndex);
const message = trimmed.slice(firstSpaceIndex + 1);
const parsedTimestamp = new Date(potentialTimestamp);
if (Number.isNaN(parsedTimestamp.getTime())) {
return {
timestamp: new Date(),
message: trimmed,
};
}
return {
timestamp: parsedTimestamp,
message,
};
}
}
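The parser above expects `docker logs --timestamps` output, where each line starts with an RFC 3339 timestamp followed by a space and the message; lines that do not parse fall back to the current time. A minimal sketch over an invented line:

// Example line in the shape produced by `docker logs --timestamps` (contents invented).
const line = '2023-01-01T00:00:00Z Server listening on :8080';

const firstSpace = line.indexOf(' ');
const timestamp = new Date(line.slice(0, firstSpace));
const message = line.slice(firstSpace + 1);

console.log(timestamp.toISOString()); // 2023-01-01T00:00:00.000Z
console.log(message);                 // Server listening on :8080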

View File

@@ -16,6 +16,14 @@ export class DockerManifestService {
return this.dockerPhpService.refreshDigestsViaPhp();
});
/**
* Reads the cached update status file and returns the parsed contents.
* Exposed so other services can reuse the parsed data when evaluating many containers.
*/
async getCachedUpdateStatuses(): Promise<Record<string, CachedStatusEntry>> {
return this.dockerPhpService.readCachedUpdateStatus();
}
/**
* Recomputes local/remote docker container digests and writes them to /var/lib/docker/unraid-update-status.json
* @param mutex - Optional mutex to use for the operation. If not provided, a default mutex will be used.
@@ -41,7 +49,22 @@ export class DockerManifestService {
cacheData ??= await this.dockerPhpService.readCachedUpdateStatus();
const containerData = cacheData[taggedRef];
if (!containerData) return null;
return containerData.status?.toLowerCase() === 'true';
const normalize = (digest?: string | null) => {
const value = digest?.trim().toLowerCase();
return value && value !== 'undef' ? value : null;
};
const localDigest = normalize(containerData.local);
const remoteDigest = normalize(containerData.remote);
if (localDigest && remoteDigest) {
return localDigest !== remoteDigest;
}
const status = containerData.status?.toLowerCase();
if (status === 'true') return true;
if (status === 'false') return false;
return null;
}
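To spell out the fallback order introduced above: when both digests in a cached update-status entry are usable (non-empty and not 'undef'), update availability is simply their inequality; otherwise the legacy status string decides; anything else yields null (unknown). A small sketch over hypothetical cache entries:

// Hypothetical entries mirroring the cached update-status shape used above.
type CacheEntry = { local?: string | null; remote?: string | null; status?: string };

const isUpdateAvailable = (entry: CacheEntry): boolean | null => {
  const normalize = (digest?: string | null) => {
    const value = digest?.trim().toLowerCase();
    return value && value !== 'undef' ? value : null;
  };
  const local = normalize(entry.local);
  const remote = normalize(entry.remote);
  if (local && remote) return local !== remote;
  if (entry.status?.toLowerCase() === 'true') return true;
  if (entry.status?.toLowerCase() === 'false') return false;
  return null;
};

console.log(isUpdateAvailable({ local: 'sha256:aaa', remote: 'sha256:bbb' })); // true
console.log(isUpdateAvailable({ local: 'undef', remote: 'sha256:bbb', status: 'false' })); // false
console.log(isUpdateAvailable({}));                                           // null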
/**

View File

@@ -0,0 +1,89 @@
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Test, TestingModule } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
const { mockDockerInstance, mockListNetworks } = vi.hoisted(() => {
const mockListNetworks = vi.fn();
const mockDockerInstance = {
listNetworks: mockListNetworks,
};
return { mockDockerInstance, mockListNetworks };
});
vi.mock('@app/unraid-api/graph/resolvers/docker/utils/docker-client.js', () => ({
getDockerClient: vi.fn().mockReturnValue(mockDockerInstance),
}));
const mockCacheManager = {
get: vi.fn(),
set: vi.fn(),
};
describe('DockerNetworkService', () => {
let service: DockerNetworkService;
beforeEach(async () => {
mockListNetworks.mockReset();
mockCacheManager.get.mockReset();
mockCacheManager.set.mockReset();
const module: TestingModule = await Test.createTestingModule({
providers: [
DockerNetworkService,
{
provide: CACHE_MANAGER,
useValue: mockCacheManager,
},
],
}).compile();
service = module.get<DockerNetworkService>(DockerNetworkService);
});
it('should be defined', () => {
expect(service).toBeDefined();
});
describe('getNetworks', () => {
it('should return cached networks if available and not skipped', async () => {
const cached = [{ id: 'net1', name: 'test-net' }];
mockCacheManager.get.mockResolvedValue(cached);
const result = await service.getNetworks({ skipCache: false });
expect(result).toEqual(cached);
expect(mockListNetworks).not.toHaveBeenCalled();
});
it('should fetch networks from docker if cache skipped', async () => {
const rawNetworks = [
{
Id: 'net1',
Name: 'test-net',
Driver: 'bridge',
},
];
mockListNetworks.mockResolvedValue(rawNetworks);
const result = await service.getNetworks({ skipCache: true });
expect(result).toHaveLength(1);
expect(result[0].id).toBe('net1');
expect(mockListNetworks).toHaveBeenCalled();
expect(mockCacheManager.set).toHaveBeenCalledWith(
DockerNetworkService.NETWORK_CACHE_KEY,
expect.anything(),
expect.anything()
);
});
it('should fetch networks from docker if cache miss', async () => {
mockCacheManager.get.mockResolvedValue(undefined);
mockListNetworks.mockResolvedValue([]);
await service.getNetworks({ skipCache: false });
expect(mockListNetworks).toHaveBeenCalled();
});
});
});

View File

@@ -0,0 +1,69 @@
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Inject, Injectable, Logger } from '@nestjs/common';
import { type Cache } from 'cache-manager';
import { catchHandlers } from '@app/core/utils/misc/catch-handlers.js';
import { DockerNetwork } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { getDockerClient } from '@app/unraid-api/graph/resolvers/docker/utils/docker-client.js';
interface NetworkListingOptions {
skipCache: boolean;
}
@Injectable()
export class DockerNetworkService {
private readonly logger = new Logger(DockerNetworkService.name);
private readonly client = getDockerClient();
public static readonly NETWORK_CACHE_KEY = 'docker_networks';
private static readonly CACHE_TTL_SECONDS = 60;
constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}
/**
* Get all Docker networks
* @returns All Docker networks on the system, both active and inactive.
*/
public async getNetworks({ skipCache }: NetworkListingOptions): Promise<DockerNetwork[]> {
if (!skipCache) {
const cachedNetworks = await this.cacheManager.get<DockerNetwork[]>(
DockerNetworkService.NETWORK_CACHE_KEY
);
if (cachedNetworks) {
this.logger.debug('Using docker network cache');
return cachedNetworks;
}
}
this.logger.debug('Updating docker network cache');
const rawNetworks = await this.client.listNetworks().catch(catchHandlers.docker);
const networks = rawNetworks.map(
(network) =>
({
name: network.Name || '',
id: network.Id || '',
created: network.Created || '',
scope: network.Scope || '',
driver: network.Driver || '',
enableIPv6: network.EnableIPv6 || false,
ipam: network.IPAM || {},
internal: network.Internal || false,
attachable: network.Attachable || false,
ingress: network.Ingress || false,
configFrom: network.ConfigFrom || {},
configOnly: network.ConfigOnly || false,
containers: network.Containers || {},
options: network.Options || {},
labels: network.Labels || {},
}) as DockerNetwork
);
await this.cacheManager.set(
DockerNetworkService.NETWORK_CACHE_KEY,
networks,
DockerNetworkService.CACHE_TTL_SECONDS * 1000
);
return networks;
}
}

View File

@@ -0,0 +1,84 @@
import { Test, TestingModule } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import {
ContainerPortType,
DockerContainer,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
vi.mock('@app/core/utils/network.js', () => ({
getLanIp: vi.fn().mockReturnValue('192.168.1.100'),
}));
describe('DockerPortService', () => {
let service: DockerPortService;
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [DockerPortService],
}).compile();
service = module.get<DockerPortService>(DockerPortService);
});
it('should be defined', () => {
expect(service).toBeDefined();
});
describe('deduplicateContainerPorts', () => {
it('should deduplicate ports', () => {
const ports = [
{ PrivatePort: 80, PublicPort: 80, Type: 'tcp' },
{ PrivatePort: 80, PublicPort: 80, Type: 'tcp' },
{ PrivatePort: 443, PublicPort: 443, Type: 'tcp' },
];
// @ts-expect-error - types are loosely mocked
const result = service.deduplicateContainerPorts(ports);
expect(result).toHaveLength(2);
});
});
describe('calculateConflicts', () => {
it('should detect port conflicts', () => {
const containers = [
{
id: 'c1',
names: ['/web1'],
ports: [{ privatePort: 80, type: ContainerPortType.TCP }],
},
{
id: 'c2',
names: ['/web2'],
ports: [{ privatePort: 80, type: ContainerPortType.TCP }],
},
] as DockerContainer[];
const result = service.calculateConflicts(containers);
expect(result.containerPorts).toHaveLength(1);
expect(result.containerPorts[0].privatePort).toBe(80);
expect(result.containerPorts[0].containers).toHaveLength(2);
});
it('should detect lan port conflicts', () => {
const containers = [
{
id: 'c1',
names: ['/web1'],
ports: [{ publicPort: 8080, type: ContainerPortType.TCP }],
},
{
id: 'c2',
names: ['/web2'],
ports: [{ publicPort: 8080, type: ContainerPortType.TCP }],
},
] as DockerContainer[];
const result = service.calculateConflicts(containers);
expect(result.lanPorts).toHaveLength(1);
expect(result.lanPorts[0].publicPort).toBe(8080);
expect(result.lanPorts[0].containers).toHaveLength(2);
});
});
});

View File

@@ -0,0 +1,178 @@
import { Injectable } from '@nestjs/common';
import Docker from 'dockerode';
import { getLanIp } from '@app/core/utils/network.js';
import {
ContainerPortType,
DockerContainer,
DockerContainerPortConflict,
DockerLanPortConflict,
DockerPortConflictContainer,
DockerPortConflicts,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
@Injectable()
export class DockerPortService {
public deduplicateContainerPorts(
ports: Docker.ContainerInfo['Ports'] | undefined
): Docker.ContainerInfo['Ports'] {
if (!Array.isArray(ports)) {
return [];
}
const seen = new Set<string>();
const uniquePorts: Docker.ContainerInfo['Ports'] = [];
for (const port of ports) {
const key = `${port.PrivatePort ?? ''}-${port.PublicPort ?? ''}-${(port.Type ?? '').toLowerCase()}`;
if (seen.has(key)) {
continue;
}
seen.add(key);
uniquePorts.push(port);
}
return uniquePorts;
}
public calculateConflicts(containers: DockerContainer[]): DockerPortConflicts {
return {
containerPorts: this.buildContainerPortConflicts(containers),
lanPorts: this.buildLanPortConflicts(containers),
};
}
private buildPortConflictContainerRef(container: DockerContainer): DockerPortConflictContainer {
const primaryName = this.getContainerPrimaryName(container);
const fallback = container.names?.[0] ?? container.id;
const normalized = typeof fallback === 'string' ? fallback.replace(/^\//, '') : container.id;
return {
id: container.id,
name: primaryName || normalized,
};
}
private getContainerPrimaryName(container: DockerContainer): string | null {
const names = container.names;
const firstName = names?.[0] ?? '';
return firstName ? firstName.replace(/^\//, '') : null;
}
private buildContainerPortConflicts(containers: DockerContainer[]): DockerContainerPortConflict[] {
const groups = new Map<
string,
{
privatePort: number;
type: ContainerPortType;
containers: DockerContainer[];
seen: Set<string>;
}
>();
for (const container of containers) {
if (!Array.isArray(container.ports)) {
continue;
}
for (const port of container.ports) {
if (!port || typeof port.privatePort !== 'number') {
continue;
}
const type = port.type ?? ContainerPortType.TCP;
const key = `${port.privatePort}/${type}`;
let group = groups.get(key);
if (!group) {
group = {
privatePort: port.privatePort,
type,
containers: [],
seen: new Set<string>(),
};
groups.set(key, group);
}
if (group.seen.has(container.id)) {
continue;
}
group.seen.add(container.id);
group.containers.push(container);
}
}
return Array.from(groups.values())
.filter((group) => group.containers.length > 1)
.map((group) => ({
privatePort: group.privatePort,
type: group.type,
containers: group.containers.map((container) =>
this.buildPortConflictContainerRef(container)
),
}))
.sort((a, b) => {
if (a.privatePort !== b.privatePort) {
return a.privatePort - b.privatePort;
}
return a.type.localeCompare(b.type);
});
}
private buildLanPortConflicts(containers: DockerContainer[]): DockerLanPortConflict[] {
const lanIp = getLanIp();
const groups = new Map<
string,
{
lanIpPort: string;
publicPort: number;
type: ContainerPortType;
containers: DockerContainer[];
seen: Set<string>;
}
>();
for (const container of containers) {
if (!Array.isArray(container.ports)) {
continue;
}
for (const port of container.ports) {
if (!port || typeof port.publicPort !== 'number') {
continue;
}
const type = port.type ?? ContainerPortType.TCP;
const lanIpPort = lanIp ? `${lanIp}:${port.publicPort}` : `${port.publicPort}`;
const key = `${lanIpPort}/${type}`;
let group = groups.get(key);
if (!group) {
group = {
lanIpPort,
publicPort: port.publicPort,
type,
containers: [],
seen: new Set<string>(),
};
groups.set(key, group);
}
if (group.seen.has(container.id)) {
continue;
}
group.seen.add(container.id);
group.containers.push(container);
}
}
return Array.from(groups.values())
.filter((group) => group.containers.length > 1)
.map((group) => ({
lanIpPort: group.lanIpPort,
publicPort: group.publicPort,
type: group.type,
containers: group.containers.map((container) =>
this.buildPortConflictContainerRef(container)
),
}))
.sort((a, b) => {
if ((a.publicPort ?? 0) !== (b.publicPort ?? 0)) {
return (a.publicPort ?? 0) - (b.publicPort ?? 0);
}
return a.type.localeCompare(b.type);
});
}
}
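The conflict detection above is essentially a group-by over `privatePort/type` (and `lanIp:publicPort/type` for LAN conflicts), keeping only groups with more than one container. A self-contained sketch of the container-port case with invented containers:

// Invented minimal container shapes, for illustration only.
type MiniContainer = { id: string; name: string; ports: { privatePort: number; type: string }[] };

const containers: MiniContainer[] = [
  { id: 'c1', name: 'web1', ports: [{ privatePort: 80, type: 'tcp' }] },
  { id: 'c2', name: 'web2', ports: [{ privatePort: 80, type: 'tcp' }] },
  { id: 'c3', name: 'db', ports: [{ privatePort: 5432, type: 'tcp' }] },
];

const groups = new Map<string, { privatePort: number; type: string; containers: string[] }>();
for (const container of containers) {
  for (const port of container.ports) {
    const key = `${port.privatePort}/${port.type}`;
    const group = groups.get(key) ?? { privatePort: port.privatePort, type: port.type, containers: [] };
    group.containers.push(container.name);
    groups.set(key, group);
  }
}

const conflicts = [...groups.values()].filter((group) => group.containers.length > 1);
console.log(conflicts); // [ { privatePort: 80, type: 'tcp', containers: [ 'web1', 'web2' ] } ]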

View File

@@ -0,0 +1,117 @@
import { Injectable, Logger, OnModuleDestroy } from '@nestjs/common';
import { createInterface } from 'readline';
import { execa } from 'execa';
import { pubsub, PUBSUB_CHANNEL } from '@app/core/pubsub.js';
import { catchHandlers } from '@app/core/utils/misc/catch-handlers.js';
import { DockerContainerStats } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
@Injectable()
export class DockerStatsService implements OnModuleDestroy {
private readonly logger = new Logger(DockerStatsService.name);
private statsProcess: ReturnType<typeof execa> | null = null;
private readonly STATS_FORMAT =
'{{.ID}};{{.CPUPerc}};{{.MemUsage}};{{.MemPerc}};{{.NetIO}};{{.BlockIO}}';
onModuleDestroy() {
this.stopStatsStream();
}
public startStatsStream() {
if (this.statsProcess) {
return;
}
this.logger.log('Starting docker stats stream');
try {
this.statsProcess = execa('docker', ['stats', '--format', this.STATS_FORMAT, '--no-trunc'], {
all: true,
reject: false, // Don't throw on exit code != 0, handle via parsing/events
});
if (this.statsProcess.stdout) {
const rl = createInterface({
input: this.statsProcess.stdout,
crlfDelay: Infinity,
});
rl.on('line', (line) => {
if (!line.trim()) return;
this.processStatsLine(line);
});
rl.on('error', (err) => {
this.logger.error('Error reading docker stats stream', err);
});
}
if (this.statsProcess.stderr) {
this.statsProcess.stderr.on('data', (data: Buffer) => {
// Log docker stats errors but don't crash
this.logger.debug(`Docker stats stderr: ${data.toString()}`);
});
}
// Handle process exit
this.statsProcess
.then((result) => {
if (result.failed && !result.signal) {
this.logger.error('Docker stats process exited with error', result.shortMessage);
this.stopStatsStream();
}
})
.catch((err) => {
if (!err.killed) {
this.logger.error('Docker stats process ended unexpectedly', err);
this.stopStatsStream();
}
});
} catch (error) {
this.logger.error('Failed to start docker stats', error);
catchHandlers.docker(error as Error);
}
}
public stopStatsStream() {
if (this.statsProcess) {
this.logger.log('Stopping docker stats stream');
this.statsProcess.kill();
this.statsProcess = null;
}
}
private processStatsLine(line: string) {
try {
// format: ID;CPUPerc;MemUsage;MemPerc;NetIO;BlockIO
// Example: 123abcde;0.00%;10MiB / 100MiB;10.00%;1kB / 2kB;0B / 0B
// Remove ANSI escape codes if any (docker stats sometimes includes them)
// eslint-disable-next-line no-control-regex
const cleanLine = line.replace(/\x1B\[[0-9;]*[mK]/g, '');
const parts = cleanLine.split(';');
if (parts.length < 6) return;
const [id, cpuPercStr, memUsage, memPercStr, netIO, blockIO] = parts;
const stats: DockerContainerStats = {
id,
cpuPercent: this.parsePercentage(cpuPercStr),
memUsage,
memPercent: this.parsePercentage(memPercStr),
netIO,
blockIO,
};
pubsub.publish(PUBSUB_CHANNEL.DOCKER_STATS, { dockerContainerStats: stats });
} catch (error) {
this.logger.debug(`Failed to process stats line: ${line}`, error);
}
}
private parsePercentage(value: string): number {
return parseFloat(value.replace('%', '')) || 0;
}
}
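Each line emitted with the STATS_FORMAT above is a semicolon-separated record; the percentage fields are parsed to numbers and the remaining fields are kept as raw strings. A sketch over an invented sample line:

// Invented sample line in the STATS_FORMAT used above.
const line = '123abcde;1.25%;10MiB / 100MiB;10.00%;1kB / 2kB;0B / 0B';

const [id, cpuPercStr, memUsage, memPercStr, netIO, blockIO] = line.split(';');
const parsePercentage = (value: string) => parseFloat(value.replace('%', '')) || 0;

const stats = {
  id,
  cpuPercent: parsePercentage(cpuPercStr),   // 1.25
  memUsage,                                  // '10MiB / 100MiB'
  memPercent: parsePercentage(memPercStr),   // 10
  netIO,                                     // '1kB / 2kB'
  blockIO,                                   // '0B / 0B'
};

console.log(stats);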

View File

@@ -0,0 +1,358 @@
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Inject, Injectable, Logger } from '@nestjs/common';
import { type Cache } from 'cache-manager';
import { TailscaleStatus } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { getDockerClient } from '@app/unraid-api/graph/resolvers/docker/utils/docker-client.js';
interface RawTailscaleStatus {
Self: {
Online: boolean;
DNSName: string;
TailscaleIPs?: string[];
Relay?: string;
PrimaryRoutes?: string[];
ExitNodeOption?: boolean;
KeyExpiry?: string;
};
ExitNodeStatus?: {
Online: boolean;
TailscaleIPs?: string[];
};
Version: string;
BackendState?: string;
AuthURL?: string;
}
interface DerpRegion {
RegionCode: string;
RegionName: string;
}
interface DerpMap {
Regions: Record<string, DerpRegion>;
}
interface TailscaleVersionResponse {
TarballsVersion: string;
}
@Injectable()
export class DockerTailscaleService {
private readonly logger = new Logger(DockerTailscaleService.name);
private readonly docker = getDockerClient();
private static readonly DERP_MAP_CACHE_KEY = 'tailscale_derp_map';
private static readonly VERSION_CACHE_KEY = 'tailscale_latest_version';
private static readonly STATUS_CACHE_PREFIX = 'tailscale_status_';
private static readonly DERP_MAP_TTL = 86400000; // 24 hours in ms
private static readonly VERSION_TTL = 86400000; // 24 hours in ms
private static readonly STATUS_TTL = 30000; // 30 seconds in ms
constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}
async getTailscaleStatus(
containerName: string,
labels: Record<string, string>,
forceRefresh = false
): Promise<TailscaleStatus | null> {
const hostname = labels['net.unraid.docker.tailscale.hostname'];
const webUiTemplate = labels['net.unraid.docker.tailscale.webui'];
const cacheKey = `${DockerTailscaleService.STATUS_CACHE_PREFIX}${containerName}`;
if (forceRefresh) {
await this.cacheManager.del(cacheKey);
} else {
const cached = await this.cacheManager.get<TailscaleStatus>(cacheKey);
if (cached) {
return cached;
}
}
const rawStatus = await this.execTailscaleStatus(containerName);
if (!rawStatus) {
// Don't cache failures - return without caching so next request retries
return {
online: false,
hostname: hostname || undefined,
isExitNode: false,
updateAvailable: false,
keyExpired: false,
};
}
const [derpMap, latestVersion] = await Promise.all([this.getDerpMap(), this.getLatestVersion()]);
const version = rawStatus.Version?.split('-')[0];
const updateAvailable = Boolean(
version && latestVersion && this.isVersionLessThan(version, latestVersion)
);
const dnsName = rawStatus.Self.DNSName;
const actualHostname = dnsName ? dnsName.split('.')[0] : undefined;
let relayName: string | undefined;
if (rawStatus.Self.Relay && derpMap) {
relayName = this.mapRelayToRegion(rawStatus.Self.Relay, derpMap);
}
let keyExpiry: Date | undefined;
let keyExpiryDays: number | undefined;
let keyExpired = false;
if (rawStatus.Self.KeyExpiry) {
keyExpiry = new Date(rawStatus.Self.KeyExpiry);
const now = new Date();
const diffMs = keyExpiry.getTime() - now.getTime();
keyExpiryDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
keyExpired = diffMs < 0;
}
const webUiUrl = webUiTemplate ? this.resolveWebUiUrl(webUiTemplate, rawStatus) : undefined;
const status: TailscaleStatus = {
online: rawStatus.Self.Online,
version,
latestVersion: latestVersion ?? undefined,
updateAvailable,
hostname,
dnsName: dnsName || undefined,
relay: rawStatus.Self.Relay,
relayName,
tailscaleIps: rawStatus.Self.TailscaleIPs,
primaryRoutes: rawStatus.Self.PrimaryRoutes,
isExitNode: Boolean(rawStatus.Self.ExitNodeOption),
exitNodeStatus: rawStatus.ExitNodeStatus
? {
online: rawStatus.ExitNodeStatus.Online,
tailscaleIps: rawStatus.ExitNodeStatus.TailscaleIPs,
}
: undefined,
webUiUrl,
keyExpiry,
keyExpiryDays,
keyExpired,
backendState: rawStatus.BackendState,
authUrl: rawStatus.AuthURL,
};
await this.cacheManager.set(cacheKey, status, DockerTailscaleService.STATUS_TTL);
return status;
}
async getDerpMap(): Promise<DerpMap | null> {
const cached = await this.cacheManager.get<DerpMap>(DockerTailscaleService.DERP_MAP_CACHE_KEY);
if (cached) {
return cached;
}
try {
const response = await fetch('https://login.tailscale.com/derpmap/default', {
signal: AbortSignal.timeout(3000),
});
if (!response.ok) {
this.logger.warn(`Failed to fetch DERP map: ${response.status}`);
return null;
}
const data = (await response.json()) as DerpMap;
await this.cacheManager.set(
DockerTailscaleService.DERP_MAP_CACHE_KEY,
data,
DockerTailscaleService.DERP_MAP_TTL
);
return data;
} catch (error) {
this.logger.warn('Failed to fetch DERP map', error);
return null;
}
}
async getLatestVersion(): Promise<string | null> {
const cached = await this.cacheManager.get<string>(DockerTailscaleService.VERSION_CACHE_KEY);
if (cached) {
return cached;
}
try {
const response = await fetch('https://pkgs.tailscale.com/stable/?mode=json', {
signal: AbortSignal.timeout(3000),
});
if (!response.ok) {
this.logger.warn(`Failed to fetch Tailscale version: ${response.status}`);
return null;
}
const data = (await response.json()) as TailscaleVersionResponse;
const version = data.TarballsVersion;
await this.cacheManager.set(
DockerTailscaleService.VERSION_CACHE_KEY,
version,
DockerTailscaleService.VERSION_TTL
);
return version;
} catch (error) {
this.logger.warn('Failed to fetch Tailscale version', error);
return null;
}
}
private async execTailscaleStatus(containerName: string): Promise<RawTailscaleStatus | null> {
try {
const cleanName = containerName.replace(/^\//, '');
const container = this.docker.getContainer(cleanName);
const exec = await container.exec({
Cmd: ['/bin/sh', '-c', 'tailscale status --json'],
AttachStdout: true,
AttachStderr: true,
});
const stream = await exec.start({ hijack: true, stdin: false });
const output = await this.collectStreamOutput(stream);
this.logger.debug(`Raw tailscale output for ${cleanName}: ${output.substring(0, 500)}...`);
if (!output.trim()) {
this.logger.warn(`Empty tailscale output for ${cleanName}`);
return null;
}
const parsed = JSON.parse(output) as RawTailscaleStatus;
this.logger.debug(
`Parsed tailscale status for ${cleanName}: DNSName=${parsed.Self?.DNSName}, Online=${parsed.Self?.Online}`
);
return parsed;
} catch (error) {
this.logger.debug(`Failed to get Tailscale status for ${containerName}: ${error}`);
return null;
}
}
private async collectStreamOutput(stream: NodeJS.ReadableStream): Promise<string> {
return new Promise((resolve, reject) => {
const chunks: Buffer[] = [];
stream.on('data', (chunk: Buffer) => {
chunks.push(chunk);
});
stream.on('end', () => {
const buffer = Buffer.concat(chunks);
const output = this.demuxDockerStream(buffer);
resolve(output);
});
stream.on('error', reject);
});
}
private demuxDockerStream(buffer: Buffer): string {
// Check if the buffer looks like it starts with JSON (not multiplexed)
// Docker multiplexed streams start with stream type byte (0, 1, or 2)
// followed by 3 zero bytes, then 4-byte size
if (buffer.length > 0) {
const firstChar = buffer.toString('utf8', 0, 1);
if (firstChar === '{' || firstChar === '[') {
// Already plain text/JSON, not multiplexed
return buffer.toString('utf8');
}
}
let offset = 0;
const output: string[] = [];
while (offset < buffer.length) {
if (offset + 8 > buffer.length) break;
const streamType = buffer.readUInt8(offset);
// Valid stream types are 0 (stdin), 1 (stdout), 2 (stderr)
if (streamType > 2) {
// Doesn't look like multiplexed stream, treat as raw
return buffer.toString('utf8');
}
const size = buffer.readUInt32BE(offset + 4);
offset += 8;
if (offset + size > buffer.length) break;
const chunk = buffer.slice(offset, offset + size).toString('utf8');
output.push(chunk);
offset += size;
}
return output.join('');
}
private mapRelayToRegion(relayCode: string, derpMap: DerpMap): string | undefined {
for (const region of Object.values(derpMap.Regions)) {
if (region.RegionCode === relayCode) {
return region.RegionName;
}
}
return undefined;
}
private isVersionLessThan(current: string, latest: string): boolean {
const currentParts = current.split('.').map(Number);
const latestParts = latest.split('.').map(Number);
for (let i = 0; i < Math.max(currentParts.length, latestParts.length); i++) {
const curr = currentParts[i] || 0;
const lat = latestParts[i] || 0;
if (curr < lat) return true;
if (curr > lat) return false;
}
return false;
}
private resolveWebUiUrl(template: string, status: RawTailscaleStatus): string | undefined {
if (!template) return undefined;
let url = template;
const dnsName = status.Self.DNSName?.replace(/\.$/, '');
// Handle [hostname][magicdns] or [hostname] - use MagicDNS name and port 443
if (url.includes('[hostname]')) {
if (dnsName) {
// Replace [hostname][magicdns] with the full DNS name
url = url.replace('[hostname][magicdns]', dnsName);
// Replace standalone [hostname] with the DNS name
url = url.replace('[hostname]', dnsName);
// When using MagicDNS, also replace [IP] with DNS name
url = url.replace(/\[IP\]/g, dnsName);
// When using MagicDNS with Serve/Funnel, port is always 443
url = url.replace(/\[PORT:\d+\]/g, '443');
} else {
// DNS name not available, can't resolve
return undefined;
}
} else if (url.includes('[noserve]')) {
// Handle [noserve] - use direct Tailscale IP
const ipv4 = status.Self.TailscaleIPs?.find((ip) => !ip.includes(':'));
if (ipv4) {
const portMatch = template.match(/\[PORT:(\d+)\]/);
const port = portMatch ? `:${portMatch[1]}` : '';
url = `http://${ipv4}${port}`;
} else {
return undefined;
}
} else {
// Custom URL - just do basic replacements
if (url.includes('[IP]') && status.Self.TailscaleIPs?.[0]) {
const ipv4 = status.Self.TailscaleIPs.find((ip) => !ip.includes(':'));
url = url.replace(/\[IP\]/g, ipv4 || status.Self.TailscaleIPs[0]);
}
const portMatch = url.match(/\[PORT:(\d+)\]/);
if (portMatch) {
url = url.replace(portMatch[0], portMatch[1]);
}
}
return url;
}
}
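demuxDockerStream above follows Docker's attach/exec stream framing: when TTY is disabled, each frame carries an 8-byte header (1 byte stream type, 3 zero bytes, then a 4-byte big-endian payload length) followed by the payload. A small sketch that builds one stdout frame by hand and decodes it the same way (payload text is made up):

// Build a single multiplexed stdout frame, then decode it.
const payload = Buffer.from('{"Self":{"Online":true}}', 'utf8');
const header = Buffer.alloc(8);
header.writeUInt8(1, 0);                 // byte 0: stream type, 1 = stdout
header.writeUInt32BE(payload.length, 4); // bytes 4-7: payload size, big-endian
const frame = Buffer.concat([header, payload]);

// Decoding mirrors demuxDockerStream: walk the frame headers, collect payloads.
let offset = 0;
const parts: string[] = [];
while (offset + 8 <= frame.length) {
  const size = frame.readUInt32BE(offset + 4);
  offset += 8;
  parts.push(frame.subarray(offset, offset + size).toString('utf8'));
  offset += size;
}

console.log(parts.join('')); // {"Self":{"Online":true}}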

View File

@@ -0,0 +1,61 @@
import { Injectable, Logger } from '@nestjs/common';
import { readFile } from 'fs/promises';
import { XMLParser } from 'fast-xml-parser';
@Injectable()
export class DockerTemplateIconService {
private readonly logger = new Logger(DockerTemplateIconService.name);
private readonly xmlParser = new XMLParser({
ignoreAttributes: false,
parseAttributeValue: true,
trimValues: true,
});
async getIconFromTemplate(templatePath: string): Promise<string | null> {
try {
const content = await readFile(templatePath, 'utf-8');
const parsed = this.xmlParser.parse(content);
if (!parsed.Container) {
return null;
}
return parsed.Container.Icon || null;
} catch (error) {
this.logger.debug(
`Failed to read icon from template ${templatePath}: ${error instanceof Error ? error.message : 'Unknown error'}`
);
return null;
}
}
async getIconsForContainers(
containers: Array<{ id: string; templatePath?: string }>
): Promise<Map<string, string>> {
const iconMap = new Map<string, string>();
const iconPromises = containers.map(async (container) => {
if (!container.templatePath) {
return null;
}
const icon = await this.getIconFromTemplate(container.templatePath);
if (icon) {
return { id: container.id, icon };
}
return null;
});
const results = await Promise.all(iconPromises);
for (const result of results) {
if (result) {
iconMap.set(result.id, result.icon);
}
}
this.logger.debug(`Loaded ${iconMap.size} icons from ${containers.length} containers`);
return iconMap;
}
}
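For reference, dockerMan templates are XML documents with a top-level Container element, and the service above only reads its Icon child. A sketch parsing a minimal, invented template with fast-xml-parser (assuming the same dependency used above is available):

import { XMLParser } from 'fast-xml-parser';

// Minimal invented template; real templates carry many more fields.
const xml = `<?xml version="1.0"?>
<Container version="2">
  <Name>redis</Name>
  <Icon>https://example.com/icons/redis.png</Icon>
</Container>`;

const parser = new XMLParser({ ignoreAttributes: false, trimValues: true });
const parsed = parser.parse(xml);

console.log(parsed.Container?.Icon ?? null); // https://example.com/icons/redis.png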

View File

@@ -0,0 +1,16 @@
import { Field, Int, ObjectType } from '@nestjs/graphql';
@ObjectType()
export class DockerTemplateSyncResult {
@Field(() => Int)
scanned!: number;
@Field(() => Int)
matched!: number;
@Field(() => Int)
skipped!: number;
@Field(() => [String])
errors!: string[];
}

View File

@@ -0,0 +1,425 @@
import { Test, TestingModule } from '@nestjs/testing';
import { mkdir, rm, writeFile } from 'fs/promises';
import { join } from 'path';
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import { DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
vi.mock('@app/environment.js', () => ({
PATHS_DOCKER_TEMPLATES: ['/tmp/test-templates'],
ENABLE_NEXT_DOCKER_RELEASE: true,
}));
describe('DockerTemplateScannerService', () => {
let service: DockerTemplateScannerService;
let dockerConfigService: DockerConfigService;
let dockerService: DockerService;
const testTemplateDir = '/tmp/test-templates';
beforeEach(async () => {
await mkdir(testTemplateDir, { recursive: true });
const mockDockerService = {
getContainers: vi.fn(),
};
const mockDockerConfigService = {
getConfig: vi.fn(),
replaceConfig: vi.fn(),
validate: vi.fn((config) => Promise.resolve(config)),
};
const module: TestingModule = await Test.createTestingModule({
providers: [
DockerTemplateScannerService,
{
provide: DockerConfigService,
useValue: mockDockerConfigService,
},
{
provide: DockerService,
useValue: mockDockerService,
},
],
}).compile();
service = module.get<DockerTemplateScannerService>(DockerTemplateScannerService);
dockerConfigService = module.get<DockerConfigService>(DockerConfigService);
dockerService = module.get<DockerService>(DockerService);
});
afterEach(async () => {
await rm(testTemplateDir, { recursive: true, force: true });
});
describe('parseTemplate', () => {
it('should parse valid XML template', async () => {
const templatePath = join(testTemplateDir, 'test.xml');
const templateContent = `<?xml version="1.0"?>
<Container version="2">
<Name>test-container</Name>
<Repository>test/image</Repository>
</Container>`;
await writeFile(templatePath, templateContent);
const result = await (service as any).parseTemplate(templatePath);
expect(result).toEqual({
filePath: templatePath,
name: 'test-container',
repository: 'test/image',
});
});
it('should handle invalid XML gracefully by returning null', async () => {
const templatePath = join(testTemplateDir, 'invalid.xml');
await writeFile(templatePath, 'not xml');
const result = await (service as any).parseTemplate(templatePath);
expect(result).toBeNull();
});
it('should return null for XML without Container element', async () => {
const templatePath = join(testTemplateDir, 'no-container.xml');
const templateContent = `<?xml version="1.0"?><Root></Root>`;
await writeFile(templatePath, templateContent);
const result = await (service as any).parseTemplate(templatePath);
expect(result).toBeNull();
});
});
describe('matchContainerToTemplate', () => {
it('should match by container name (exact match)', () => {
const container: DockerContainer = {
id: 'abc123',
names: ['/test-container'],
image: 'different/image:latest',
} as DockerContainer;
const templates = [
{ filePath: '/path/1', name: 'test-container', repository: 'some/repo' },
{ filePath: '/path/2', name: 'other', repository: 'other/repo' },
];
const result = (service as any).matchContainerToTemplate(container, templates);
expect(result).toEqual(templates[0]);
});
it('should match by repository when name does not match', () => {
const container: DockerContainer = {
id: 'abc123',
names: ['/my-container'],
image: 'test/image:v1.0',
} as DockerContainer;
const templates = [
{ filePath: '/path/1', name: 'different', repository: 'other/repo' },
{ filePath: '/path/2', name: 'also-different', repository: 'test/image' },
];
const result = (service as any).matchContainerToTemplate(container, templates);
expect(result).toEqual(templates[1]);
});
it('should strip tags when matching repository', () => {
const container: DockerContainer = {
id: 'abc123',
names: ['/my-container'],
image: 'test/image:latest',
} as DockerContainer;
const templates = [
{ filePath: '/path/1', name: 'different', repository: 'test/image:v1.0' },
];
const result = (service as any).matchContainerToTemplate(container, templates);
expect(result).toEqual(templates[0]);
});
it('should return null when no match found', () => {
const container: DockerContainer = {
id: 'abc123',
names: ['/my-container'],
image: 'test/image:latest',
} as DockerContainer;
const templates = [{ filePath: '/path/1', name: 'different', repository: 'other/image' }];
const result = (service as any).matchContainerToTemplate(container, templates);
expect(result).toBeNull();
});
it('should be case-insensitive', () => {
const container: DockerContainer = {
id: 'abc123',
names: ['/Test-Container'],
image: 'Test/Image:latest',
} as DockerContainer;
const templates = [
{ filePath: '/path/1', name: 'test-container', repository: 'test/image' },
];
const result = (service as any).matchContainerToTemplate(container, templates);
expect(result).toEqual(templates[0]);
});
});
describe('scanTemplates', () => {
it('should scan templates and create mappings', async () => {
const template1 = join(testTemplateDir, 'redis.xml');
await writeFile(
template1,
`<?xml version="1.0"?>
<Container version="2">
<Name>redis</Name>
<Repository>redis</Repository>
</Container>`
);
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/redis'],
image: 'redis:latest',
} as DockerContainer,
];
vi.mocked(dockerService.getContainers).mockResolvedValue(containers);
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
const result = await service.scanTemplates();
expect(result.scanned).toBe(1);
expect(result.matched).toBe(1);
expect(result.errors).toHaveLength(0);
expect(dockerConfigService.replaceConfig).toHaveBeenCalledWith(
expect.objectContaining({
templateMappings: {
redis: template1,
},
})
);
});
it('should skip containers in skipTemplatePaths', async () => {
const template1 = join(testTemplateDir, 'redis.xml');
await writeFile(
template1,
`<?xml version="1.0"?>
<Container version="2">
<Name>redis</Name>
<Repository>redis</Repository>
</Container>`
);
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/redis'],
image: 'redis:latest',
} as DockerContainer,
];
vi.mocked(dockerService.getContainers).mockResolvedValue(containers);
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: ['redis'],
});
const result = await service.scanTemplates();
expect(result.skipped).toBe(1);
expect(result.matched).toBe(0);
});
it('should handle missing template directory gracefully', async () => {
await rm(testTemplateDir, { recursive: true, force: true });
const containers: DockerContainer[] = [];
vi.mocked(dockerService.getContainers).mockResolvedValue(containers);
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
const result = await service.scanTemplates();
expect(result.scanned).toBe(0);
expect(result.errors.length).toBeGreaterThan(0);
});
it('should handle docker service errors gracefully', async () => {
vi.mocked(dockerService.getContainers).mockRejectedValue(new Error('Docker error'));
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
const result = await service.scanTemplates();
expect(result.errors.length).toBeGreaterThan(0);
expect(result.errors[0]).toContain('Failed to get containers');
});
it('should set null mapping for unmatched containers', async () => {
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/unknown'],
image: 'unknown:latest',
} as DockerContainer,
];
vi.mocked(dockerService.getContainers).mockResolvedValue(containers);
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
await service.scanTemplates();
expect(dockerConfigService.replaceConfig).toHaveBeenCalledWith(
expect.objectContaining({
templateMappings: {
unknown: null,
},
})
);
});
});
describe('syncMissingContainers', () => {
it('should return true and trigger scan when containers are missing mappings', async () => {
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/redis'],
image: 'redis:latest',
} as DockerContainer,
];
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
vi.mocked(dockerService.getContainers).mockResolvedValue(containers);
const scanSpy = vi.spyOn(service, 'scanTemplates').mockResolvedValue({
scanned: 0,
matched: 0,
skipped: 0,
errors: [],
});
const result = await service.syncMissingContainers(containers);
expect(result).toBe(true);
expect(scanSpy).toHaveBeenCalled();
});
it('should return false when all containers have mappings', async () => {
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/redis'],
image: 'redis:latest',
} as DockerContainer,
];
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {
redis: '/path/to/template.xml',
},
skipTemplatePaths: [],
});
const scanSpy = vi.spyOn(service, 'scanTemplates');
const result = await service.syncMissingContainers(containers);
expect(result).toBe(false);
expect(scanSpy).not.toHaveBeenCalled();
});
it('should not trigger scan for containers in skip list', async () => {
const containers: DockerContainer[] = [
{
id: 'container1',
names: ['/redis'],
image: 'redis:latest',
} as DockerContainer,
];
vi.mocked(dockerConfigService.getConfig).mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: ['redis'],
});
const scanSpy = vi.spyOn(service, 'scanTemplates');
const result = await service.syncMissingContainers(containers);
expect(result).toBe(false);
expect(scanSpy).not.toHaveBeenCalled();
});
});
describe('normalizeContainerName', () => {
it('should remove leading slash', () => {
const result = (service as any).normalizeContainerName('/container-name');
expect(result).toBe('container-name');
});
it('should convert to lowercase', () => {
const result = (service as any).normalizeContainerName('/Container-Name');
expect(result).toBe('container-name');
});
});
describe('normalizeRepository', () => {
it('should strip tag', () => {
const result = (service as any).normalizeRepository('redis:latest');
expect(result).toBe('redis');
});
it('should strip version tag', () => {
const result = (service as any).normalizeRepository('postgres:14.5');
expect(result).toBe('postgres');
});
it('should convert to lowercase', () => {
const result = (service as any).normalizeRepository('Redis:Latest');
expect(result).toBe('redis');
});
it('should handle repository without tag', () => {
const result = (service as any).normalizeRepository('nginx');
expect(result).toBe('nginx');
});
});
});

View File

@@ -0,0 +1,293 @@
import { Injectable, Logger } from '@nestjs/common';
import { Timeout } from '@nestjs/schedule';
import { readdir, readFile } from 'fs/promises';
import { join } from 'path';
import { XMLParser } from 'fast-xml-parser';
import { ENABLE_NEXT_DOCKER_RELEASE, PATHS_DOCKER_TEMPLATES } from '@app/environment.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerTemplateSyncResult } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.model.js';
import { DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
interface ParsedTemplate {
filePath: string;
name?: string;
repository?: string;
}
@Injectable()
export class DockerTemplateScannerService {
private readonly logger = new Logger(DockerTemplateScannerService.name);
private readonly xmlParser = new XMLParser({
ignoreAttributes: false,
parseAttributeValue: true,
trimValues: true,
});
constructor(
private readonly dockerConfigService: DockerConfigService,
private readonly dockerService: DockerService
) {}
@Timeout(5_000)
async bootstrapScan(attempt = 1, maxAttempts = 5): Promise<void> {
if (!ENABLE_NEXT_DOCKER_RELEASE) {
return;
}
try {
this.logger.log(`Starting template scan (attempt ${attempt}/${maxAttempts})`);
const result = await this.scanTemplates();
this.logger.log(
`Template scan complete: ${result.matched} matched, ${result.scanned} scanned, ${result.skipped} skipped`
);
} catch (error) {
if (attempt < maxAttempts) {
this.logger.warn(
`Template scan failed (attempt ${attempt}/${maxAttempts}), retrying in 60s: ${error instanceof Error ? error.message : 'Unknown error'}`
);
setTimeout(() => this.bootstrapScan(attempt + 1, maxAttempts), 60_000);
} else {
this.logger.error(
`Template scan failed after ${maxAttempts} attempts: ${error instanceof Error ? error.message : 'Unknown error'}`
);
}
}
}
async syncMissingContainers(containers: DockerContainer[]): Promise<boolean> {
const config = this.dockerConfigService.getConfig();
const mappings = config.templateMappings || {};
const skipSet = new Set(config.skipTemplatePaths || []);
const needsSync = containers.filter((c) => {
const containerName = this.normalizeContainerName(c.names[0]);
return !mappings[containerName] && !skipSet.has(containerName);
});
if (needsSync.length > 0) {
this.logger.log(
`Found ${needsSync.length} containers without template mappings, triggering sync`
);
await this.scanTemplates();
return true;
}
return false;
}
async scanTemplates(): Promise<DockerTemplateSyncResult> {
const result: DockerTemplateSyncResult = {
scanned: 0,
matched: 0,
skipped: 0,
errors: [],
};
const templates = await this.loadAllTemplates(result);
try {
const containers = await this.dockerService.getContainers({ skipCache: true });
const config = this.dockerConfigService.getConfig();
const currentMappings = config.templateMappings || {};
const skipSet = new Set(config.skipTemplatePaths || []);
const newMappings: Record<string, string | null> = { ...currentMappings };
for (const container of containers) {
const containerName = this.normalizeContainerName(container.names[0]);
if (skipSet.has(containerName)) {
result.skipped++;
continue;
}
const match = this.matchContainerToTemplate(container, templates);
if (match) {
newMappings[containerName] = match.filePath;
result.matched++;
} else {
newMappings[containerName] = null;
}
}
await this.updateMappings(newMappings);
} catch (error) {
const errorMsg = `Failed to get containers: ${error instanceof Error ? error.message : 'Unknown error'}`;
this.logger.error(error, 'Failed to get containers');
result.errors.push(errorMsg);
}
return result;
}
async getTemplateDetails(filePath: string): Promise<{
project?: string;
registry?: string;
support?: string;
overview?: string;
icon?: string;
webUi?: string;
shell?: string;
ports?: Array<{ privatePort: number; publicPort: number; type: 'tcp' | 'udp' }>;
} | null> {
try {
const content = await readFile(filePath, 'utf-8');
const parsed = this.xmlParser.parse(content);
if (!parsed.Container) {
return null;
}
const container = parsed.Container;
const ports = this.extractTemplatePorts(container);
return {
project: container.Project,
registry: container.Registry,
support: container.Support,
overview: container.ReadMe || container.Overview,
icon: container.Icon,
webUi: container.WebUI,
shell: container.Shell,
ports,
};
} catch (error) {
this.logger.warn(
`Failed to parse template ${filePath}: ${error instanceof Error ? error.message : 'Unknown error'}`
);
return null;
}
}
private extractTemplatePorts(
container: Record<string, unknown>
): Array<{ privatePort: number; publicPort: number; type: 'tcp' | 'udp' }> {
const ports: Array<{ privatePort: number; publicPort: number; type: 'tcp' | 'udp' }> = [];
const configs = container.Config;
if (!configs) {
return ports;
}
const configArray = Array.isArray(configs) ? configs : [configs];
for (const config of configArray) {
if (!config || typeof config !== 'object') continue;
const attrs = config['@_Type'];
if (attrs !== 'Port') continue;
const target = config['@_Target'];
const mode = config['@_Mode'];
const value = config['#text'];
if (target === undefined || value === undefined) continue;
const privatePort = parseInt(String(target), 10);
const publicPort = parseInt(String(value), 10);
if (isNaN(privatePort) || isNaN(publicPort)) continue;
const type = String(mode).toLowerCase() === 'udp' ? 'udp' : 'tcp';
ports.push({ privatePort, publicPort, type });
}
return ports;
}
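// Illustrative example (not taken from any template spec): with the XMLParser options
// configured above (ignoreAttributes: false, parseAttributeValue: true), an entry like
//   <Config Type="Port" Target="6379" Mode="tcp">6380</Config>
// parses to { '@_Type': 'Port', '@_Target': 6379, '@_Mode': 'tcp', '#text': 6380 },
// which this method maps to { privatePort: 6379, publicPort: 6380, type: 'tcp' }.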
private async loadAllTemplates(result: DockerTemplateSyncResult): Promise<ParsedTemplate[]> {
const allTemplates: ParsedTemplate[] = [];
for (const directory of PATHS_DOCKER_TEMPLATES) {
try {
const files = await readdir(directory);
const xmlFiles = files.filter((f) => f.endsWith('.xml'));
result.scanned += xmlFiles.length;
for (const file of xmlFiles) {
const filePath = join(directory, file);
try {
const template = await this.parseTemplate(filePath);
if (template) {
allTemplates.push(template);
}
} catch (error) {
const errorMsg = `Failed to parse template ${filePath}: ${error instanceof Error ? error.message : 'Unknown error'}`;
this.logger.warn(errorMsg);
result.errors.push(errorMsg);
}
}
} catch (error) {
const errorMsg = `Failed to read template directory ${directory}: ${error instanceof Error ? error.message : 'Unknown error'}`;
this.logger.warn(errorMsg);
result.errors.push(errorMsg);
}
}
return allTemplates;
}
private async parseTemplate(filePath: string): Promise<ParsedTemplate | null> {
const content = await readFile(filePath, 'utf-8');
const parsed = this.xmlParser.parse(content);
if (!parsed.Container) {
return null;
}
const container = parsed.Container;
return {
filePath,
name: container.Name,
repository: container.Repository,
};
}
private matchContainerToTemplate(
container: DockerContainer,
templates: ParsedTemplate[]
): ParsedTemplate | null {
const containerName = this.normalizeContainerName(container.names[0]);
const containerImage = this.normalizeRepository(container.image);
for (const template of templates) {
if (template.name && this.normalizeContainerName(template.name) === containerName) {
return template;
}
}
for (const template of templates) {
if (
template.repository &&
this.normalizeRepository(template.repository) === containerImage
) {
return template;
}
}
return null;
}
private normalizeContainerName(name: string): string {
return name.replace(/^\//, '').toLowerCase();
}
private normalizeRepository(repository: string): string {
// Strip digest if present (e.g., image@sha256:abc123)
const [withoutDigest] = repository.split('@');
// Only remove tag if colon appears after last slash (i.e., it's a tag, not a port)
const lastColon = withoutDigest.lastIndexOf(':');
const lastSlash = withoutDigest.lastIndexOf('/');
const withoutTag = lastColon > lastSlash ? withoutDigest.slice(0, lastColon) : withoutDigest;
return withoutTag.toLowerCase();
}
private async updateMappings(mappings: Record<string, string | null>): Promise<void> {
const config = this.dockerConfigService.getConfig();
const updated = await this.dockerConfigService.validate({
...config,
templateMappings: mappings,
});
this.dockerConfigService.replaceConfig(updated);
}
}
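For reference, a minimal standalone sketch of the tag/digest-stripping rule implemented by normalizeRepository above. The helper name is illustrative (it is not an exported API), and the expected results are shown as comments:
function stripTagAndDigest(repository: string): string {
    // Drop an @sha256:... digest first, then remove a trailing :tag only when the
    // colon appears after the last slash (otherwise the colon marks a registry port).
    const [withoutDigest] = repository.split('@');
    const lastColon = withoutDigest.lastIndexOf(':');
    const lastSlash = withoutDigest.lastIndexOf('/');
    const withoutTag = lastColon > lastSlash ? withoutDigest.slice(0, lastColon) : withoutDigest;
    return withoutTag.toLowerCase();
}
stripTagAndDigest('Redis:Latest'); // 'redis'
stripTagAndDigest('lscr.io/linuxserver/plex@sha256:abc123'); // 'lscr.io/linuxserver/plex'
stripTagAndDigest('registry.local:5000/app'); // 'registry.local:5000/app' (colon is a port, not a tag)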

View File

@@ -1,6 +1,16 @@
import { Field, ID, Int, ObjectType, registerEnumType } from '@nestjs/graphql';
import {
Field,
Float,
GraphQLISODateTime,
ID,
InputType,
Int,
ObjectType,
registerEnumType,
} from '@nestjs/graphql';
import { Node } from '@unraid/shared/graphql.model.js';
import { PrefixedID } from '@unraid/shared/prefixed-id-scalar.js';
import { GraphQLBigInt, GraphQLJSON, GraphQLPort } from 'graphql-scalars';
export enum ContainerPortType {
@@ -27,8 +37,54 @@ export class ContainerPort {
type!: ContainerPortType;
}
@ObjectType()
export class DockerPortConflictContainer {
@Field(() => PrefixedID)
id!: string;
@Field(() => String)
name!: string;
}
@ObjectType()
export class DockerContainerPortConflict {
@Field(() => GraphQLPort)
privatePort!: number;
@Field(() => ContainerPortType)
type!: ContainerPortType;
@Field(() => [DockerPortConflictContainer])
containers!: DockerPortConflictContainer[];
}
@ObjectType()
export class DockerLanPortConflict {
@Field(() => String)
lanIpPort!: string;
@Field(() => GraphQLPort, { nullable: true })
publicPort?: number;
@Field(() => ContainerPortType)
type!: ContainerPortType;
@Field(() => [DockerPortConflictContainer])
containers!: DockerPortConflictContainer[];
}
@ObjectType()
export class DockerPortConflicts {
@Field(() => [DockerContainerPortConflict])
containerPorts!: DockerContainerPortConflict[];
@Field(() => [DockerLanPortConflict])
lanPorts!: DockerLanPortConflict[];
}
export enum ContainerState {
RUNNING = 'RUNNING',
PAUSED = 'PAUSED',
EXITED = 'EXITED',
}
@@ -89,12 +145,30 @@ export class DockerContainer extends Node {
@Field(() => [ContainerPort])
ports!: ContainerPort[];
@Field(() => [String], {
nullable: true,
description: 'List of LAN-accessible host:port values',
})
lanIpPorts?: string[];
@Field(() => GraphQLBigInt, {
nullable: true,
description: 'Total size of all files in the container (in bytes)',
})
sizeRootFs?: number;
@Field(() => GraphQLBigInt, {
nullable: true,
description: 'Size of writable layer (in bytes)',
})
sizeRw?: number;
@Field(() => GraphQLBigInt, {
nullable: true,
description: 'Size of container logs (in bytes)',
})
sizeLog?: number;
@Field(() => GraphQLJSON, { nullable: true })
labels?: Record<string, any>;
@@ -115,6 +189,45 @@ export class DockerContainer extends Node {
@Field(() => Boolean)
autoStart!: boolean;
@Field(() => Int, { nullable: true, description: 'Zero-based order in the auto-start list' })
autoStartOrder?: number;
@Field(() => Int, { nullable: true, description: 'Wait time in seconds applied after start' })
autoStartWait?: number;
@Field(() => String, { nullable: true })
templatePath?: string;
@Field(() => String, { nullable: true, description: 'Project/Product homepage URL' })
projectUrl?: string;
@Field(() => String, { nullable: true, description: 'Registry/Docker Hub URL' })
registryUrl?: string;
@Field(() => String, { nullable: true, description: 'Support page/thread URL' })
supportUrl?: string;
@Field(() => String, { nullable: true, description: 'Icon URL' })
iconUrl?: string;
@Field(() => String, { nullable: true, description: 'Resolved WebUI URL from template' })
webUiUrl?: string;
@Field(() => String, {
nullable: true,
description: 'Shell to use for console access (from template)',
})
shell?: string;
@Field(() => [ContainerPort], {
nullable: true,
description: 'Port mappings from template (used when container is not running)',
})
templatePorts?: ContainerPort[];
@Field(() => Boolean, { description: 'Whether the container is orphaned (no template found)' })
isOrphaned!: boolean;
}
@ObjectType({ implements: () => Node })
@@ -162,6 +275,127 @@ export class DockerNetwork extends Node {
labels!: Record<string, any>;
}
@ObjectType()
export class DockerContainerLogLine {
@Field(() => GraphQLISODateTime)
timestamp!: Date;
@Field(() => String)
message!: string;
}
@ObjectType()
export class DockerContainerLogs {
@Field(() => PrefixedID)
containerId!: string;
@Field(() => [DockerContainerLogLine])
lines!: DockerContainerLogLine[];
@Field(() => GraphQLISODateTime, {
nullable: true,
description:
'Cursor that can be passed back through the since argument to continue streaming logs.',
})
cursor?: Date | null;
}
@ObjectType()
export class DockerContainerStats {
@Field(() => PrefixedID)
id!: string;
@Field(() => Float, { description: 'CPU Usage Percentage' })
cpuPercent!: number;
@Field(() => String, { description: 'Memory Usage String (e.g. 100MB / 1GB)' })
memUsage!: string;
@Field(() => Float, { description: 'Memory Usage Percentage' })
memPercent!: number;
@Field(() => String, { description: 'Network I/O String (e.g. 100MB / 1GB)' })
netIO!: string;
@Field(() => String, { description: 'Block I/O String (e.g. 100MB / 1GB)' })
blockIO!: string;
}
@ObjectType({ description: 'Tailscale exit node connection status' })
export class TailscaleExitNodeStatus {
@Field(() => Boolean, { description: 'Whether the exit node is online' })
online!: boolean;
@Field(() => [String], { nullable: true, description: 'Tailscale IPs of the exit node' })
tailscaleIps?: string[];
}
@ObjectType({ description: 'Tailscale status for a Docker container' })
export class TailscaleStatus {
@Field(() => Boolean, { description: 'Whether Tailscale is online in the container' })
online!: boolean;
@Field(() => String, { nullable: true, description: 'Current Tailscale version' })
version?: string;
@Field(() => String, { nullable: true, description: 'Latest available Tailscale version' })
latestVersion?: string;
@Field(() => Boolean, { description: 'Whether a Tailscale update is available' })
updateAvailable!: boolean;
@Field(() => String, { nullable: true, description: 'Configured Tailscale hostname' })
hostname?: string;
@Field(() => String, { nullable: true, description: 'Actual Tailscale DNS name' })
dnsName?: string;
@Field(() => String, { nullable: true, description: 'DERP relay code' })
relay?: string;
@Field(() => String, { nullable: true, description: 'DERP relay region name' })
relayName?: string;
@Field(() => [String], { nullable: true, description: 'Tailscale IPv4 and IPv6 addresses' })
tailscaleIps?: string[];
@Field(() => [String], { nullable: true, description: 'Advertised subnet routes' })
primaryRoutes?: string[];
@Field(() => Boolean, { description: 'Whether this container is an exit node' })
isExitNode!: boolean;
@Field(() => TailscaleExitNodeStatus, {
nullable: true,
description: 'Status of the connected exit node (if using one)',
})
exitNodeStatus?: TailscaleExitNodeStatus;
@Field(() => String, { nullable: true, description: 'Tailscale Serve/Funnel WebUI URL' })
webUiUrl?: string;
@Field(() => GraphQLISODateTime, { nullable: true, description: 'Tailscale key expiry date' })
keyExpiry?: Date;
@Field(() => Int, { nullable: true, description: 'Days until key expires' })
keyExpiryDays?: number;
@Field(() => Boolean, { description: 'Whether the Tailscale key has expired' })
keyExpired!: boolean;
@Field(() => String, {
nullable: true,
description: 'Tailscale backend state (Running, NeedsLogin, Stopped, etc.)',
})
backendState?: string;
@Field(() => String, {
nullable: true,
description: 'Authentication URL if Tailscale needs login',
})
authUrl?: string;
}
@ObjectType({
implements: () => Node,
})
@@ -171,4 +405,28 @@ export class Docker extends Node {
@Field(() => [DockerNetwork])
networks!: DockerNetwork[];
@Field(() => DockerPortConflicts)
portConflicts!: DockerPortConflicts;
@Field(() => DockerContainerLogs, {
description:
'Access container logs. Requires specifying a target container id through resolver arguments.',
})
logs!: DockerContainerLogs;
}
@InputType()
export class DockerAutostartEntryInput {
@Field(() => PrefixedID, { description: 'Docker container identifier' })
id!: string;
@Field(() => Boolean, { description: 'Whether the container should auto-start' })
autoStart!: boolean;
@Field(() => Int, {
nullable: true,
description: 'Number of seconds to wait after starting the container',
})
wait?: number | null;
}
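As a usage sketch for the fields added above, the query below selects the new size, template, and port-conflict data. It assumes the API's existing docker root query field and the pre-existing privatePort/publicPort fields on ContainerPort (neither is shown in this diff); the remaining field names come from the model definitions above:
const DOCKER_CONTAINERS_QUERY = /* GraphQL */ `
  query DockerContainersOverview {
    docker {
      containers(skipCache: false) {
        id
        autoStart
        autoStartOrder
        templatePath
        iconUrl
        webUiUrl
        isOrphaned
        sizeLog
        templatePorts {
          privatePort
          publicPort
          type
        }
      }
      portConflicts {
        containerPorts {
          privatePort
          type
          containers { id name }
        }
        lanPorts {
          lanIpPort
          publicPort
          type
          containers { id name }
        }
      }
    }
  }
`;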

View File

@@ -1,21 +1,28 @@
import { CacheModule } from '@nestjs/cache-manager';
import { Test, TestingModule } from '@nestjs/testing';
import { describe, expect, it, vi } from 'vitest';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerEventService } from '@app/unraid-api/graph/resolvers/docker/docker-event.service.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
import { DockerPhpService } from '@app/unraid-api/graph/resolvers/docker/docker-php.service.js';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import { DockerStatsService } from '@app/unraid-api/graph/resolvers/docker/docker-stats.service.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import { DockerModule } from '@app/unraid-api/graph/resolvers/docker/docker.module.js';
import { DockerMutationsResolver } from '@app/unraid-api/graph/resolvers/docker/docker.mutations.resolver.js';
import { DockerResolver } from '@app/unraid-api/graph/resolvers/docker/docker.resolver.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { DockerOrganizerConfigService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer-config.service.js';
import { DockerOrganizerService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer.service.js';
import { SubscriptionHelperService } from '@app/unraid-api/graph/services/subscription-helper.service.js';
import { SubscriptionTrackerService } from '@app/unraid-api/graph/services/subscription-tracker.service.js';
describe('DockerModule', () => {
it('should compile the module', async () => {
const module = await Test.createTestingModule({
imports: [DockerModule],
imports: [CacheModule.register({ isGlobal: true }), DockerModule],
})
.overrideProvider(DockerService)
.useValue({ getDockerClient: vi.fn() })
@@ -23,6 +30,22 @@ describe('DockerModule', () => {
.useValue({ getConfig: vi.fn() })
.overrideProvider(DockerConfigService)
.useValue({ getConfig: vi.fn() })
.overrideProvider(DockerLogService)
.useValue({})
.overrideProvider(DockerNetworkService)
.useValue({})
.overrideProvider(DockerPortService)
.useValue({})
.overrideProvider(SubscriptionTrackerService)
.useValue({
registerTopic: vi.fn(),
subscribe: vi.fn(),
unsubscribe: vi.fn(),
})
.overrideProvider(SubscriptionHelperService)
.useValue({
createTrackedSubscription: vi.fn(),
})
.compile();
expect(module).toBeDefined();
@@ -46,25 +69,52 @@ describe('DockerModule', () => {
expect(service).toHaveProperty('getDockerClient');
});
it('should provide DockerEventService', async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
DockerEventService,
{ provide: DockerService, useValue: { getDockerClient: vi.fn() } },
],
}).compile();
const service = module.get<DockerEventService>(DockerEventService);
expect(service).toBeInstanceOf(DockerEventService);
});
it('should provide DockerResolver', async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
DockerResolver,
{ provide: DockerService, useValue: {} },
{ provide: DockerService, useValue: { clearContainerCache: vi.fn() } },
{
provide: DockerConfigService,
useValue: {
defaultConfig: vi
.fn()
.mockReturnValue({ templateMappings: {}, skipTemplatePaths: [] }),
getConfig: vi
.fn()
.mockReturnValue({ templateMappings: {}, skipTemplatePaths: [] }),
validate: vi.fn().mockImplementation((config) => Promise.resolve(config)),
replaceConfig: vi.fn(),
},
},
{ provide: DockerOrganizerService, useValue: {} },
{ provide: DockerPhpService, useValue: { getContainerUpdateStatuses: vi.fn() } },
{
provide: DockerTemplateScannerService,
useValue: {
scanTemplates: vi.fn(),
syncMissingContainers: vi.fn(),
},
},
{
provide: DockerStatsService,
useValue: {
startStatsStream: vi.fn(),
stopStatsStream: vi.fn(),
},
},
{
provide: SubscriptionTrackerService,
useValue: {
registerTopic: vi.fn(),
},
},
{
provide: SubscriptionHelperService,
useValue: {
createTrackedSubscription: vi.fn(),
},
},
],
}).compile();

View File

@@ -2,27 +2,44 @@ import { Module } from '@nestjs/common';
import { JobModule } from '@app/unraid-api/cron/job.module.js';
import { ContainerStatusJob } from '@app/unraid-api/graph/resolvers/docker/container-status.job.js';
import { DockerAutostartService } from '@app/unraid-api/graph/resolvers/docker/docker-autostart.service.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerContainerResolver } from '@app/unraid-api/graph/resolvers/docker/docker-container.resolver.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
import { DockerPhpService } from '@app/unraid-api/graph/resolvers/docker/docker-php.service.js';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import { DockerStatsService } from '@app/unraid-api/graph/resolvers/docker/docker-stats.service.js';
import { DockerTailscaleService } from '@app/unraid-api/graph/resolvers/docker/docker-tailscale.service.js';
import { DockerTemplateIconService } from '@app/unraid-api/graph/resolvers/docker/docker-template-icon.service.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import { DockerMutationsResolver } from '@app/unraid-api/graph/resolvers/docker/docker.mutations.resolver.js';
import { DockerResolver } from '@app/unraid-api/graph/resolvers/docker/docker.resolver.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { DockerOrganizerConfigService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer-config.service.js';
import { DockerOrganizerService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer.service.js';
import { NotificationsModule } from '@app/unraid-api/graph/resolvers/notifications/notifications.module.js';
import { ServicesModule } from '@app/unraid-api/graph/services/services.module.js';
@Module({
imports: [JobModule],
imports: [JobModule, NotificationsModule, ServicesModule],
providers: [
// Services
DockerService,
DockerAutostartService,
DockerOrganizerConfigService,
DockerOrganizerService,
DockerManifestService,
DockerPhpService,
DockerConfigService,
// DockerEventService,
DockerTemplateScannerService,
DockerTemplateIconService,
DockerStatsService,
DockerTailscaleService,
DockerLogService,
DockerNetworkService,
DockerPortService,
// Jobs
ContainerStatusJob,

View File

@@ -45,6 +45,7 @@ describe('DockerMutationsResolver', () => {
state: ContainerState.RUNNING,
status: 'Up 2 hours',
names: ['test-container'],
isOrphaned: false,
};
vi.mocked(dockerService.start).mockResolvedValue(mockContainer);
@@ -65,6 +66,7 @@ describe('DockerMutationsResolver', () => {
state: ContainerState.EXITED,
status: 'Exited',
names: ['test-container'],
isOrphaned: false,
};
vi.mocked(dockerService.stop).mockResolvedValue(mockContainer);

View File

@@ -4,7 +4,11 @@ import { AuthAction, Resource } from '@unraid/shared/graphql.model.js';
import { PrefixedID } from '@unraid/shared/prefixed-id-scalar.js';
import { UsePermissions } from '@unraid/shared/use-permissions.directive.js';
import { DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { UseFeatureFlag } from '@app/unraid-api/decorators/use-feature-flag.decorator.js';
import {
DockerAutostartEntryInput,
DockerContainer,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { DockerMutations } from '@app/unraid-api/graph/resolvers/mutation/mutation.model.js';
@@ -32,4 +36,86 @@ export class DockerMutationsResolver {
public async stop(@Args('id', { type: () => PrefixedID }) id: string) {
return this.dockerService.stop(id);
}
@ResolveField(() => DockerContainer, { description: 'Pause (Suspend) a container' })
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async pause(@Args('id', { type: () => PrefixedID }) id: string) {
return this.dockerService.pause(id);
}
@ResolveField(() => DockerContainer, { description: 'Unpause (Resume) a container' })
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async unpause(@Args('id', { type: () => PrefixedID }) id: string) {
return this.dockerService.unpause(id);
}
@ResolveField(() => Boolean, { description: 'Remove a container' })
@UsePermissions({
action: AuthAction.DELETE_ANY,
resource: Resource.DOCKER,
})
public async removeContainer(
@Args('id', { type: () => PrefixedID }) id: string,
@Args('withImage', { type: () => Boolean, nullable: true }) withImage?: boolean
) {
return this.dockerService.removeContainer(id, { withImage });
}
@ResolveField(() => Boolean, {
description: 'Update auto-start configuration for Docker containers',
})
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async updateAutostartConfiguration(
@Args('entries', { type: () => [DockerAutostartEntryInput] })
entries: DockerAutostartEntryInput[],
@Args('persistUserPreferences', { type: () => Boolean, nullable: true })
persistUserPreferences?: boolean
) {
await this.dockerService.updateAutostartConfiguration(entries, {
persistUserPreferences,
});
return true;
}
@ResolveField(() => DockerContainer, { description: 'Update a container to the latest image' })
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async updateContainer(@Args('id', { type: () => PrefixedID }) id: string) {
return this.dockerService.updateContainer(id);
}
@ResolveField(() => [DockerContainer], {
description: 'Update multiple containers to the latest images',
})
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async updateContainers(
@Args('ids', { type: () => [PrefixedID] })
ids: string[]
) {
return this.dockerService.updateContainers(ids);
}
@ResolveField(() => [DockerContainer], {
description: 'Update all containers that have available updates',
})
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
public async updateAllContainers() {
return this.dockerService.updateAllContainers();
}
}
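A hedged usage sketch for the auto-start mutation added above. It assumes the DockerMutations namespace is exposed as a docker field on the root Mutation type (that wiring is not shown in this diff); the argument and input field names come from the resolver and model:
const UPDATE_AUTOSTART_MUTATION = /* GraphQL */ `
  mutation UpdateAutostart($entries: [DockerAutostartEntryInput!]!) {
    docker {
      updateAutostartConfiguration(entries: $entries, persistUserPreferences: true)
    }
  }
`;
// Example variables: container ids are placeholders, wait is in seconds.
const variables = {
  entries: [
    { id: 'container1', autoStart: true, wait: 10 },
    { id: 'container2', autoStart: false },
  ],
};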

View File

@@ -3,11 +3,20 @@ import { Test } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerPhpService } from '@app/unraid-api/graph/resolvers/docker/docker-php.service.js';
import { ContainerState, DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerStatsService } from '@app/unraid-api/graph/resolvers/docker/docker-stats.service.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import {
ContainerState,
DockerContainer,
DockerContainerLogs,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerResolver } from '@app/unraid-api/graph/resolvers/docker/docker.resolver.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { DockerOrganizerService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer.service.js';
import { SubscriptionHelperService } from '@app/unraid-api/graph/services/subscription-helper.service.js';
import { SubscriptionTrackerService } from '@app/unraid-api/graph/services/subscription-tracker.service.js';
import { GraphQLFieldHelper } from '@app/unraid-api/utils/graphql-field-helper.js';
vi.mock('@app/unraid-api/utils/graphql-field-helper.js', () => ({
@@ -29,6 +38,22 @@ describe('DockerResolver', () => {
useValue: {
getContainers: vi.fn(),
getNetworks: vi.fn(),
getContainerLogSizes: vi.fn(),
getContainerLogs: vi.fn(),
clearContainerCache: vi.fn(),
},
},
{
provide: DockerConfigService,
useValue: {
defaultConfig: vi
.fn()
.mockReturnValue({ templateMappings: {}, skipTemplatePaths: [] }),
getConfig: vi
.fn()
.mockReturnValue({ templateMappings: {}, skipTemplatePaths: [] }),
validate: vi.fn().mockImplementation((config) => Promise.resolve(config)),
replaceConfig: vi.fn(),
},
},
{
@@ -43,6 +68,39 @@ describe('DockerResolver', () => {
getContainerUpdateStatuses: vi.fn(),
},
},
{
provide: DockerTemplateScannerService,
useValue: {
scanTemplates: vi.fn().mockResolvedValue({
scanned: 0,
matched: 0,
skipped: 0,
errors: [],
}),
syncMissingContainers: vi.fn().mockResolvedValue(false),
},
},
{
provide: DockerStatsService,
useValue: {
startStatsStream: vi.fn(),
stopStatsStream: vi.fn(),
},
},
{
provide: SubscriptionTrackerService,
useValue: {
registerTopic: vi.fn(),
subscribe: vi.fn(),
unsubscribe: vi.fn(),
},
},
{
provide: SubscriptionHelperService,
useValue: {
createTrackedSubscription: vi.fn(),
},
},
],
}).compile();
@@ -51,6 +109,8 @@ describe('DockerResolver', () => {
// Reset mocks before each test
vi.clearAllMocks();
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation(() => false);
vi.mocked(dockerService.getContainerLogSizes).mockResolvedValue(new Map());
});
it('should be defined', () => {
@@ -75,6 +135,7 @@ describe('DockerResolver', () => {
ports: [],
state: ContainerState.EXITED,
status: 'Exited',
isOrphaned: false,
},
{
id: '2',
@@ -87,16 +148,19 @@ describe('DockerResolver', () => {
ports: [],
state: ContainerState.RUNNING,
status: 'Up 2 hours',
isOrphaned: false,
},
];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockReturnValue(false);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation(() => false);
const mockInfo = {} as any;
const result = await resolver.containers(false, mockInfo);
expect(result).toEqual(mockContainers);
expect(GraphQLFieldHelper.isFieldRequested).toHaveBeenCalledWith(mockInfo, 'sizeRootFs');
expect(GraphQLFieldHelper.isFieldRequested).toHaveBeenCalledWith(mockInfo, 'sizeRw');
expect(GraphQLFieldHelper.isFieldRequested).toHaveBeenCalledWith(mockInfo, 'sizeLog');
expect(dockerService.getContainers).toHaveBeenCalledWith({ skipCache: false, size: false });
});
@@ -114,10 +178,13 @@ describe('DockerResolver', () => {
sizeRootFs: 1024000,
state: ContainerState.EXITED,
status: 'Exited',
isOrphaned: false,
},
];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockReturnValue(true);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation((_, field) => {
return field === 'sizeRootFs';
});
const mockInfo = {} as any;
@@ -127,10 +194,61 @@ describe('DockerResolver', () => {
expect(dockerService.getContainers).toHaveBeenCalledWith({ skipCache: false, size: true });
});
it('should request size when sizeRw field is requested', async () => {
const mockContainers: DockerContainer[] = [];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation((_, field) => {
return field === 'sizeRw';
});
const mockInfo = {} as any;
await resolver.containers(false, mockInfo);
expect(GraphQLFieldHelper.isFieldRequested).toHaveBeenCalledWith(mockInfo, 'sizeRw');
expect(dockerService.getContainers).toHaveBeenCalledWith({ skipCache: false, size: true });
});
it('should fetch log sizes when sizeLog field is requested', async () => {
const mockContainers: DockerContainer[] = [
{
id: '1',
autoStart: false,
command: 'test',
names: ['/test-container'],
created: 1234567890,
image: 'test-image',
imageId: 'test-image-id',
ports: [],
state: ContainerState.EXITED,
status: 'Exited',
isOrphaned: false,
},
];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation((_, field) => {
if (field === 'sizeLog') return true;
return false;
});
const logSizeMap = new Map<string, number>([['test-container', 42]]);
vi.mocked(dockerService.getContainerLogSizes).mockResolvedValue(logSizeMap);
const mockInfo = {} as any;
const result = await resolver.containers(false, mockInfo);
expect(GraphQLFieldHelper.isFieldRequested).toHaveBeenCalledWith(mockInfo, 'sizeLog');
expect(dockerService.getContainerLogSizes).toHaveBeenCalledWith(['test-container']);
expect(result[0]?.sizeLog).toBe(42);
expect(dockerService.getContainers).toHaveBeenCalledWith({ skipCache: false, size: false });
});
it('should request size when GraphQLFieldHelper indicates sizeRootFs is requested', async () => {
const mockContainers: DockerContainer[] = [];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockReturnValue(true);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation((_, field) => {
return field === 'sizeRootFs';
});
const mockInfo = {} as any;
@@ -142,7 +260,7 @@ describe('DockerResolver', () => {
it('should not request size when GraphQLFieldHelper indicates sizeRootFs is not requested', async () => {
const mockContainers: DockerContainer[] = [];
vi.mocked(dockerService.getContainers).mockResolvedValue(mockContainers);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockReturnValue(false);
vi.mocked(GraphQLFieldHelper.isFieldRequested).mockImplementation(() => false);
const mockInfo = {} as any;
@@ -161,4 +279,22 @@ describe('DockerResolver', () => {
await resolver.containers(true, mockInfo);
expect(dockerService.getContainers).toHaveBeenCalledWith({ skipCache: true, size: false });
});
it('should fetch container logs with provided arguments', async () => {
const since = new Date('2024-01-01T00:00:00.000Z');
const logResult: DockerContainerLogs = {
containerId: '1',
lines: [],
cursor: since,
};
vi.mocked(dockerService.getContainerLogs).mockResolvedValue(logResult);
const result = await resolver.logs('1', since, 25);
expect(result).toEqual(logResult);
expect(dockerService.getContainerLogs).toHaveBeenCalledWith('1', {
since,
tail: 25,
});
});
});

View File

@@ -1,19 +1,41 @@
import { Args, Info, Mutation, Query, ResolveField, Resolver } from '@nestjs/graphql';
import {
Args,
GraphQLISODateTime,
Info,
Int,
Mutation,
Query,
ResolveField,
Resolver,
Subscription,
} from '@nestjs/graphql';
import type { GraphQLResolveInfo } from 'graphql';
import { AuthAction, Resource } from '@unraid/shared/graphql.model.js';
import { PrefixedID } from '@unraid/shared/prefixed-id-scalar.js';
import { UsePermissions } from '@unraid/shared/use-permissions.directive.js';
import { GraphQLJSON } from 'graphql-scalars';
import { PUBSUB_CHANNEL } from '@app/core/pubsub.js';
import { UseFeatureFlag } from '@app/unraid-api/decorators/use-feature-flag.decorator.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerPhpService } from '@app/unraid-api/graph/resolvers/docker/docker-php.service.js';
import { DockerStatsService } from '@app/unraid-api/graph/resolvers/docker/docker-stats.service.js';
import { DockerTemplateSyncResult } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.model.js';
import { DockerTemplateScannerService } from '@app/unraid-api/graph/resolvers/docker/docker-template-scanner.service.js';
import { ExplicitStatusItem } from '@app/unraid-api/graph/resolvers/docker/docker-update-status.model.js';
import {
Docker,
DockerContainer,
DockerContainerLogs,
DockerContainerStats,
DockerNetwork,
DockerPortConflicts,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { DockerOrganizerService } from '@app/unraid-api/graph/resolvers/docker/organizer/docker-organizer.service.js';
import { SubscriptionHelperService } from '@app/unraid-api/graph/services/subscription-helper.service.js';
import { SubscriptionTrackerService } from '@app/unraid-api/graph/services/subscription-tracker.service.js';
import { DEFAULT_ORGANIZER_ROOT_ID } from '@app/unraid-api/organizer/organizer.js';
import { ResolvedOrganizerV1 } from '@app/unraid-api/organizer/organizer.model.js';
import { GraphQLFieldHelper } from '@app/unraid-api/utils/graphql-field-helper.js';
@@ -22,9 +44,20 @@ import { GraphQLFieldHelper } from '@app/unraid-api/utils/graphql-field-helper.j
export class DockerResolver {
constructor(
private readonly dockerService: DockerService,
private readonly dockerConfigService: DockerConfigService,
private readonly dockerOrganizerService: DockerOrganizerService,
private readonly dockerPhpService: DockerPhpService
) {}
private readonly dockerPhpService: DockerPhpService,
private readonly dockerTemplateScannerService: DockerTemplateScannerService,
private readonly dockerStatsService: DockerStatsService,
private readonly subscriptionTracker: SubscriptionTrackerService,
private readonly subscriptionHelper: SubscriptionHelperService
) {
this.subscriptionTracker.registerTopic(
PUBSUB_CHANNEL.DOCKER_STATS,
() => this.dockerStatsService.startStatsStream(),
() => this.dockerStatsService.stopStatsStream()
);
}
@UsePermissions({
action: AuthAction.READ_ANY,
@@ -37,6 +70,17 @@ export class DockerResolver {
};
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => DockerContainer, { nullable: true })
public async container(@Args('id', { type: () => PrefixedID }) id: string) {
const containers = await this.dockerService.getContainers({ skipCache: false });
return containers.find((c) => c.id === id) ?? null;
}
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
@@ -46,8 +90,47 @@ export class DockerResolver {
@Args('skipCache', { defaultValue: false, type: () => Boolean }) skipCache: boolean,
@Info() info: GraphQLResolveInfo
) {
const requestsSize = GraphQLFieldHelper.isFieldRequested(info, 'sizeRootFs');
return this.dockerService.getContainers({ skipCache, size: requestsSize });
const requestsRootFsSize = GraphQLFieldHelper.isFieldRequested(info, 'sizeRootFs');
const requestsRwSize = GraphQLFieldHelper.isFieldRequested(info, 'sizeRw');
const requestsLogSize = GraphQLFieldHelper.isFieldRequested(info, 'sizeLog');
const containers = await this.dockerService.getContainers({
skipCache,
size: requestsRootFsSize || requestsRwSize,
});
if (requestsLogSize) {
const names = Array.from(
new Set(
containers
.map((container) => container.names?.[0]?.replace(/^\//, '') || null)
.filter((name): name is string => Boolean(name))
)
);
const logSizes = await this.dockerService.getContainerLogSizes(names);
containers.forEach((container) => {
const normalized = container.names?.[0]?.replace(/^\//, '') || '';
container.sizeLog = normalized ? (logSizes.get(normalized) ?? 0) : 0;
});
}
const wasSynced = await this.dockerTemplateScannerService.syncMissingContainers(containers);
return wasSynced ? await this.dockerService.getContainers({ skipCache: true }) : containers;
}
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => DockerContainerLogs)
public async logs(
@Args('id', { type: () => PrefixedID }) id: string,
@Args('since', { type: () => GraphQLISODateTime, nullable: true }) since?: Date | null,
@Args('tail', { type: () => Int, nullable: true }) tail?: number | null
) {
return this.dockerService.getContainerLogs(id, {
since: since ?? undefined,
tail,
});
}
@UsePermissions({
@@ -61,14 +144,27 @@ export class DockerResolver {
return this.dockerService.getNetworks({ skipCache });
}
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => DockerPortConflicts)
public async portConflicts(
@Args('skipCache', { defaultValue: false, type: () => Boolean }) skipCache: boolean
) {
return this.dockerService.getPortConflicts({ skipCache });
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@ResolveField(() => ResolvedOrganizerV1)
public async organizer() {
return this.dockerOrganizerService.resolveOrganizer();
public async organizer(
@Args('skipCache', { defaultValue: false, type: () => Boolean }) skipCache: boolean
) {
return this.dockerOrganizerService.resolveOrganizer(undefined, { skipCache });
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@@ -107,6 +203,11 @@ export class DockerResolver {
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
/**
* Deletes organizer entries (folders). When a folder is deleted, its container
* children are automatically appended to the end of the root folder via
* `addMissingResourcesToView`. Containers are never permanently deleted by this operation.
*/
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
@@ -137,6 +238,80 @@ export class DockerResolver {
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => ResolvedOrganizerV1)
public async moveDockerItemsToPosition(
@Args('sourceEntryIds', { type: () => [String] }) sourceEntryIds: string[],
@Args('destinationFolderId') destinationFolderId: string,
@Args('position', { type: () => Number }) position: number
) {
const organizer = await this.dockerOrganizerService.moveItemsToPosition({
sourceEntryIds,
destinationFolderId,
position,
});
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => ResolvedOrganizerV1)
public async renameDockerFolder(
@Args('folderId') folderId: string,
@Args('newName') newName: string
) {
const organizer = await this.dockerOrganizerService.renameFolderById({
folderId,
newName,
});
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => ResolvedOrganizerV1)
public async createDockerFolderWithItems(
@Args('name') name: string,
@Args('parentId', { nullable: true }) parentId?: string,
@Args('sourceEntryIds', { type: () => [String], nullable: true }) sourceEntryIds?: string[],
@Args('position', { type: () => Number, nullable: true }) position?: number
) {
const organizer = await this.dockerOrganizerService.createFolderWithItems({
name,
parentId: parentId ?? DEFAULT_ORGANIZER_ROOT_ID,
sourceEntryIds: sourceEntryIds ?? [],
position,
});
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => ResolvedOrganizerV1)
public async updateDockerViewPreferences(
@Args('viewId', { nullable: true, defaultValue: 'default' }) viewId: string,
@Args('prefs', { type: () => GraphQLJSON }) prefs: Record<string, unknown>
) {
const organizer = await this.dockerOrganizerService.updateViewPreferences({
viewId,
prefs,
});
return this.dockerOrganizerService.resolveOrganizer(organizer);
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.READ_ANY,
@@ -146,4 +321,48 @@ export class DockerResolver {
public async containerUpdateStatuses() {
return this.dockerPhpService.getContainerUpdateStatuses();
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => DockerTemplateSyncResult)
public async syncDockerTemplatePaths() {
return this.dockerTemplateScannerService.scanTemplates();
}
@UseFeatureFlag('ENABLE_NEXT_DOCKER_RELEASE')
@UsePermissions({
action: AuthAction.UPDATE_ANY,
resource: Resource.DOCKER,
})
@Mutation(() => Boolean, {
description:
'Reset Docker template mappings to defaults. Use this to recover from corrupted state.',
})
public async resetDockerTemplateMappings(): Promise<boolean> {
const defaultConfig = this.dockerConfigService.defaultConfig();
const currentConfig = this.dockerConfigService.getConfig();
const resetConfig = {
...currentConfig,
templateMappings: defaultConfig.templateMappings,
skipTemplatePaths: defaultConfig.skipTemplatePaths,
};
const validated = await this.dockerConfigService.validate(resetConfig);
this.dockerConfigService.replaceConfig(validated);
await this.dockerService.clearContainerCache();
return true;
}
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.DOCKER,
})
@Subscription(() => DockerContainerStats, {
resolve: (payload) => payload.dockerContainerStats,
})
public dockerContainerStats() {
return this.subscriptionHelper.createTrackedSubscription(PUBSUB_CHANNEL.DOCKER_STATS);
}
}
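For completeness, a sketch of subscribing to the per-container stats stream registered above. The subscription field name follows the resolver method (dockerContainerStats) and the selected fields come from DockerContainerStats; how a client executes it (Apollo, graphql-ws, etc.) is left open:
const DOCKER_CONTAINER_STATS_SUBSCRIPTION = /* GraphQL */ `
  subscription DockerContainerStats {
    dockerContainerStats {
      id
      cpuPercent
      memUsage
      memPercent
      netIO
      blockIO
    }
  }
`;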

View File

@@ -0,0 +1,169 @@
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Test, TestingModule } from '@nestjs/testing';
import { mkdtemp, readFile, rm } from 'fs/promises';
import { tmpdir } from 'os';
import { join } from 'path';
import { afterAll, beforeAll, describe, expect, it, vi } from 'vitest';
import { DockerAutostartService } from '@app/unraid-api/graph/resolvers/docker/docker-autostart.service.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { NotificationsService } from '@app/unraid-api/graph/resolvers/notifications/notifications.service.js';
// Mock dependencies that are not the focus of these integration tests
const mockNotificationsService = {
notifyIfUnique: vi.fn(),
};
const mockDockerConfigService = {
getConfig: vi.fn().mockReturnValue({ templateMappings: {} }),
};
const mockDockerManifestService = {
getCachedUpdateStatuses: vi.fn().mockResolvedValue({}),
isUpdateAvailableCached: vi.fn().mockResolvedValue(false),
};
const mockCacheManager = {
get: vi.fn(),
set: vi.fn(),
del: vi.fn(),
};
// Hoisted mock for paths
const { mockPaths } = vi.hoisted(() => ({
mockPaths: {
'docker-autostart': '',
'docker-userprefs': '',
'docker-socket': '/var/run/docker.sock',
},
}));
vi.mock('@app/store/index.js', () => ({
getters: {
paths: () => mockPaths,
emhttp: () => ({ networks: [] }),
},
}));
// Check for Docker availability
let dockerAvailable = false;
try {
const Docker = (await import('dockerode')).default;
const docker = new Docker({ socketPath: '/var/run/docker.sock' });
await docker.ping();
dockerAvailable = true;
} catch {
console.warn('Docker not available or not accessible at /var/run/docker.sock');
}
describe.runIf(dockerAvailable)('DockerService Integration', () => {
let service: DockerService;
let autostartService: DockerAutostartService;
let module: TestingModule;
let tempDir: string;
beforeAll(async () => {
// Setup temp dir for config files
tempDir = await mkdtemp(join(tmpdir(), 'unraid-api-docker-test-'));
mockPaths['docker-autostart'] = join(tempDir, 'docker-autostart');
mockPaths['docker-userprefs'] = join(tempDir, 'docker-userprefs');
module = await Test.createTestingModule({
providers: [
DockerService,
DockerAutostartService,
DockerLogService,
DockerNetworkService,
DockerPortService,
{ provide: CACHE_MANAGER, useValue: mockCacheManager },
{ provide: DockerConfigService, useValue: mockDockerConfigService },
{ provide: DockerManifestService, useValue: mockDockerManifestService },
{ provide: NotificationsService, useValue: mockNotificationsService },
],
}).compile();
service = module.get<DockerService>(DockerService);
autostartService = module.get<DockerAutostartService>(DockerAutostartService);
});
afterAll(async () => {
if (tempDir) {
await rm(tempDir, { recursive: true, force: true });
}
});
it('should fetch containers from docker daemon', async () => {
const containers = await service.getContainers({ skipCache: true });
expect(Array.isArray(containers)).toBe(true);
if (containers.length > 0) {
expect(containers[0]).toHaveProperty('id');
expect(containers[0]).toHaveProperty('names');
expect(containers[0].state).toBeDefined();
}
});
it('should fetch networks from docker daemon', async () => {
const networks = await service.getNetworks({ skipCache: true });
expect(Array.isArray(networks)).toBe(true);
// Default networks (bridge, host, none) should always exist
expect(networks.length).toBeGreaterThan(0);
const bridge = networks.find((n) => n.name === 'bridge');
expect(bridge).toBeDefined();
});
it('should manage autostart configuration in temp files', async () => {
const containers = await service.getContainers({ skipCache: true });
if (containers.length === 0) {
console.warn('No containers found, skipping autostart write test');
return;
}
const target = containers[0];
// Ensure name is valid for autostart file (strip /)
const primaryName = autostartService.getContainerPrimaryName(target as any);
expect(primaryName).toBeTruthy();
const entry = {
id: target.id,
autoStart: true,
wait: 10,
};
await service.updateAutostartConfiguration([entry], { persistUserPreferences: true });
// Verify file content
try {
const content = await readFile(mockPaths['docker-autostart'], 'utf8');
expect(content).toContain(primaryName);
expect(content).toContain('10');
} catch (error: any) {
// If the file doesn't exist, the logic may not have written anything (e.g. a name issue),
// but we expect a write when the container exists and the entry is valid.
throw new Error(`Failed to read autostart file: ${error.message}`);
}
});
it('should get container logs using dockerode', async () => {
const containers = await service.getContainers({ skipCache: true });
const running = containers.find((c) => c.state === 'RUNNING'); // Enum value is string 'RUNNING'
if (!running) {
console.warn('No running containers found, skipping log test');
return;
}
// This test verifies that the execa -> dockerode switch works for logs
// If it fails, it likely means the log parsing or dockerode interaction is wrong.
const logs = await service.getContainerLogs(running.id, { tail: 10 });
expect(logs).toBeDefined();
expect(logs.containerId).toBe(running.id);
expect(Array.isArray(logs.lines)).toBe(true);
// We can't guarantee lines length > 0 if container is silent, but it shouldn't throw.
});
});

View File

@@ -7,8 +7,19 @@ import { beforeEach, describe, expect, it, vi } from 'vitest';
// Import the mocked pubsub parts
import { pubsub, PUBSUB_CHANNEL } from '@app/core/pubsub.js';
import { ContainerState, DockerContainer } from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerAutostartService } from '@app/unraid-api/graph/resolvers/docker/docker-autostart.service.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import {
ContainerPortType,
ContainerState,
DockerContainer,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { DockerService } from '@app/unraid-api/graph/resolvers/docker/docker.service.js';
import { NotificationsService } from '@app/unraid-api/graph/resolvers/notifications/notifications.service.js';
// Mock pubsub
vi.mock('@app/core/pubsub.js', () => ({
@@ -24,36 +35,58 @@ interface DockerError extends NodeJS.ErrnoException {
address: string;
}
const mockContainer = {
start: vi.fn(),
stop: vi.fn(),
};
const { mockDockerInstance, mockListContainers, mockGetContainer, mockListNetworks, mockContainer } =
vi.hoisted(() => {
const mockContainer = {
start: vi.fn(),
stop: vi.fn(),
pause: vi.fn(),
unpause: vi.fn(),
inspect: vi.fn(),
};
// Create properly typed mock functions
const mockListContainers = vi.fn();
const mockGetContainer = vi.fn().mockReturnValue(mockContainer);
const mockListNetworks = vi.fn();
const mockListContainers = vi.fn();
const mockGetContainer = vi.fn().mockReturnValue(mockContainer);
const mockListNetworks = vi.fn();
const mockDockerInstance = {
getContainer: mockGetContainer,
listContainers: mockListContainers,
listNetworks: mockListNetworks,
modem: {
Promise: Promise,
protocol: 'http',
socketPath: '/var/run/docker.sock',
headers: {},
sshOptions: {
agentForward: undefined,
},
},
} as unknown as Docker;
const mockDockerInstance = {
getContainer: mockGetContainer,
listContainers: mockListContainers,
listNetworks: mockListNetworks,
modem: {
Promise: Promise,
protocol: 'http',
socketPath: '/var/run/docker.sock',
headers: {},
sshOptions: {
agentForward: undefined,
},
},
} as unknown as Docker;
vi.mock('dockerode', () => {
return {
default: vi.fn().mockImplementation(() => mockDockerInstance),
};
});
return {
mockDockerInstance,
mockListContainers,
mockGetContainer,
mockListNetworks,
mockContainer,
};
});
vi.mock('@app/unraid-api/graph/resolvers/docker/utils/docker-client.js', () => ({
getDockerClient: vi.fn().mockReturnValue(mockDockerInstance),
}));
vi.mock('execa', () => ({
execa: vi.fn(),
}));
const { mockEmhttpGetter } = vi.hoisted(() => ({
mockEmhttpGetter: vi.fn().mockReturnValue({
networks: [],
var: {},
}),
}));
// Mock the store getters
vi.mock('@app/store/index.js', () => ({
@@ -61,15 +94,21 @@ vi.mock('@app/store/index.js', () => ({
docker: vi.fn().mockReturnValue({ containers: [] }),
paths: vi.fn().mockReturnValue({
'docker-autostart': '/path/to/docker-autostart',
'docker-userprefs': '/path/to/docker-userprefs',
'docker-socket': '/var/run/docker.sock',
'var-run': '/var/run',
}),
emhttp: mockEmhttpGetter,
},
}));
// Mock fs/promises
// Mock fs/promises (stat only)
const { statMock } = vi.hoisted(() => ({
statMock: vi.fn().mockResolvedValue({ size: 0 }),
}));
vi.mock('fs/promises', () => ({
readFile: vi.fn().mockResolvedValue(''),
stat: statMock,
}));
// Mock Cache Manager
@@ -79,6 +118,67 @@ const mockCacheManager = {
del: vi.fn(),
};
// Mock DockerConfigService
const mockDockerConfigService = {
getConfig: vi.fn().mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
}),
replaceConfig: vi.fn(),
validate: vi.fn((config) => Promise.resolve(config)),
};
const mockDockerManifestService = {
refreshDigests: vi.fn().mockResolvedValue(true),
getCachedUpdateStatuses: vi.fn().mockResolvedValue({}),
isUpdateAvailableCached: vi.fn().mockResolvedValue(false),
};
// Mock NotificationsService
const mockNotificationsService = {
notifyIfUnique: vi.fn().mockResolvedValue(null),
};
// Mock DockerAutostartService
const mockDockerAutostartService = {
refreshAutoStartEntries: vi.fn().mockResolvedValue(undefined),
getAutoStarts: vi.fn().mockResolvedValue([]),
getContainerPrimaryName: vi.fn((c) => {
if ('Names' in c) return c.Names[0]?.replace(/^\//, '') || null;
if ('names' in c) return c.names[0]?.replace(/^\//, '') || null;
return null;
}),
getAutoStartEntry: vi.fn(),
updateAutostartConfiguration: vi.fn().mockResolvedValue(undefined),
};
// Mock new services
const mockDockerLogService = {
getContainerLogSizes: vi.fn().mockResolvedValue(new Map([['test-container', 1024]])),
getContainerLogs: vi.fn().mockResolvedValue({ lines: [], cursor: null }),
};
const mockDockerNetworkService = {
getNetworks: vi.fn().mockResolvedValue([]),
};
// Use a real-ish mock for DockerPortService since it is used in transformContainer
const mockDockerPortService = {
deduplicateContainerPorts: vi.fn((ports) => {
if (!ports) return [];
// Simple dedupe logic for test
const seen = new Set();
return ports.filter((p) => {
const key = `${p.PrivatePort}-${p.PublicPort}-${p.Type}`;
if (seen.has(key)) return false;
seen.add(key);
return true;
});
}),
calculateConflicts: vi.fn().mockReturnValue({ containerPorts: [], lanPorts: [] }),
};
describe('DockerService', () => {
let service: DockerService;
@@ -88,9 +188,41 @@ describe('DockerService', () => {
mockListNetworks.mockReset();
mockContainer.start.mockReset();
mockContainer.stop.mockReset();
mockContainer.pause.mockReset();
mockContainer.unpause.mockReset();
mockContainer.inspect.mockReset();
mockCacheManager.get.mockReset();
mockCacheManager.set.mockReset();
mockCacheManager.del.mockReset();
statMock.mockReset();
statMock.mockResolvedValue({ size: 0 });
mockEmhttpGetter.mockReset();
mockEmhttpGetter.mockReturnValue({
networks: [],
var: {},
});
mockDockerConfigService.getConfig.mockReturnValue({
updateCheckCronSchedule: '0 6 * * *',
templateMappings: {},
skipTemplatePaths: [],
});
mockDockerManifestService.refreshDigests.mockReset();
mockDockerManifestService.refreshDigests.mockResolvedValue(true);
mockDockerAutostartService.refreshAutoStartEntries.mockReset();
mockDockerAutostartService.getAutoStarts.mockReset();
mockDockerAutostartService.getAutoStartEntry.mockReset();
mockDockerAutostartService.updateAutostartConfiguration.mockReset();
mockDockerLogService.getContainerLogSizes.mockReset();
mockDockerLogService.getContainerLogSizes.mockResolvedValue(new Map([['test-container', 1024]]));
mockDockerLogService.getContainerLogs.mockReset();
mockDockerNetworkService.getNetworks.mockReset();
mockDockerPortService.deduplicateContainerPorts.mockClear();
mockDockerPortService.calculateConflicts.mockReset();
const module: TestingModule = await Test.createTestingModule({
providers: [
@@ -99,6 +231,34 @@ describe('DockerService', () => {
provide: CACHE_MANAGER,
useValue: mockCacheManager,
},
{
provide: DockerConfigService,
useValue: mockDockerConfigService,
},
{
provide: DockerManifestService,
useValue: mockDockerManifestService,
},
{
provide: NotificationsService,
useValue: mockNotificationsService,
},
{
provide: DockerAutostartService,
useValue: mockDockerAutostartService,
},
{
provide: DockerLogService,
useValue: mockDockerLogService,
},
{
provide: DockerNetworkService,
useValue: mockDockerNetworkService,
},
{
provide: DockerPortService,
useValue: mockDockerPortService,
},
],
}).compile();
@@ -109,65 +269,6 @@ describe('DockerService', () => {
expect(service).toBeDefined();
});
it('should use separate cache keys for containers with and without size', async () => {
const mockContainersWithoutSize = [
{
Id: 'abc123',
Names: ['/test-container'],
Image: 'test-image',
ImageID: 'test-image-id',
Command: 'test',
Created: 1234567890,
State: 'exited',
Status: 'Exited',
Ports: [],
Labels: {},
HostConfig: { NetworkMode: 'bridge' },
NetworkSettings: {},
Mounts: [],
},
];
const mockContainersWithSize = [
{
Id: 'abc123',
Names: ['/test-container'],
Image: 'test-image',
ImageID: 'test-image-id',
Command: 'test',
Created: 1234567890,
State: 'exited',
Status: 'Exited',
Ports: [],
Labels: {},
HostConfig: { NetworkMode: 'bridge' },
NetworkSettings: {},
Mounts: [],
SizeRootFs: 1024000,
},
];
// First call without size
mockListContainers.mockResolvedValue(mockContainersWithoutSize);
mockCacheManager.get.mockResolvedValue(undefined);
await service.getContainers({ size: false });
expect(mockCacheManager.set).toHaveBeenCalledWith('docker_containers', expect.any(Array), 60000);
// Second call with size
mockListContainers.mockResolvedValue(mockContainersWithSize);
mockCacheManager.get.mockResolvedValue(undefined);
await service.getContainers({ size: true });
expect(mockCacheManager.set).toHaveBeenCalledWith(
'docker_containers_with_size',
expect.any(Array),
60000
);
});
it('should get containers', async () => {
const mockContainers = [
{
@@ -190,308 +291,100 @@ describe('DockerService', () => {
];
mockListContainers.mockResolvedValue(mockContainers);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss
mockCacheManager.get.mockResolvedValue(undefined);
const result = await service.getContainers({ skipCache: true }); // Skip cache for direct fetch test
const result = await service.getContainers({ skipCache: true });
expect(result).toEqual([
expect(result).toEqual(
expect.arrayContaining([
expect.objectContaining({
id: 'abc123def456',
names: ['/test-container'],
}),
])
);
expect(mockListContainers).toHaveBeenCalled();
expect(mockDockerAutostartService.refreshAutoStartEntries).toHaveBeenCalled();
expect(mockDockerPortService.deduplicateContainerPorts).toHaveBeenCalled();
});
it('should update auto-start configuration', async () => {
mockListContainers.mockResolvedValue([
{
id: 'abc123def456',
autoStart: false,
command: 'test',
created: 1234567890,
image: 'test-image',
imageId: 'test-image-id',
ports: [],
sizeRootFs: undefined,
state: ContainerState.EXITED,
status: 'Exited',
labels: {},
hostConfig: {
networkMode: 'bridge',
},
networkSettings: {},
mounts: [],
names: ['/test-container'],
Id: 'abc123',
Names: ['/alpha'],
State: 'running',
},
]);
expect(mockListContainers).toHaveBeenCalledWith({
all: true,
size: false,
});
expect(mockCacheManager.set).toHaveBeenCalled(); // Ensure cache is set
});
const input = [{ id: 'abc123', autoStart: true, wait: 15 }];
await service.updateAutostartConfiguration(input, { persistUserPreferences: true });
it('should start container', async () => {
const mockContainers = [
{
Id: 'abc123def456',
Names: ['/test-container'],
Image: 'test-image',
ImageID: 'test-image-id',
Command: 'test',
Created: 1234567890,
State: 'running',
Status: 'Up 2 hours',
Ports: [],
Labels: {},
HostConfig: {
NetworkMode: 'bridge',
},
NetworkSettings: {},
Mounts: [],
},
];
mockListContainers.mockResolvedValue(mockContainers);
mockContainer.start.mockResolvedValue(undefined);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss for getContainers call
const result = await service.start('abc123def456');
expect(result).toEqual({
id: 'abc123def456',
autoStart: false,
command: 'test',
created: 1234567890,
image: 'test-image',
imageId: 'test-image-id',
ports: [],
sizeRootFs: undefined,
state: ContainerState.RUNNING,
status: 'Up 2 hours',
labels: {},
hostConfig: {
networkMode: 'bridge',
},
networkSettings: {},
mounts: [],
names: ['/test-container'],
});
expect(mockContainer.start).toHaveBeenCalled();
expect(mockCacheManager.del).toHaveBeenCalledWith(DockerService.CONTAINER_CACHE_KEY);
expect(mockListContainers).toHaveBeenCalled();
expect(mockCacheManager.set).toHaveBeenCalled();
expect(pubsub.publish).toHaveBeenCalledWith(PUBSUB_CHANNEL.INFO, {
info: {
apps: { installed: 1, running: 1 },
},
});
});
it('should stop container', async () => {
const mockContainers = [
{
Id: 'abc123def456',
Names: ['/test-container'],
Image: 'test-image',
ImageID: 'test-image-id',
Command: 'test',
Created: 1234567890,
State: 'exited',
Status: 'Exited',
Ports: [],
Labels: {},
HostConfig: {
NetworkMode: 'bridge',
},
NetworkSettings: {},
Mounts: [],
},
];
mockListContainers.mockResolvedValue(mockContainers);
mockContainer.stop.mockResolvedValue(undefined);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss for getContainers calls
const result = await service.stop('abc123def456');
expect(result).toEqual({
id: 'abc123def456',
autoStart: false,
command: 'test',
created: 1234567890,
image: 'test-image',
imageId: 'test-image-id',
ports: [],
sizeRootFs: undefined,
state: ContainerState.EXITED,
status: 'Exited',
labels: {},
hostConfig: {
networkMode: 'bridge',
},
networkSettings: {},
mounts: [],
names: ['/test-container'],
});
expect(mockContainer.stop).toHaveBeenCalledWith({ t: 10 });
expect(mockCacheManager.del).toHaveBeenCalledWith(DockerService.CONTAINER_CACHE_KEY);
expect(mockListContainers).toHaveBeenCalled();
expect(mockCacheManager.set).toHaveBeenCalled();
expect(pubsub.publish).toHaveBeenCalledWith(PUBSUB_CHANNEL.INFO, {
info: {
apps: { installed: 1, running: 0 },
},
});
});
it('should throw error if container not found after start', async () => {
mockListContainers.mockResolvedValue([]);
mockContainer.start.mockResolvedValue(undefined);
mockCacheManager.get.mockResolvedValue(undefined);
await expect(service.start('not-found')).rejects.toThrow(
'Container not-found not found after starting'
expect(mockDockerAutostartService.updateAutostartConfiguration).toHaveBeenCalledWith(
input,
expect.any(Array),
{ persistUserPreferences: true }
);
expect(mockCacheManager.del).toHaveBeenCalledWith(DockerService.CONTAINER_CACHE_KEY);
});
it('should throw error if container not found after stop', async () => {
mockListContainers.mockResolvedValue([]);
mockContainer.stop.mockResolvedValue(undefined);
mockCacheManager.get.mockResolvedValue(undefined);
await expect(service.stop('not-found')).rejects.toThrow(
'Container not-found not found after stopping'
);
expect(mockCacheManager.del).toHaveBeenCalledWith(DockerService.CONTAINER_CACHE_KEY);
});
it('should get networks', async () => {
const mockNetworks = [
{
Id: 'network1',
Name: 'bridge',
Created: '2023-01-01T00:00:00Z',
Scope: 'local',
Driver: 'bridge',
EnableIPv6: false,
IPAM: {
Driver: 'default',
Config: [
{
Subnet: '172.17.0.0/16',
Gateway: '172.17.0.1',
},
],
},
Internal: false,
Attachable: false,
Ingress: false,
ConfigFrom: {
Network: '',
},
ConfigOnly: false,
Containers: {},
Options: {
'com.docker.network.bridge.default_bridge': 'true',
'com.docker.network.bridge.enable_icc': 'true',
'com.docker.network.bridge.enable_ip_masquerade': 'true',
'com.docker.network.bridge.host_binding_ipv4': '0.0.0.0',
'com.docker.network.bridge.name': 'docker0',
'com.docker.network.driver.mtu': '1500',
},
Labels: {},
},
];
mockListNetworks.mockResolvedValue(mockNetworks);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss
const result = await service.getNetworks({ skipCache: true }); // Skip cache for direct fetch test
expect(result).toMatchInlineSnapshot(`
[
{
"attachable": false,
"configFrom": {
"Network": "",
},
"configOnly": false,
"containers": {},
"created": "2023-01-01T00:00:00Z",
"driver": "bridge",
"enableIPv6": false,
"id": "network1",
"ingress": false,
"internal": false,
"ipam": {
"Config": [
{
"Gateway": "172.17.0.1",
"Subnet": "172.17.0.0/16",
},
],
"Driver": "default",
},
"labels": {},
"name": "bridge",
"options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500",
},
"scope": "local",
},
]
`);
expect(mockListNetworks).toHaveBeenCalled();
expect(mockCacheManager.set).toHaveBeenCalled(); // Ensure cache is set
});
it('should handle empty networks list', async () => {
mockListNetworks.mockResolvedValue([]);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss
const result = await service.getNetworks({ skipCache: true }); // Skip cache for direct fetch test
expect(result).toEqual([]);
expect(mockListNetworks).toHaveBeenCalled();
expect(mockCacheManager.set).toHaveBeenCalled(); // Ensure cache is set
});
it('should handle docker error when getting networks', async () => {
const error = new Error('Docker error') as DockerError;
error.code = 'ENOENT';
error.address = '/var/run/docker.sock';
mockListNetworks.mockRejectedValue(error);
mockCacheManager.get.mockResolvedValue(undefined); // Simulate cache miss
await expect(service.getNetworks({ skipCache: true })).rejects.toThrow(
'Docker socket unavailable.'
);
expect(mockListNetworks).toHaveBeenCalled();
expect(mockCacheManager.set).not.toHaveBeenCalled(); // Ensure cache is NOT set on error
it('should delegate getContainerLogSizes to DockerLogService', async () => {
const sizes = await service.getContainerLogSizes(['test-container']);
expect(mockDockerLogService.getContainerLogSizes).toHaveBeenCalledWith(['test-container']);
expect(sizes.get('test-container')).toBe(1024);
});
describe('getAppInfo', () => {
// Common mock containers for these tests
const mockContainersForMethods = [
{ id: 'abc1', state: ContainerState.RUNNING },
{ id: 'def2', state: ContainerState.EXITED },
] as DockerContainer[];
it('should return correct app info object', async () => {
// Mock cache response for getContainers call
mockCacheManager.get.mockResolvedValue(mockContainersForMethods);
const result = await service.getAppInfo(); // Call the renamed method
const result = await service.getAppInfo();
expect(result).toEqual({
info: {
apps: { installed: 2, running: 1 },
},
});
// getContainers should now be called only ONCE from cache
expect(mockCacheManager.get).toHaveBeenCalledTimes(1);
expect(mockCacheManager.get).toHaveBeenCalledWith(DockerService.CONTAINER_CACHE_KEY);
});
});
describe('transformContainer', () => {
it('deduplicates ports that only differ by bound IP addresses', () => {
mockEmhttpGetter.mockReturnValue({
networks: [{ ipaddr: ['192.168.0.10'] }],
var: {},
});
const container = {
Id: 'duplicate-ports',
Names: ['/duplicate-ports'],
Image: 'test-image',
ImageID: 'sha256:123',
Command: 'test',
Created: 1700000000,
State: 'running',
Status: 'Up 2 hours',
Ports: [
{ IP: '0.0.0.0', PrivatePort: 8080, PublicPort: 8080, Type: 'tcp' },
{ IP: '::', PrivatePort: 8080, PublicPort: 8080, Type: 'tcp' },
{ IP: '0.0.0.0', PrivatePort: 5000, PublicPort: 5000, Type: 'udp' },
],
Labels: {},
HostConfig: { NetworkMode: 'bridge' },
NetworkSettings: { Networks: {} },
Mounts: [],
} as Docker.ContainerInfo;
service.transformContainer(container);
expect(mockDockerPortService.deduplicateContainerPorts).toHaveBeenCalledWith(
container.Ports
);
});
});
});

View File

@@ -1,20 +1,30 @@
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Inject, Injectable, Logger, OnModuleInit } from '@nestjs/common';
import { readFile } from 'fs/promises';
import { Inject, Injectable, Logger } from '@nestjs/common';
import { type Cache } from 'cache-manager';
import Docker from 'dockerode';
import { execa } from 'execa';
import { pubsub, PUBSUB_CHANNEL } from '@app/core/pubsub.js';
import { catchHandlers } from '@app/core/utils/misc/catch-handlers.js';
import { sleep } from '@app/core/utils/misc/sleep.js';
import { getters } from '@app/store/index.js';
import { getLanIp } from '@app/core/utils/network.js';
import { DockerAutostartService } from '@app/unraid-api/graph/resolvers/docker/docker-autostart.service.js';
import { DockerConfigService } from '@app/unraid-api/graph/resolvers/docker/docker-config.service.js';
import { DockerLogService } from '@app/unraid-api/graph/resolvers/docker/docker-log.service.js';
import { DockerManifestService } from '@app/unraid-api/graph/resolvers/docker/docker-manifest.service.js';
import { DockerNetworkService } from '@app/unraid-api/graph/resolvers/docker/docker-network.service.js';
import { DockerPortService } from '@app/unraid-api/graph/resolvers/docker/docker-port.service.js';
import {
ContainerPortType,
ContainerState,
DockerAutostartEntryInput,
DockerContainer,
DockerContainerLogs,
DockerNetwork,
DockerPortConflicts,
} from '@app/unraid-api/graph/resolvers/docker/docker.model.js';
import { getDockerClient } from '@app/unraid-api/graph/resolvers/docker/utils/docker-client.js';
interface ContainerListingOptions extends Docker.ContainerListOptions {
skipCache: boolean;
@@ -27,25 +37,26 @@ interface NetworkListingOptions {
@Injectable()
export class DockerService {
private client: Docker;
private autoStarts: string[] = [];
private readonly logger = new Logger(DockerService.name);
public static readonly CONTAINER_CACHE_KEY = 'docker_containers';
public static readonly CONTAINER_WITH_SIZE_CACHE_KEY = 'docker_containers_with_size';
public static readonly NETWORK_CACHE_KEY = 'docker_networks';
public static readonly CACHE_TTL_SECONDS = 60; // Cache for 60 seconds
public static readonly CACHE_TTL_SECONDS = 60;
constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {
this.client = this.getDockerClient();
constructor(
@Inject(CACHE_MANAGER) private cacheManager: Cache,
private readonly dockerConfigService: DockerConfigService,
private readonly dockerManifestService: DockerManifestService,
private readonly autostartService: DockerAutostartService,
private readonly dockerLogService: DockerLogService,
private readonly dockerNetworkService: DockerNetworkService,
private readonly dockerPortService: DockerPortService
) {
this.client = getDockerClient();
}
public getDockerClient() {
return new Docker({
socketPath: '/var/run/docker.sock',
});
}
async getAppInfo() {
public async getAppInfo() {
const containers = await this.getContainers({ skipCache: false });
const installedCount = containers.length;
const runningCount = containers.filter(
@@ -65,31 +76,47 @@ export class DockerService {
* @see https://github.com/limetech/webgui/issues/502#issue-480992547
*/
public async getAutoStarts(): Promise<string[]> {
const autoStartFile = await readFile(getters.paths()['docker-autostart'], 'utf8')
.then((file) => file.toString())
.catch(() => '');
return autoStartFile.split('\n');
return this.autostartService.getAutoStarts();
}
public transformContainer(container: Docker.ContainerInfo): DockerContainer {
public transformContainer(container: Docker.ContainerInfo): Omit<DockerContainer, 'isOrphaned'> {
const sizeValue = (container as Docker.ContainerInfo & { SizeRootFs?: number }).SizeRootFs;
const primaryName = this.autostartService.getContainerPrimaryName(container) ?? '';
const autoStartEntry = primaryName
? this.autostartService.getAutoStartEntry(primaryName)
: undefined;
const lanIp = getLanIp();
const lanPortStrings: string[] = [];
const uniquePorts = this.dockerPortService.deduplicateContainerPorts(container.Ports);
const transformed: DockerContainer = {
const transformedPorts = uniquePorts.map((port) => {
if (port.PublicPort) {
const lanPort = lanIp ? `${lanIp}:${port.PublicPort}` : `${port.PublicPort}`;
if (lanPort) {
lanPortStrings.push(lanPort);
}
}
return {
ip: port.IP || '',
privatePort: port.PrivatePort,
publicPort: port.PublicPort,
type:
ContainerPortType[
(port.Type || 'tcp').toUpperCase() as keyof typeof ContainerPortType
] || ContainerPortType.TCP,
};
});
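// Illustrative (assumed values, mirroring the spec): with a LAN IP of 192.168.0.10 and a
// PublicPort of 8080, lanPortStrings gains '192.168.0.10:8080'; without a LAN IP it falls
// back to the bare port string '8080'.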
const transformed: Omit<DockerContainer, 'isOrphaned'> = {
id: container.Id,
names: container.Names,
image: container.Image,
imageId: container.ImageID,
command: container.Command,
created: container.Created,
ports: container.Ports.map((port) => ({
ip: port.IP || '',
privatePort: port.PrivatePort,
publicPort: port.PublicPort,
type:
ContainerPortType[port.Type.toUpperCase() as keyof typeof ContainerPortType] ||
ContainerPortType.TCP,
})),
ports: transformedPorts,
sizeRootFs: sizeValue,
sizeRw: (container as Docker.ContainerInfo & { SizeRw?: number }).SizeRw,
labels: container.Labels ?? {},
state:
typeof container.State === 'string'
@@ -102,9 +129,15 @@ export class DockerService {
},
networkSettings: container.NetworkSettings,
mounts: container.Mounts,
autoStart: this.autoStarts.includes(container.Names[0].split('/')[1]),
autoStart: Boolean(autoStartEntry),
autoStartOrder: autoStartEntry?.order,
autoStartWait: autoStartEntry?.wait,
};
if (lanPortStrings.length > 0) {
transformed.lanIpPorts = lanPortStrings;
}
return transformed;
}
@@ -129,66 +162,65 @@ export class DockerService {
}
this.logger.debug(`Updating docker container cache (${size ? 'with' : 'without'} size)`);
const rawContainers =
(await this.client
.listContainers({
all,
size,
...listOptions,
})
.catch(catchHandlers.docker)) ?? [];
let rawContainers: Docker.ContainerInfo[] = [];
try {
rawContainers = await this.client.listContainers({
all,
size,
...listOptions,
});
} catch (error) {
this.handleDockerListError(error);
}
this.autoStarts = await this.getAutoStarts();
await this.autostartService.refreshAutoStartEntries();
const containers = rawContainers.map((container) => this.transformContainer(container));
await this.cacheManager.set(cacheKey, containers, DockerService.CACHE_TTL_SECONDS * 1000);
return containers;
const config = this.dockerConfigService.getConfig();
const containersWithTemplatePaths = containers.map((c) => {
const containerName = c.names[0]?.replace(/^\//, '').toLowerCase() ?? '';
const templatePath = config.templateMappings?.[containerName] || undefined;
return {
...c,
templatePath,
isOrphaned: !templatePath,
};
});
await this.cacheManager.set(
cacheKey,
containersWithTemplatePaths,
DockerService.CACHE_TTL_SECONDS * 1000
);
return containersWithTemplatePaths;
}
public async getPortConflicts({
skipCache = false,
}: {
skipCache?: boolean;
} = {}): Promise<DockerPortConflicts> {
const containers = await this.getContainers({ skipCache });
return this.dockerPortService.calculateConflicts(containers);
}
public async getContainerLogSizes(containerNames: string[]): Promise<Map<string, number>> {
return this.dockerLogService.getContainerLogSizes(containerNames);
}
public async getContainerLogs(
id: string,
options?: { since?: Date | null; tail?: number | null }
): Promise<DockerContainerLogs> {
return this.dockerLogService.getContainerLogs(id, options);
}
/**
* Get all Docker networks
* @returns All active and inactive Docker networks on the system.
*/
public async getNetworks({ skipCache }: NetworkListingOptions): Promise<DockerNetwork[]> {
if (!skipCache) {
const cachedNetworks = await this.cacheManager.get<DockerNetwork[]>(
DockerService.NETWORK_CACHE_KEY
);
if (cachedNetworks) {
this.logger.debug('Using docker network cache');
return cachedNetworks;
}
}
this.logger.debug('Updating docker network cache');
const rawNetworks = await this.client.listNetworks().catch(catchHandlers.docker);
const networks = rawNetworks.map(
(network) =>
({
name: network.Name || '',
id: network.Id || '',
created: network.Created || '',
scope: network.Scope || '',
driver: network.Driver || '',
enableIPv6: network.EnableIPv6 || false,
ipam: network.IPAM || {},
internal: network.Internal || false,
attachable: network.Attachable || false,
ingress: network.Ingress || false,
configFrom: network.ConfigFrom || {},
configOnly: network.ConfigOnly || false,
containers: network.Containers || {},
options: network.Options || {},
labels: network.Labels || {},
}) as DockerNetwork
);
await this.cacheManager.set(
DockerService.NETWORK_CACHE_KEY,
networks,
DockerService.CACHE_TTL_SECONDS * 1000
);
return networks;
public async getNetworks(options: NetworkListingOptions): Promise<DockerNetwork[]> {
return this.dockerNetworkService.getNetworks(options);
}
public async clearContainerCache(): Promise<void> {
@@ -214,6 +246,45 @@ export class DockerService {
return updatedContainer;
}
public async removeContainer(id: string, options?: { withImage?: boolean }): Promise<boolean> {
const container = this.client.getContainer(id);
try {
const inspectData = options?.withImage ? await container.inspect() : null;
const imageId = inspectData?.Image;
await container.remove({ force: true });
this.logger.debug(`Removed container ${id}`);
if (options?.withImage && imageId) {
try {
const image = this.client.getImage(imageId);
await image.remove({ force: true });
this.logger.debug(`Removed image ${imageId} for container ${id}`);
} catch (imageError) {
this.logger.warn(`Failed to remove image ${imageId}:`, imageError);
}
}
await this.clearContainerCache();
this.logger.debug(`Invalidated container caches after removing ${id}`);
const appInfo = await this.getAppInfo();
await pubsub.publish(PUBSUB_CHANNEL.INFO, appInfo);
return true;
} catch (error) {
this.logger.error(`Failed to remove container ${id}:`, error);
throw new Error(`Failed to remove container ${id}`);
}
}
public async updateAutostartConfiguration(
entries: DockerAutostartEntryInput[],
options?: { persistUserPreferences?: boolean }
): Promise<void> {
const containers = await this.getContainers({ skipCache: true });
await this.autostartService.updateAutostartConfiguration(entries, containers, options);
await this.clearContainerCache();
}
public async stop(id: string): Promise<DockerContainer> {
const container = this.client.getContainer(id);
await container.stop({ t: 10 });
@@ -243,4 +314,162 @@ export class DockerService {
await pubsub.publish(PUBSUB_CHANNEL.INFO, appInfo);
return updatedContainer;
}
public async pause(id: string): Promise<DockerContainer> {
const container = this.client.getContainer(id);
await container.pause();
await this.cacheManager.del(DockerService.CONTAINER_CACHE_KEY);
this.logger.debug(`Invalidated container cache after pausing ${id}`);
let containers = await this.getContainers({ skipCache: true });
let updatedContainer: DockerContainer | undefined;
for (let i = 0; i < 5; i++) {
await sleep(500);
containers = await this.getContainers({ skipCache: true });
updatedContainer = containers.find((c) => c.id === id);
this.logger.debug(
`Container ${id} state after pause attempt ${i + 1}: ${updatedContainer?.state}`
);
if (updatedContainer?.state === ContainerState.PAUSED) {
break;
}
}
if (!updatedContainer) {
throw new Error(`Container ${id} not found after pausing`);
}
const appInfo = await this.getAppInfo();
await pubsub.publish(PUBSUB_CHANNEL.INFO, appInfo);
return updatedContainer;
}
public async unpause(id: string): Promise<DockerContainer> {
const container = this.client.getContainer(id);
await container.unpause();
await this.cacheManager.del(DockerService.CONTAINER_CACHE_KEY);
this.logger.debug(`Invalidated container cache after unpausing ${id}`);
let containers = await this.getContainers({ skipCache: true });
let updatedContainer: DockerContainer | undefined;
for (let i = 0; i < 5; i++) {
await sleep(500);
containers = await this.getContainers({ skipCache: true });
updatedContainer = containers.find((c) => c.id === id);
this.logger.debug(
`Container ${id} state after unpause attempt ${i + 1}: ${updatedContainer?.state}`
);
if (updatedContainer?.state === ContainerState.RUNNING) {
break;
}
}
if (!updatedContainer) {
throw new Error(`Container ${id} not found after unpausing`);
}
const appInfo = await this.getAppInfo();
await pubsub.publish(PUBSUB_CHANNEL.INFO, appInfo);
return updatedContainer;
}
public async updateContainer(id: string): Promise<DockerContainer> {
const containers = await this.getContainers({ skipCache: true });
const container = containers.find((c) => c.id === id);
if (!container) {
throw new Error(`Container ${id} not found`);
}
const containerName = container.names?.[0]?.replace(/^\//, '');
if (!containerName) {
throw new Error(`Container ${id} has no name`);
}
this.logger.log(`Updating container ${containerName} (${id})`);
try {
await execa(
'/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/update_container',
[encodeURIComponent(containerName)],
{ shell: 'bash' }
);
} catch (error) {
this.logger.error(`Failed to update container ${containerName}:`, error);
throw new Error(`Failed to update container ${containerName}`);
}
await this.clearContainerCache();
this.logger.debug(`Invalidated container caches after updating ${id}`);
const updatedContainers = await this.getContainers({ skipCache: true });
const updatedContainer = updatedContainers.find(
(c) => c.names?.some((name) => name.replace(/^\//, '') === containerName) || c.id === id
);
if (!updatedContainer) {
throw new Error(`Container ${id} not found after update`);
}
const appInfo = await this.getAppInfo();
await pubsub.publish(PUBSUB_CHANNEL.INFO, appInfo);
return updatedContainer;
}
public async updateContainers(ids: string[]): Promise<DockerContainer[]> {
const uniqueIds = Array.from(new Set(ids.filter((id) => typeof id === 'string' && id.length)));
const updatedContainers: DockerContainer[] = [];
for (const id of uniqueIds) {
const updated = await this.updateContainer(id);
updatedContainers.push(updated);
}
return updatedContainers;
}
/**
* Updates every container with an available update. Mirrors the legacy webgui "Update All" flow.
*/
public async updateAllContainers(): Promise<DockerContainer[]> {
const containers = await this.getContainers({ skipCache: true });
if (!containers.length) {
return [];
}
const cachedStatuses = await this.dockerManifestService.getCachedUpdateStatuses();
const idsWithUpdates: string[] = [];
for (const container of containers) {
if (!container.image) {
continue;
}
const hasUpdate = await this.dockerManifestService.isUpdateAvailableCached(
container.image,
cachedStatuses
);
if (hasUpdate) {
idsWithUpdates.push(container.id);
}
}
if (!idsWithUpdates.length) {
this.logger.log('Update-all requested but no containers have available updates');
return [];
}
this.logger.log(`Updating ${idsWithUpdates.length} container(s) via updateAllContainers`);
return this.updateContainers(idsWithUpdates);
}
private handleDockerListError(error: unknown): never {
const message = this.getDockerErrorMessage(error);
this.logger.warn(`Docker container query failed: ${message}`);
catchHandlers.docker(error as NodeJS.ErrnoException);
throw error instanceof Error ? error : new Error('Docker list error');
}
private getDockerErrorMessage(error: unknown): string {
if (error instanceof Error && error.message) {
return error.message;
}
if (typeof error === 'string' && error.length) {
return error;
}
return 'Unknown error occurred.';
}
}

View File

@@ -2,6 +2,7 @@ import { Test } from '@nestjs/testing';
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { DockerTemplateIconService } from '@app/unraid-api/graph/resolvers/docker/docker-template-icon.service.js';
import {
ContainerPortType,
ContainerState,
@@ -38,6 +39,7 @@ describe('containerToResource', () => {
labels: {
'com.docker.compose.service': 'web',
},
isOrphaned: false,
};
const result = containerToResource(container);
@@ -62,6 +64,7 @@ describe('containerToResource', () => {
state: ContainerState.EXITED,
status: 'Exited (0) 1 hour ago',
autoStart: false,
isOrphaned: false,
};
const result = containerToResource(container);
@@ -83,6 +86,7 @@ describe('containerToResource', () => {
state: ContainerState.EXITED,
status: 'Exited (0) 5 minutes ago',
autoStart: false,
isOrphaned: false,
};
const result = containerToResource(container);
@@ -124,6 +128,7 @@ describe('containerToResource', () => {
maintainer: 'dev-team',
version: '1.0.0',
},
isOrphaned: false,
};
const result = containerToResource(container);
@@ -216,6 +221,12 @@ describe('DockerOrganizerService', () => {
]),
},
},
{
provide: DockerTemplateIconService,
useValue: {
getIconsForContainers: vi.fn().mockResolvedValue(new Map()),
},
},
],
}).compile();
@@ -674,16 +685,31 @@ describe('DockerOrganizerService', () => {
const TO_DELETE = ['entryB', 'entryD'];
const EXPECTED_REMAINING = ['entryA', 'entryC'];
// Mock getContainers to return containers matching our test entries
const mockContainers = ENTRIES.map((entryId, i) => ({
id: `container-${entryId}`,
names: [`/${entryId}`],
image: 'test:latest',
imageId: `sha256:${i}`,
command: 'test',
created: 1640995200 + i,
ports: [],
state: 'running',
status: 'Up 1 hour',
autoStart: true,
}));
(dockerService.getContainers as any).mockResolvedValue(mockContainers);
const organizerWithOrdering = createTestOrganizer();
const rootFolder = getRootFolder(organizerWithOrdering);
rootFolder.children = [...ENTRIES];
// Create the test entries
// Create refs pointing to the container names (which will be /{entryId})
ENTRIES.forEach((entryId) => {
organizerWithOrdering.views.default.entries[entryId] = {
id: entryId,
type: 'ref',
target: `target_${entryId}`,
target: `/${entryId}`,
};
});

View File

@@ -9,10 +9,13 @@ import { DockerOrganizerConfigService } from '@app/unraid-api/graph/resolvers/do
import {
addMissingResourcesToView,
createFolderInView,
createFolderWithItems,
DEFAULT_ORGANIZER_ROOT_ID,
DEFAULT_ORGANIZER_VIEW_ID,
deleteOrganizerEntries,
moveEntriesToFolder,
moveItemsToPosition,
renameFolder,
resolveOrganizer,
setFolderChildrenInView,
} from '@app/unraid-api/organizer/organizer.js';
@@ -51,8 +54,14 @@ export class DockerOrganizerService {
private readonly dockerService: DockerService
) {}
async getResources(opts?: ContainerListOptions): Promise<OrganizerV1['resources']> {
const containers = await this.dockerService.getContainers(opts);
async getResources(
opts?: Partial<ContainerListOptions> & { skipCache?: boolean }
): Promise<OrganizerV1['resources']> {
const { skipCache = false, ...listOptions } = opts ?? {};
const containers = await this.dockerService.getContainers({
skipCache,
...(listOptions as any),
});
return containerListToResourcesObject(containers);
}
@@ -74,17 +83,20 @@ export class DockerOrganizerService {
return newOrganizer;
}
async syncAndGetOrganizer(): Promise<OrganizerV1> {
async syncAndGetOrganizer(opts?: { skipCache?: boolean }): Promise<OrganizerV1> {
let organizer = this.dockerConfigService.getConfig();
organizer.resources = await this.getResources();
organizer.resources = await this.getResources(opts);
organizer = await this.syncDefaultView(organizer, organizer.resources);
organizer = await this.dockerConfigService.validate(organizer);
this.dockerConfigService.replaceConfig(organizer);
return organizer;
}
async resolveOrganizer(organizer?: OrganizerV1): Promise<ResolvedOrganizerV1> {
organizer ??= await this.syncAndGetOrganizer();
async resolveOrganizer(
organizer?: OrganizerV1,
opts?: { skipCache?: boolean }
): Promise<ResolvedOrganizerV1> {
organizer ??= await this.syncAndGetOrganizer(opts);
return resolveOrganizer(organizer);
}
@@ -192,7 +204,10 @@ export class DockerOrganizerService {
const newOrganizer = structuredClone(organizer);
deleteOrganizerEntries(newOrganizer.views.default, entryIds, { mutate: true });
addMissingResourcesToView(newOrganizer.resources, newOrganizer.views.default);
newOrganizer.views.default = addMissingResourcesToView(
newOrganizer.resources,
newOrganizer.views.default
);
const validated = await this.dockerConfigService.validate(newOrganizer);
this.dockerConfigService.replaceConfig(validated);
@@ -222,4 +237,119 @@ export class DockerOrganizerService {
this.dockerConfigService.replaceConfig(validated);
return validated;
}
async moveItemsToPosition(params: {
sourceEntryIds: string[];
destinationFolderId: string;
position: number;
}): Promise<OrganizerV1> {
const { sourceEntryIds, destinationFolderId, position } = params;
const organizer = await this.syncAndGetOrganizer();
const newOrganizer = structuredClone(organizer);
const defaultView = newOrganizer.views.default;
if (!defaultView) {
throw new AppError('Default view not found');
}
newOrganizer.views.default = moveItemsToPosition({
view: defaultView,
sourceEntryIds: new Set(sourceEntryIds),
destinationFolderId,
position,
resources: newOrganizer.resources,
});
const validated = await this.dockerConfigService.validate(newOrganizer);
this.dockerConfigService.replaceConfig(validated);
return validated;
}
async renameFolderById(params: { folderId: string; newName: string }): Promise<OrganizerV1> {
const { folderId, newName } = params;
const organizer = await this.syncAndGetOrganizer();
const newOrganizer = structuredClone(organizer);
const defaultView = newOrganizer.views.default;
if (!defaultView) {
throw new AppError('Default view not found');
}
newOrganizer.views.default = renameFolder({
view: defaultView,
folderId,
newName,
});
const validated = await this.dockerConfigService.validate(newOrganizer);
this.dockerConfigService.replaceConfig(validated);
return validated;
}
async createFolderWithItems(params: {
name: string;
parentId?: string;
sourceEntryIds?: string[];
position?: number;
}): Promise<OrganizerV1> {
const { name, parentId = DEFAULT_ORGANIZER_ROOT_ID, sourceEntryIds = [], position } = params;
if (name === DEFAULT_ORGANIZER_ROOT_ID) {
throw new AppError(`Folder name '${name}' is reserved`);
} else if (name === parentId) {
throw new AppError(`Folder ID '${name}' cannot be the same as the parent ID`);
} else if (!name) {
throw new AppError(`Folder name cannot be empty`);
}
const organizer = await this.syncAndGetOrganizer();
const defaultView = organizer.views.default;
if (!defaultView) {
throw new AppError('Default view not found');
}
const parentEntry = defaultView.entries[parentId];
if (!parentEntry || parentEntry.type !== 'folder') {
throw new AppError(`Parent '${parentId}' not found or is not a folder`);
}
if (parentEntry.children.includes(name)) {
return organizer;
}
const newOrganizer = structuredClone(organizer);
newOrganizer.views.default = createFolderWithItems({
view: defaultView,
folderId: name,
folderName: name,
parentId,
sourceEntryIds,
position,
resources: newOrganizer.resources,
});
const validated = await this.dockerConfigService.validate(newOrganizer);
this.dockerConfigService.replaceConfig(validated);
return validated;
}
async updateViewPreferences(params: {
viewId?: string;
prefs: Record<string, unknown>;
}): Promise<OrganizerV1> {
const { viewId = DEFAULT_ORGANIZER_VIEW_ID, prefs } = params;
const organizer = await this.syncAndGetOrganizer();
const newOrganizer = structuredClone(organizer);
const view = newOrganizer.views[viewId];
if (!view) {
throw new AppError(`View '${viewId}' not found`);
}
view.prefs = prefs;
const validated = await this.dockerConfigService.validate(newOrganizer);
this.dockerConfigService.replaceConfig(validated);
return validated;
}
}

View File

@@ -0,0 +1,12 @@
import Docker from 'dockerode';
let instance: Docker | undefined;
export function getDockerClient(): Docker {
if (!instance) {
instance = new Docker({
socketPath: '/var/run/docker.sock',
});
}
return instance;
}
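// Minimal consumer sketch (illustrative, not part of this file): every caller of
// getDockerClient() shares the same dockerode instance, so repeated imports reuse one
// socket connection. `listRunningContainerNames` is a hypothetical helper that assumes
// dockerode's documented listContainers API.
export async function listRunningContainerNames(): Promise<string[]> {
// Only running containers; pass { all: true } to include stopped ones.
const containers = await getDockerClient().listContainers({ all: false });
// Container names are reported with a leading slash, e.g. '/plex'.
return containers.flatMap((info) => info.Names.map((name) => name.replace(/^\//, '')));
}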

View File

@@ -24,6 +24,11 @@ export class VmMutations {}
})
export class ApiKeyMutations {}
@ObjectType({
description: 'Customization related mutations',
})
export class CustomizationMutations {}
@ObjectType({
description: 'Parity check related mutations, WIP, response types and functionality will change',
})
@@ -54,6 +59,9 @@ export class RootMutations {
@Field(() => ApiKeyMutations, { description: 'API Key related mutations' })
apiKey: ApiKeyMutations = new ApiKeyMutations();
@Field(() => CustomizationMutations, { description: 'Customization related mutations' })
customization: CustomizationMutations = new CustomizationMutations();
@Field(() => ParityCheckMutations, { description: 'Parity check related mutations' })
parityCheck: ParityCheckMutations = new ParityCheckMutations();

View File

@@ -3,6 +3,7 @@ import { Mutation, Resolver } from '@nestjs/graphql';
import {
ApiKeyMutations,
ArrayMutations,
CustomizationMutations,
DockerMutations,
ParityCheckMutations,
RCloneMutations,
@@ -37,6 +38,11 @@ export class RootMutationsResolver {
return new ApiKeyMutations();
}
@Mutation(() => CustomizationMutations, { name: 'customization' })
customization(): CustomizationMutations {
return new CustomizationMutations();
}
@Mutation(() => RCloneMutations, { name: 'rclone' })
rclone(): RCloneMutations {
return new RCloneMutations();

View File

@@ -164,4 +164,10 @@ export class Notifications extends Node {
@Field(() => [Notification])
@IsNotEmpty()
list!: Notification[];
@Field(() => [Notification], {
description: 'Deduplicated list of unread warning and alert notifications, sorted latest first.',
})
@IsNotEmpty()
warningsAndAlerts!: Notification[];
}

View File

@@ -0,0 +1,9 @@
import { Module } from '@nestjs/common';
import { NotificationsService } from '@app/unraid-api/graph/resolvers/notifications/notifications.service.js';
@Module({
providers: [NotificationsService],
exports: [NotificationsService],
})
export class NotificationsModule {}

View File

@@ -49,6 +49,13 @@ export class NotificationsResolver {
return await this.notificationsService.getNotifications(filters);
}
@ResolveField(() => [Notification], {
description: 'Deduplicated list of unread warning and alert notifications.',
})
public async warningsAndAlerts(): Promise<Notification[]> {
return this.notificationsService.getWarningsAndAlerts();
}
/**============================================
* Mutations
*=============================================**/
@@ -96,6 +103,18 @@ export class NotificationsResolver {
return this.notificationsService.getOverview();
}
@Mutation(() => Notification, {
nullable: true,
description:
'Creates a notification if an equivalent unread notification does not already exist.',
})
public notifyIfUnique(
@Args('input', { type: () => NotificationData })
data: NotificationData
): Promise<Notification | null> {
return this.notificationsService.notifyIfUnique(data);
}
@Mutation(() => NotificationOverview)
public async archiveAll(
@Args('importance', { type: () => NotificationImportance, nullable: true })
@@ -163,4 +182,13 @@ export class NotificationsResolver {
async notificationsOverview() {
return createSubscription(PUBSUB_CHANNEL.NOTIFICATION_OVERVIEW);
}
@Subscription(() => [Notification])
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.NOTIFICATIONS,
})
async notificationsWarningsAndAlerts() {
return createSubscription(PUBSUB_CHANNEL.NOTIFICATION_WARNINGS_AND_ALERTS);
}
}

View File

@@ -289,6 +289,112 @@ describe.sequential('NotificationsService', () => {
expect(loaded.length).toEqual(3);
});
describe('getWarningsAndAlerts', () => {
it('deduplicates unread warning and alert notifications', async ({ expect }) => {
const duplicateData = {
title: 'Array Status',
subject: 'Disk 1 is getting warm',
description: 'Disk temperature has exceeded threshold.',
importance: NotificationImportance.WARNING,
} as const;
// Create duplicate warnings and an alert with different content
await createNotification(duplicateData);
await createNotification(duplicateData);
await createNotification({
title: 'UPS Disconnected',
subject: 'The UPS connection has been lost',
description: 'Reconnect the UPS to restore protection.',
importance: NotificationImportance.ALERT,
});
await createNotification({
title: 'Parity Check Complete',
subject: 'A parity check has completed successfully',
description: 'No sync errors were detected.',
importance: NotificationImportance.INFO,
});
const results = await service.getWarningsAndAlerts();
const warningMatches = results.filter(
(notification) => notification.subject === duplicateData.subject
);
const alertMatches = results.filter((notification) =>
notification.subject.includes('UPS connection')
);
expect(results.length).toEqual(2);
expect(warningMatches).toHaveLength(1);
expect(alertMatches).toHaveLength(1);
expect(
results.every((notification) => notification.importance !== NotificationImportance.INFO)
).toBe(true);
});
it('respects the provided limit', async ({ expect }) => {
const limit = 2;
await createNotification({
title: 'Array Warning',
subject: 'Disk 2 is getting warm',
description: 'Disk temperature has exceeded threshold.',
importance: NotificationImportance.WARNING,
});
await createNotification({
title: 'Network Down',
subject: 'Ethernet link is down',
description: 'Physical link failure detected.',
importance: NotificationImportance.ALERT,
});
await createNotification({
title: 'Critical Temperature',
subject: 'CPU temperature exceeded',
description: 'CPU temperature has exceeded safe operating limits.',
importance: NotificationImportance.ALERT,
});
const results = await service.getWarningsAndAlerts(limit);
expect(results.length).toEqual(limit);
});
});
describe('notifyIfUnique', () => {
const duplicateData: NotificationData = {
title: 'Docker Query Failure',
subject: 'Failed to fetch containers from Docker',
description: 'Please verify that the Docker service is running.',
importance: NotificationImportance.ALERT,
};
it('skips creating duplicate unread notifications', async ({ expect }) => {
const created = await service.notifyIfUnique(duplicateData);
expect(created).toBeDefined();
const skipped = await service.notifyIfUnique(duplicateData);
expect(skipped).toBeNull();
const notifications = await service.getNotifications({
type: NotificationType.UNREAD,
limit: 50,
offset: 0,
});
expect(
notifications.filter((notification) => notification.title === duplicateData.title)
).toHaveLength(1);
});
it('creates new notification when no duplicate exists', async ({ expect }) => {
const uniqueData: NotificationData = {
title: 'UPS Disconnected',
subject: 'UPS connection lost',
description: 'Reconnect the UPS to restore protection.',
importance: NotificationImportance.WARNING,
};
const notification = await service.notifyIfUnique(uniqueData);
expect(notification).toBeDefined();
expect(notification?.title).toEqual(uniqueData.title);
});
});
/**--------------------------------------------
* CRUD: Update Tests
*---------------------------------------------**/

View File

@@ -121,6 +121,7 @@ export class NotificationsService {
pubsub.publish(PUBSUB_CHANNEL.NOTIFICATION_ADDED, {
notificationAdded: notification,
});
void this.publishWarningsAndAlerts();
}
}
@@ -142,6 +143,20 @@ export class NotificationsService {
});
}
private async publishWarningsAndAlerts() {
try {
const warningsAndAlerts = await this.getWarningsAndAlerts();
await pubsub.publish(PUBSUB_CHANNEL.NOTIFICATION_WARNINGS_AND_ALERTS, {
notificationsWarningsAndAlerts: warningsAndAlerts,
});
} catch (error) {
this.logger.error(
'[publishWarningsAndAlerts] Failed to broadcast warnings and alerts snapshot',
error as Error
);
}
}
private increment(importance: NotificationImportance, collector: NotificationCounts) {
collector[importance.toLowerCase()] += 1;
collector['total'] += 1;
@@ -214,6 +229,8 @@ export class NotificationsService {
await writeFile(path, ini);
}
void this.publishWarningsAndAlerts();
return this.notificationFileToGqlNotification({ id, type: NotificationType.UNREAD }, fileData);
}
@@ -300,6 +317,9 @@ export class NotificationsService {
this.decrement(notification.importance, NotificationsService.overview[type.toLowerCase()]);
await this.publishOverview();
if (type === NotificationType.UNREAD) {
void this.publishWarningsAndAlerts();
}
// return both the overview & the deleted notification
// this helps us reference the deleted notification in-memory if we want
@@ -320,6 +340,10 @@ export class NotificationsService {
warning: 0,
total: 0,
};
await this.publishOverview();
if (type === NotificationType.UNREAD) {
void this.publishWarningsAndAlerts();
}
return this.getOverview();
}
@@ -433,6 +457,8 @@ export class NotificationsService {
});
await moveToArchive(notification);
void this.publishWarningsAndAlerts();
return {
...notification,
type: NotificationType.ARCHIVE,
@@ -458,6 +484,7 @@ export class NotificationsService {
});
await moveToUnread(notification);
void this.publishWarningsAndAlerts();
return {
...notification,
type: NotificationType.UNREAD,
@@ -482,6 +509,7 @@ export class NotificationsService {
});
const stats = await batchProcess(notifications, archive);
void this.publishWarningsAndAlerts();
return { ...stats, overview: overviewSnapshot };
}
@@ -504,6 +532,7 @@ export class NotificationsService {
});
const stats = await batchProcess(notifications, unArchive);
void this.publishWarningsAndAlerts();
return { ...stats, overview: overviewSnapshot };
}
@@ -567,6 +596,64 @@ export class NotificationsService {
return notifications;
}
/**
* Creates a notification only if an equivalent unread notification does not already exist.
*
* @param data The notification data to create.
* @returns The created notification, or null if a duplicate was detected.
*/
public async notifyIfUnique(data: NotificationData): Promise<Notification | null> {
const fingerprint = this.getNotificationFingerprintFromData(data);
const hasDuplicate = await this.hasUnreadNotificationWithFingerprint(fingerprint);
if (hasDuplicate) {
this.logger.verbose(
`[notifyIfUnique] Skipping notification creation for duplicate fingerprint: ${fingerprint}`
);
return null;
}
return this.createNotification(data);
}
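// Illustrative caller sketch (an assumption, not part of this change): a service can surface
// a failure without spamming duplicates, e.g.
//   await notificationsService.notifyIfUnique({
//       title: 'Docker Query Failure',
//       subject: 'Failed to fetch containers from Docker',
//       description: 'Please verify that the Docker service is running.',
//       importance: NotificationImportance.ALERT,
//   });
// A second identical call while the first notification is still unread resolves to null.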
/**
* Returns a deduplicated list of unread warning and alert notifications.
*
* Deduplication is based on the combination of importance, title, subject, description, and link.
* This ensures repeated notifications with the same user-facing content are only shown once, while
* still prioritizing the most recent occurrence of each unique notification.
*
* @param limit Maximum number of unique notifications to return. Default: 50.
*/
public async getWarningsAndAlerts(limit = 50): Promise<Notification[]> {
const notifications = await this.loadUnreadNotifications();
const deduped: Notification[] = [];
const seen = new Set<string>();
for (const notification of notifications) {
if (
notification.importance !== NotificationImportance.ALERT &&
notification.importance !== NotificationImportance.WARNING
) {
continue;
}
const key = this.getDeduplicationKey(notification);
if (seen.has(key)) {
continue;
}
seen.add(key);
deduped.push(notification);
if (deduped.length >= limit) {
break;
}
}
return deduped;
}
/**
* Given a path to a folder, returns the full (absolute) paths of the folder's top-level contents.
* Sorted latest-first by default.
@@ -787,8 +874,57 @@ export class NotificationsService {
* Helpers
*------------------------------------------------------------------------**/
private async loadUnreadNotifications(): Promise<Notification[]> {
const { UNREAD } = this.paths();
const files = await this.listFilesInFolder(UNREAD);
const [notifications] = await this.loadNotificationsFromPaths(files, {
type: NotificationType.UNREAD,
});
return notifications;
}
private async hasUnreadNotificationWithFingerprint(fingerprint: string): Promise<boolean> {
const notifications = await this.loadUnreadNotifications();
return notifications.some(
(notification) => this.getDeduplicationKey(notification) === fingerprint
);
}
private sortLatestFirst(a: Notification, b: Notification) {
const defaultTimestamp = 0;
return Number(b.timestamp ?? defaultTimestamp) - Number(a.timestamp ?? defaultTimestamp);
}
private getDeduplicationKey(notification: Notification): string {
return this.getNotificationFingerprint(notification);
}
private getNotificationFingerprintFromData(data: NotificationData): string {
return this.getNotificationFingerprint({
importance: data.importance,
title: data.title,
subject: data.subject,
description: data.description,
link: data.link,
});
}
private getNotificationFingerprint({
importance,
title,
subject,
description,
link,
}: Pick<Notification, 'importance' | 'title' | 'subject' | 'description'> & {
link?: string | null;
}): string {
const makePart = (value?: string | null) => (value ?? '').trim();
return [
importance,
makePart(title),
makePart(subject),
makePart(description),
makePart(link),
].join('|');
}
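// Illustrative fingerprint shape: `importance|title|subject|description|link`, with each part
// trimmed and missing values collapsing to empty segments (e.g. a notification without a link
// ends in a trailing '|').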
}

View File

@@ -15,8 +15,8 @@ import { InfoModule } from '@app/unraid-api/graph/resolvers/info/info.module.js'
import { LogsModule } from '@app/unraid-api/graph/resolvers/logs/logs.module.js';
import { MetricsModule } from '@app/unraid-api/graph/resolvers/metrics/metrics.module.js';
import { RootMutationsResolver } from '@app/unraid-api/graph/resolvers/mutation/mutation.resolver.js';
import { NotificationsModule } from '@app/unraid-api/graph/resolvers/notifications/notifications.module.js';
import { NotificationsResolver } from '@app/unraid-api/graph/resolvers/notifications/notifications.resolver.js';
import { NotificationsService } from '@app/unraid-api/graph/resolvers/notifications/notifications.service.js';
import { OnlineResolver } from '@app/unraid-api/graph/resolvers/online/online.resolver.js';
import { OwnerResolver } from '@app/unraid-api/graph/resolvers/owner/owner.resolver.js';
import { RCloneModule } from '@app/unraid-api/graph/resolvers/rclone/rclone.module.js';
@@ -47,6 +47,7 @@ import { MeResolver } from '@app/unraid-api/graph/user/user.resolver.js';
FlashBackupModule,
InfoModule,
LogsModule,
NotificationsModule,
RCloneModule,
SettingsModule,
SsoModule,
@@ -58,7 +59,6 @@ import { MeResolver } from '@app/unraid-api/graph/user/user.resolver.js';
FlashResolver,
MeResolver,
NotificationsResolver,
NotificationsService,
OnlineResolver,
OwnerResolver,
RegistrationResolver,


@@ -22,7 +22,7 @@ describe('UPSResolver', () => {
MODEL: 'Test UPS',
STATUS: 'Online',
BCHARGE: '100',
TIMELEFT: '3600',
TIMELEFT: '60', // 60 minutes (apcupsd format)
LINEV: '120.5',
OUTPUTV: '120.5',
LOADPCT: '25',


@@ -21,7 +21,8 @@ export class UPSResolver {
status: upsData.STATUS || 'Online',
battery: {
chargeLevel: parseInt(upsData.BCHARGE || '100', 10),
estimatedRuntime: parseInt(upsData.TIMELEFT || '3600', 10),
// Convert TIMELEFT from minutes (apcupsd format) to seconds
estimatedRuntime: Math.round(parseFloat(upsData.TIMELEFT || '60') * 60),
health: 'Good',
},
power: {

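A quick sanity check of the conversion above, assuming (per the inline comment in the diff) that apcupsd reports `TIMELEFT` in minutes while `estimatedRuntime` is expected in seconds:

```ts
// apcupsd reports TIMELEFT in minutes; estimatedRuntime is expected in seconds.
const timeleftMinutes = parseFloat('60'); // value from the test fixture above
const estimatedRuntime = Math.round(timeleftMinutes * 60);
console.log(estimatedRuntime); // 3600 seconds, i.e. one hour of runtime
// The previous parseInt-as-seconds behaviour would have reported the same
// reading as only 60 seconds of runtime.
```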

@@ -148,6 +148,16 @@ const verifyLibvirtConnection = async (hypervisor: Hypervisor) => {
}
};
// Check if qemu-img is available before running tests
const isQemuAvailable = () => {
try {
execSync('qemu-img --version', { stdio: 'ignore' });
return true;
} catch (error) {
return false;
}
};
describe('VmsService', () => {
let service: VmsService;
let hypervisor: Hypervisor;
@@ -174,6 +184,14 @@ describe('VmsService', () => {
</domain>
`;
beforeAll(() => {
if (!isQemuAvailable()) {
throw new Error(
'QEMU not available - skipping VM integration tests. Please install QEMU to run these tests.'
);
}
});
beforeAll(async () => {
// Override the LIBVIRT_URI environment variable for testing
process.env.LIBVIRT_URI = LIBVIRT_URI;


@@ -222,9 +222,15 @@ export class ResolvedOrganizerView {
@IsString()
name!: string;
@Field(() => ResolvedOrganizerEntry)
@ValidateNested()
root!: ResolvedOrganizerEntryType;
@Field()
@IsString()
rootId!: string;
@Field(() => [FlatOrganizerEntry])
@IsArray()
@ValidateNested({ each: true })
@Type(() => FlatOrganizerEntry)
flatEntries!: FlatOrganizerEntry[];
@Field(() => GraphQLJSON, { nullable: true })
@IsOptional()
@@ -246,3 +252,54 @@ export class ResolvedOrganizerV1 {
@Type(() => ResolvedOrganizerView)
views!: ResolvedOrganizerView[];
}
// ============================================
// FLAT ORGANIZER ENTRY (for efficient frontend consumption)
// ============================================
@ObjectType()
export class FlatOrganizerEntry {
@Field()
@IsString()
id!: string;
@Field()
@IsString()
type!: string;
@Field()
@IsString()
name!: string;
@Field({ nullable: true })
@IsOptional()
@IsString()
parentId?: string;
@Field()
@IsNumber()
depth!: number;
@Field()
@IsNumber()
position!: number;
@Field(() => [String])
@IsArray()
@IsString({ each: true })
path!: string[];
@Field()
hasChildren!: boolean;
@Field(() => [String])
@IsArray()
@IsString({ each: true })
childrenIds!: string[];
@Field(() => DockerContainer, { nullable: true })
@IsOptional()
@ValidateNested()
@Type(() => DockerContainer)
meta?: DockerContainer;
}
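To illustrate the shape the frontend now consumes, here is a hypothetical `flatEntries` payload for a tiny view (root -> Apps folder -> one container); the ids and values are made up, and the optional `meta` field is omitted:

```ts
// Hypothetical flatEntries for: root -> Apps (folder) -> plex (container).
// Entries appear in depth-first order, so a tree can be rendered by grouping
// on parentId and ordering by position.
const flatEntries = [
    { id: 'root', type: 'folder', name: 'Root', depth: 0, position: 0,
      path: ['root'], hasChildren: true, childrenIds: ['apps'] },
    { id: 'apps', type: 'folder', name: 'Apps', parentId: 'root', depth: 1, position: 0,
      path: ['root', 'apps'], hasChildren: true, childrenIds: ['plex-ref'] },
    { id: 'plex-ref', type: 'container', name: 'plex', parentId: 'apps', depth: 2, position: 0,
      path: ['root', 'apps', 'plex-ref'], hasChildren: false, childrenIds: [] },
];
```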


@@ -4,8 +4,6 @@ import { resolveOrganizer } from '@app/unraid-api/organizer/organizer.js';
import {
OrganizerResource,
OrganizerV1,
ResolvedOrganizerEntryType,
ResolvedOrganizerFolder,
ResolvedOrganizerV1,
} from '@app/unraid-api/organizer/organizer.model.js';
@@ -72,36 +70,48 @@ describe('Organizer Resolver', () => {
const defaultView = resolved.views[0];
expect(defaultView.id).toBe('default');
expect(defaultView.name).toBe('Default View');
expect(defaultView.root.type).toBe('folder');
expect(defaultView.rootId).toBe('root-folder');
if (defaultView.root.type === 'folder') {
const rootFolder = defaultView.root as ResolvedOrganizerFolder;
expect(rootFolder.name).toBe('Root');
expect(rootFolder.children).toHaveLength(2);
// Check flatEntries structure
const flatEntries = defaultView.flatEntries;
expect(flatEntries).toHaveLength(4);
// First child should be the resolved container1
const firstChild = rootFolder.children[0];
expect(firstChild.type).toBe('container');
expect(firstChild.id).toBe('container1');
expect(firstChild.name).toBe('My Container');
// Root folder
const rootEntry = flatEntries[0];
expect(rootEntry.id).toBe('root-folder');
expect(rootEntry.type).toBe('folder');
expect(rootEntry.name).toBe('Root');
expect(rootEntry.depth).toBe(0);
expect(rootEntry.parentId).toBeUndefined();
expect(rootEntry.childrenIds).toEqual(['container1-ref', 'subfolder']);
// Second child should be the resolved subfolder
const secondChild = rootFolder.children[1];
expect(secondChild.type).toBe('folder');
if (secondChild.type === 'folder') {
const subFolder = secondChild as ResolvedOrganizerFolder;
expect(subFolder.name).toBe('Subfolder');
expect(subFolder.children).toHaveLength(1);
// First child (container1-ref resolved to container)
const container1Entry = flatEntries[1];
expect(container1Entry.id).toBe('container1-ref');
expect(container1Entry.type).toBe('container');
expect(container1Entry.name).toBe('My Container');
expect(container1Entry.depth).toBe(1);
expect(container1Entry.parentId).toBe('root-folder');
const nestedChild = subFolder.children[0];
expect(nestedChild.type).toBe('container');
expect(nestedChild.id).toBe('container2');
expect(nestedChild.name).toBe('Another Container');
}
}
// Subfolder
const subfolderEntry = flatEntries[2];
expect(subfolderEntry.id).toBe('subfolder');
expect(subfolderEntry.type).toBe('folder');
expect(subfolderEntry.name).toBe('Subfolder');
expect(subfolderEntry.depth).toBe(1);
expect(subfolderEntry.parentId).toBe('root-folder');
expect(subfolderEntry.childrenIds).toEqual(['container2-ref']);
// Nested container
const container2Entry = flatEntries[3];
expect(container2Entry.id).toBe('container2-ref');
expect(container2Entry.type).toBe('container');
expect(container2Entry.name).toBe('Another Container');
expect(container2Entry.depth).toBe(2);
expect(container2Entry.parentId).toBe('subfolder');
});
test('should throw error for missing resource', () => {
test('should handle missing resource gracefully', () => {
const organizer: OrganizerV1 = {
version: 1,
resources: {},
@@ -127,12 +137,19 @@ describe('Organizer Resolver', () => {
},
};
expect(() => resolveOrganizer(organizer)).toThrow(
"Resource with id 'nonexistent-resource' not found"
);
const resolved = resolveOrganizer(organizer);
const flatEntries = resolved.views[0].flatEntries;
// Should have 2 entries: root folder and the ref (kept as ref type since resource not found)
expect(flatEntries).toHaveLength(2);
const missingRefEntry = flatEntries[1];
expect(missingRefEntry.id).toBe('missing-ref');
expect(missingRefEntry.type).toBe('ref'); // Stays as ref when resource not found
expect(missingRefEntry.meta).toBeUndefined();
});
test('should throw error for missing entry', () => {
test('should skip missing entries gracefully', () => {
const organizer: OrganizerV1 = {
version: 1,
resources: {},
@@ -153,9 +170,12 @@ describe('Organizer Resolver', () => {
},
};
expect(() => resolveOrganizer(organizer)).toThrow(
"Entry with id 'nonexistent-entry' not found in view"
);
const resolved = resolveOrganizer(organizer);
const flatEntries = resolved.views[0].flatEntries;
// Should only have root folder, missing entry is skipped
expect(flatEntries).toHaveLength(1);
expect(flatEntries[0].id).toBe('root-folder');
});
test('should resolve empty folders correctly', () => {
@@ -207,30 +227,27 @@ describe('Organizer Resolver', () => {
const defaultView = resolved.views[0];
expect(defaultView.id).toBe('default');
expect(defaultView.name).toBe('Default View');
expect(defaultView.root.type).toBe('folder');
expect(defaultView.rootId).toBe('root');
if (defaultView.root.type === 'folder') {
const rootFolder = defaultView.root as ResolvedOrganizerFolder;
expect(rootFolder.name).toBe('Root');
expect(rootFolder.children).toHaveLength(2);
const flatEntries = defaultView.flatEntries;
expect(flatEntries).toHaveLength(3);
// First child should be the resolved container
const firstChild = rootFolder.children[0];
expect(firstChild.type).toBe('container');
expect(firstChild.id).toBe('container1');
// Root folder
expect(flatEntries[0].id).toBe('root');
expect(flatEntries[0].type).toBe('folder');
expect(flatEntries[0].name).toBe('Root');
// Second child should be the resolved empty folder
const secondChild = rootFolder.children[1];
expect(secondChild.type).toBe('folder');
expect(secondChild.id).toBe('empty-folder');
// First child - resolved container
expect(flatEntries[1].id).toBe('container1-ref');
expect(flatEntries[1].type).toBe('container');
expect(flatEntries[1].name).toBe('My Container');
if (secondChild.type === 'folder') {
const emptyFolder = secondChild as ResolvedOrganizerFolder;
expect(emptyFolder.name).toBe('Empty Folder');
expect(emptyFolder.children).toEqual([]);
expect(emptyFolder.children).toHaveLength(0);
}
}
// Second child - empty folder
expect(flatEntries[2].id).toBe('empty-folder');
expect(flatEntries[2].type).toBe('folder');
expect(flatEntries[2].name).toBe('Empty Folder');
expect(flatEntries[2].childrenIds).toEqual([]);
expect(flatEntries[2].hasChildren).toBe(false);
});
test('should handle real-world scenario with containers and empty folder', () => {
@@ -314,24 +331,19 @@ describe('Organizer Resolver', () => {
expect(resolved.views).toHaveLength(1);
const defaultView = resolved.views[0];
expect(defaultView.root.type).toBe('folder');
expect(defaultView.rootId).toBe('root');
if (defaultView.root.type === 'folder') {
const rootFolder = defaultView.root as ResolvedOrganizerFolder;
expect(rootFolder.children).toHaveLength(4);
const flatEntries = defaultView.flatEntries;
expect(flatEntries).toHaveLength(5); // root + 3 containers + empty folder
// Last child should be the empty folder (not an empty object)
const lastChild = rootFolder.children[3];
expect(lastChild).not.toEqual({}); // This should NOT be an empty object
expect(lastChild.type).toBe('folder');
expect(lastChild.id).toBe('new-folder');
if (lastChild.type === 'folder') {
const newFolder = lastChild as ResolvedOrganizerFolder;
expect(newFolder.name).toBe('new-folder');
expect(newFolder.children).toEqual([]);
}
}
// Last entry should be the empty folder (not missing or malformed)
const lastEntry = flatEntries[4];
expect(lastEntry).toBeDefined();
expect(lastEntry.type).toBe('folder');
expect(lastEntry.id).toBe('new-folder');
expect(lastEntry.name).toBe('new-folder');
expect(lastEntry.childrenIds).toEqual([]);
expect(lastEntry.hasChildren).toBe(false);
});
test('should handle nested empty folders correctly', () => {
@@ -373,31 +385,28 @@ describe('Organizer Resolver', () => {
expect(resolved.views).toHaveLength(1);
const defaultView = resolved.views[0];
expect(defaultView.root.type).toBe('folder');
expect(defaultView.rootId).toBe('root');
if (defaultView.root.type === 'folder') {
const rootFolder = defaultView.root as ResolvedOrganizerFolder;
expect(rootFolder.children).toHaveLength(1);
const flatEntries = defaultView.flatEntries;
expect(flatEntries).toHaveLength(3);
const level1Folder = rootFolder.children[0];
expect(level1Folder.type).toBe('folder');
expect(level1Folder.id).toBe('level1-folder');
// Root
expect(flatEntries[0].id).toBe('root');
expect(flatEntries[0].depth).toBe(0);
if (level1Folder.type === 'folder') {
const level1 = level1Folder as ResolvedOrganizerFolder;
expect(level1.children).toHaveLength(1);
// Level 1 folder
expect(flatEntries[1].id).toBe('level1-folder');
expect(flatEntries[1].type).toBe('folder');
expect(flatEntries[1].depth).toBe(1);
expect(flatEntries[1].parentId).toBe('root');
const level2Folder = level1.children[0];
expect(level2Folder.type).toBe('folder');
expect(level2Folder.id).toBe('level2-folder');
if (level2Folder.type === 'folder') {
const level2 = level2Folder as ResolvedOrganizerFolder;
expect(level2.children).toEqual([]);
expect(level2.children).toHaveLength(0);
}
}
}
// Level 2 folder (empty)
expect(flatEntries[2].id).toBe('level2-folder');
expect(flatEntries[2].type).toBe('folder');
expect(flatEntries[2].depth).toBe(2);
expect(flatEntries[2].parentId).toBe('level1-folder');
expect(flatEntries[2].childrenIds).toEqual([]);
expect(flatEntries[2].hasChildren).toBe(false);
});
test('should validate that all resolved objects have proper structure', () => {
@@ -443,30 +452,24 @@ describe('Organizer Resolver', () => {
const resolved: ResolvedOrganizerV1 = resolveOrganizer(organizer);
// Recursively validate that all objects have proper structure
function validateResolvedEntry(entry: ResolvedOrganizerEntryType) {
// Validate that all flat entries have proper structure
const flatEntries = resolved.views[0].flatEntries;
expect(flatEntries).toHaveLength(3); // root + container + empty folder
flatEntries.forEach((entry) => {
expect(entry).toBeDefined();
expect(entry).not.toEqual({});
expect(entry).toHaveProperty('id');
expect(entry).toHaveProperty('type');
expect(entry).toHaveProperty('name');
expect(entry).toHaveProperty('depth');
expect(entry).toHaveProperty('childrenIds');
expect(typeof entry.id).toBe('string');
expect(typeof entry.type).toBe('string');
expect(typeof entry.name).toBe('string');
if (entry.type === 'folder') {
const folder = entry as ResolvedOrganizerFolder;
expect(folder).toHaveProperty('children');
expect(Array.isArray(folder.children)).toBe(true);
// Recursively validate children
folder.children.forEach((child) => validateResolvedEntry(child));
}
}
if (resolved.views[0].root.type === 'folder') {
validateResolvedEntry(resolved.views[0].root);
}
expect(typeof entry.depth).toBe('number');
expect(Array.isArray(entry.childrenIds)).toBe(true);
});
});
test('should maintain object identity and not return empty objects', () => {
@@ -510,22 +513,19 @@ describe('Organizer Resolver', () => {
const resolved: ResolvedOrganizerV1 = resolveOrganizer(organizer);
if (resolved.views[0].root.type === 'folder') {
const rootFolder = resolved.views[0].root as ResolvedOrganizerFolder;
expect(rootFolder.children).toHaveLength(3);
const flatEntries = resolved.views[0].flatEntries;
expect(flatEntries).toHaveLength(4); // root + 3 empty folders
// Ensure none of the children are empty objects
rootFolder.children.forEach((child, index) => {
expect(child).not.toEqual({});
expect(child.type).toBe('folder');
expect(child.id).toBe(`empty${index + 1}`);
expect(child.name).toBe(`Empty ${index + 1}`);
if (child.type === 'folder') {
const folder = child as ResolvedOrganizerFolder;
expect(folder.children).toEqual([]);
}
});
}
// Ensure none of the entries are malformed
const emptyFolders = flatEntries.slice(1); // Skip root
emptyFolders.forEach((entry, index) => {
expect(entry).not.toEqual({});
expect(entry).toBeDefined();
expect(entry.type).toBe('folder');
expect(entry.id).toBe(`empty${index + 1}`);
expect(entry.name).toBe(`Empty ${index + 1}`);
expect(entry.childrenIds).toEqual([]);
expect(entry.hasChildren).toBe(false);
});
});
});


@@ -1,6 +1,9 @@
import { describe, expect, it } from 'vitest';
import { addMissingResourcesToView } from '@app/unraid-api/organizer/organizer.js';
import {
addMissingResourcesToView,
removeStaleRefsFromView,
} from '@app/unraid-api/organizer/organizer.js';
import {
OrganizerFolder,
OrganizerResource,
@@ -263,4 +266,268 @@ describe('addMissingResourcesToView', () => {
expect(result.entries['key-different-from-id'].id).toBe('actual-resource-id');
expect((result.entries['root1'] as OrganizerFolder).children).toContain('key-different-from-id');
});
it("does not re-add resources to root if they're already referenced in any folder", () => {
const resources: OrganizerV1['resources'] = {
resourceA: { id: 'resourceA', type: 'container', name: 'A' },
resourceB: { id: 'resourceB', type: 'container', name: 'B' },
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['stuff'],
},
stuff: {
id: 'stuff',
type: 'folder',
name: 'Stuff',
children: ['resourceA', 'resourceB'],
},
resourceA: { id: 'resourceA', type: 'ref', target: 'resourceA' },
resourceB: { id: 'resourceB', type: 'ref', target: 'resourceB' },
},
};
const result = addMissingResourcesToView(resources, originalView);
// Root should still only contain the 'stuff' folder, not the resources
const rootChildren = (result.entries['root1'] as OrganizerFolder).children;
expect(rootChildren).toEqual(['stuff']);
});
it('should remove stale refs when resources are removed', () => {
const resources: OrganizerV1['resources'] = {
resource1: { id: 'resource1', type: 'container', name: 'Container 1' },
// resource2 has been removed
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['resource1', 'resource2'],
},
resource1: { id: 'resource1', type: 'ref', target: 'resource1' },
resource2: { id: 'resource2', type: 'ref', target: 'resource2' }, // stale ref
},
};
const result = addMissingResourcesToView(resources, originalView);
// resource2 should be removed from entries
expect(result.entries['resource2']).toBeUndefined();
// resource2 should be removed from root children
const rootChildren = (result.entries['root1'] as OrganizerFolder).children;
expect(rootChildren).not.toContain('resource2');
expect(rootChildren).toContain('resource1');
});
it('should remove stale refs from nested folders', () => {
const resources: OrganizerV1['resources'] = {
resource1: { id: 'resource1', type: 'container', name: 'Container 1' },
// resource2 and resource3 have been removed
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['folder1', 'resource1'],
},
folder1: {
id: 'folder1',
type: 'folder',
name: 'Nested Folder',
children: ['resource2', 'resource3'],
},
resource1: { id: 'resource1', type: 'ref', target: 'resource1' },
resource2: { id: 'resource2', type: 'ref', target: 'resource2' }, // stale
resource3: { id: 'resource3', type: 'ref', target: 'resource3' }, // stale
},
};
const result = addMissingResourcesToView(resources, originalView);
// stale refs should be removed
expect(result.entries['resource2']).toBeUndefined();
expect(result.entries['resource3']).toBeUndefined();
// folder1 children should be empty
const folder1Children = (result.entries['folder1'] as OrganizerFolder).children;
expect(folder1Children).toEqual([]);
// resource1 should still exist
expect(result.entries['resource1']).toBeDefined();
});
});
describe('removeStaleRefsFromView', () => {
it('should remove refs pointing to non-existent resources', () => {
const resources: OrganizerV1['resources'] = {
resource1: { id: 'resource1', type: 'container', name: 'Container 1' },
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['resource1', 'stale-ref'],
},
resource1: { id: 'resource1', type: 'ref', target: 'resource1' },
'stale-ref': { id: 'stale-ref', type: 'ref', target: 'non-existent-resource' },
},
};
const result = removeStaleRefsFromView(resources, originalView);
expect(result.entries['resource1']).toBeDefined();
expect(result.entries['stale-ref']).toBeUndefined();
expect((result.entries['root1'] as OrganizerFolder).children).toEqual(['resource1']);
});
it('should not remove folders even if empty', () => {
const resources: OrganizerV1['resources'] = {};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['folder1'],
},
folder1: {
id: 'folder1',
type: 'folder',
name: 'Empty Folder',
children: [],
},
},
};
const result = removeStaleRefsFromView(resources, originalView);
expect(result.entries['root1']).toBeDefined();
expect(result.entries['folder1']).toBeDefined();
});
it('should remove multiple stale refs from multiple folders', () => {
const resources: OrganizerV1['resources'] = {
resource1: { id: 'resource1', type: 'container', name: 'Container 1' },
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['folder1', 'stale1'],
},
folder1: {
id: 'folder1',
type: 'folder',
name: 'Folder',
children: ['resource1', 'stale2', 'stale3'],
},
resource1: { id: 'resource1', type: 'ref', target: 'resource1' },
stale1: { id: 'stale1', type: 'ref', target: 'gone1' },
stale2: { id: 'stale2', type: 'ref', target: 'gone2' },
stale3: { id: 'stale3', type: 'ref', target: 'gone3' },
},
};
const result = removeStaleRefsFromView(resources, originalView);
expect(result.entries['resource1']).toBeDefined();
expect(result.entries['stale1']).toBeUndefined();
expect(result.entries['stale2']).toBeUndefined();
expect(result.entries['stale3']).toBeUndefined();
expect((result.entries['root1'] as OrganizerFolder).children).toEqual(['folder1']);
expect((result.entries['folder1'] as OrganizerFolder).children).toEqual(['resource1']);
});
it('should not mutate the original view', () => {
const resources: OrganizerV1['resources'] = {};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['stale-ref'],
},
'stale-ref': { id: 'stale-ref', type: 'ref', target: 'gone' },
},
};
const originalEntriesCount = Object.keys(originalView.entries).length;
const result = removeStaleRefsFromView(resources, originalView);
expect(Object.keys(originalView.entries)).toHaveLength(originalEntriesCount);
expect(originalView.entries['stale-ref']).toBeDefined();
expect(result).not.toBe(originalView);
});
it('should handle view with no refs', () => {
const resources: OrganizerV1['resources'] = {
resource1: { id: 'resource1', type: 'container', name: 'Container 1' },
};
const originalView: OrganizerView = {
id: 'view1',
name: 'Test View',
root: 'root1',
entries: {
root1: {
id: 'root1',
type: 'folder',
name: 'Root',
children: ['folder1'],
},
folder1: {
id: 'folder1',
type: 'folder',
name: 'Sub Folder',
children: [],
},
},
};
const result = removeStaleRefsFromView(resources, originalView);
expect(Object.keys(result.entries)).toHaveLength(2);
expect(result.entries['root1']).toBeDefined();
expect(result.entries['folder1']).toBeDefined();
});
});


@@ -1,5 +1,7 @@
import {
AnyOrganizerResource,
FlatOrganizerEntry,
OrganizerContainerResource,
OrganizerFolder,
OrganizerResource,
OrganizerResourceRef,
@@ -45,72 +47,164 @@ export function resourceToResourceRef(
* // updatedView will contain 'res1' as a resource reference in the root folder
* ```
*/
export function addMissingResourcesToView(
/**
* Removes refs from a view that point to resources that no longer exist.
* This ensures the view stays in sync when containers are removed.
*
* @param resources - The current set of available resources
* @param originalView - The view to clean up
* @returns A new view with stale refs removed
*/
export function removeStaleRefsFromView(
resources: OrganizerV1['resources'],
originalView: OrganizerView
): OrganizerView {
const view = structuredClone(originalView);
view.entries[view.root] ??= {
id: view.root,
name: view.name,
const resourceIds = new Set(Object.keys(resources));
const staleRefIds = new Set<string>();
// Find all refs that point to non-existent resources
Object.entries(view.entries).forEach(([id, entry]) => {
if (entry.type === 'ref') {
const ref = entry as OrganizerResourceRef;
if (!resourceIds.has(ref.target)) {
staleRefIds.add(id);
}
}
});
// Remove stale refs from all folder children arrays
Object.values(view.entries).forEach((entry) => {
if (entry.type === 'folder') {
const folder = entry as OrganizerFolder;
folder.children = folder.children.filter((childId) => !staleRefIds.has(childId));
}
});
// Delete the stale ref entries themselves
for (const refId of staleRefIds) {
delete view.entries[refId];
}
return view;
}
export function addMissingResourcesToView(
resources: OrganizerV1['resources'],
originalView: OrganizerView
): OrganizerView {
// First, remove any stale refs pointing to non-existent resources
const cleanedView = removeStaleRefsFromView(resources, originalView);
cleanedView.entries[cleanedView.root] ??= {
id: cleanedView.root,
name: cleanedView.name,
type: 'folder',
children: [],
};
const root = view.entries[view.root]! as OrganizerFolder;
const root = cleanedView.entries[cleanedView.root]! as OrganizerFolder;
const rootChildren = new Set(root.children);
// Track if a resource id is already referenced in any folder
const referencedIds = new Set<string>();
Object.values(cleanedView.entries).forEach((entry) => {
if (entry.type === 'folder') {
for (const childId of entry.children) referencedIds.add(childId);
}
});
Object.entries(resources).forEach(([id, resource]) => {
if (!view.entries[id]) {
view.entries[id] = resourceToResourceRef(resource, (resource) => resource.id);
const existsInEntries = Boolean(cleanedView.entries[id]);
const isReferencedSomewhere = referencedIds.has(id);
// Ensure a ref entry exists for the resource id
if (!existsInEntries) {
cleanedView.entries[id] = resourceToResourceRef(resource, (resource) => resource.id);
}
// Only add to root if the resource is not already referenced elsewhere
if (!isReferencedSomewhere) {
rootChildren.add(id);
}
});
root.children = Array.from(rootChildren);
return view;
return cleanedView;
}
/**
* Recursively resolves an organizer entry (folder or resource ref) into its actual objects.
* This transforms the flat ID-based structure into a nested object structure for frontend convenience.
* Directly enriches flat entries from an organizer view without building an intermediate tree.
* This is more efficient than building a tree just to flatten it again.
*
* PRECONDITION: The given view is valid (i.e., it does not contain any cycles or depth issues).
*
* @param entryId - The ID of the entry to resolve
* @param view - The organizer view containing the entry definitions
* @param view - The flat organizer view
* @param resources - The collection of all available resources
* @returns The resolved entry with actual objects instead of ID references
* @returns Array of enriched flat organizer entries with metadata
*/
function resolveEntry(
entryId: string,
export function enrichFlatEntries(
view: OrganizerView,
resources: OrganizerV1['resources']
): ResolvedOrganizerEntryType {
const entry = view.entries[entryId];
): FlatOrganizerEntry[] {
const entries: FlatOrganizerEntry[] = [];
const parentMap = new Map<string, string>();
if (!entry) {
throw new Error(`Entry with id '${entryId}' not found in view`);
}
if (entry.type === 'folder') {
// Recursively resolve all children
const resolvedChildren = entry.children.map((childId) => resolveEntry(childId, view, resources));
return {
id: entry.id,
type: 'folder',
name: entry.name,
children: resolvedChildren,
} as ResolvedOrganizerFolder;
} else if (entry.type === 'ref') {
// Resolve the resource reference
const resource = resources[entry.target];
if (!resource) {
throw new Error(`Resource with id '${entry.target}' not found`);
// Build parent map
for (const [id, entry] of Object.entries(view.entries)) {
if (entry.type === 'folder') {
for (const childId of entry.children) {
parentMap.set(childId, id);
}
}
return resource;
}
throw new Error(`Unknown entry type: ${(entry as any).type}`);
// Walk from root to maintain order and calculate depth/position
function walk(entryId: string, depth: number, path: string[], position: number): void {
const entry = view.entries[entryId];
if (!entry) return;
const currentPath = [...path, entryId];
const isFolder = entry.type === 'folder';
const children = isFolder ? (entry as OrganizerFolder).children : [];
// Resolve resource if ref
let meta: any = undefined;
let name = entryId;
let type: string = entry.type;
if (entry.type === 'ref') {
const resource = resources[(entry as OrganizerResourceRef).target];
if (resource) {
if (resource.type === 'container') {
meta = (resource as OrganizerContainerResource).meta;
type = 'container';
}
name = resource.name;
}
} else if (entry.type === 'folder') {
name = (entry as OrganizerFolder).name;
}
entries.push({
id: entryId,
type,
name,
parentId: parentMap.get(entryId),
depth,
path: currentPath,
position,
hasChildren: isFolder && children.length > 0,
childrenIds: children,
meta,
});
if (isFolder) {
children.forEach((childId, idx) => {
walk(childId, depth + 1, currentPath, idx);
});
}
}
walk(view.root, 0, [], 0);
return entries;
}
/**
@@ -127,12 +221,13 @@ export function resolveOrganizerView(
view: OrganizerView,
resources: OrganizerV1['resources']
): ResolvedOrganizerView {
const resolvedRoot = resolveEntry(view.root, view, resources);
const flatEntries = enrichFlatEntries(view, resources);
return {
id: view.id,
name: view.name,
root: resolvedRoot,
rootId: view.root,
flatEntries,
prefs: view.prefs,
};
}
@@ -574,3 +669,108 @@ export function moveEntriesToFolder(params: MoveEntriesToFolderParams): Organize
destinationFolder.children = Array.from(destinationChildren);
return newView;
}
export interface MoveItemsToPositionParams {
view: OrganizerView;
sourceEntryIds: Set<string>;
destinationFolderId: string;
position: number;
resources?: OrganizerV1['resources'];
}
/**
* Moves entries to a specific position within a destination folder.
* Combines moveEntriesToFolder with position-based insertion.
*/
export function moveItemsToPosition(params: MoveItemsToPositionParams): OrganizerView {
const { view, sourceEntryIds, destinationFolderId, position, resources } = params;
const movedView = moveEntriesToFolder({ view, sourceEntryIds, destinationFolderId });
const folder = movedView.entries[destinationFolderId] as OrganizerFolder;
const movedIds = Array.from(sourceEntryIds);
const otherChildren = folder.children.filter((id) => !sourceEntryIds.has(id));
const insertPos = Math.max(0, Math.min(position, otherChildren.length));
const reordered = [
...otherChildren.slice(0, insertPos),
...movedIds,
...otherChildren.slice(insertPos),
];
folder.children = reordered;
return movedView;
}
export interface RenameFolderParams {
view: OrganizerView;
folderId: string;
newName: string;
}
/**
* Renames a folder by updating its name property.
* This is simpler than the current create+delete approach.
*/
export function renameFolder(params: RenameFolderParams): OrganizerView {
const { view, folderId, newName } = params;
const newView = structuredClone(view);
const entry = newView.entries[folderId];
if (!entry) {
throw new Error(`Folder with id '${folderId}' not found`);
}
if (entry.type !== 'folder') {
throw new Error(`Entry '${folderId}' is not a folder`);
}
(entry as OrganizerFolder).name = newName;
return newView;
}
export interface CreateFolderWithItemsParams {
view: OrganizerView;
folderId: string;
folderName: string;
parentId: string;
sourceEntryIds?: string[];
position?: number;
resources?: OrganizerV1['resources'];
}
/**
* Creates a new folder and optionally moves items into it at a specific position.
* Combines createFolder + moveItems + positioning in a single atomic operation.
*/
export function createFolderWithItems(params: CreateFolderWithItemsParams): OrganizerView {
const { view, folderId, folderName, parentId, sourceEntryIds = [], position, resources } = params;
let newView = createFolderInView({
view,
folderId,
folderName,
parentId,
childrenIds: sourceEntryIds,
});
if (sourceEntryIds.length > 0) {
newView = moveEntriesToFolder({
view: newView,
sourceEntryIds: new Set(sourceEntryIds),
destinationFolderId: folderId,
});
}
if (position !== undefined) {
const parent = newView.entries[parentId] as OrganizerFolder;
const withoutNewFolder = parent.children.filter((id) => id !== folderId);
const insertPos = Math.max(0, Math.min(position, withoutNewFolder.length));
parent.children = [
...withoutNewFolder.slice(0, insertPos),
folderId,
...withoutNewFolder.slice(insertPos),
];
}
return newView;
}
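The position handling in moveItemsToPosition and createFolderWithItems boils down to the same clamp-and-splice step. A standalone sketch of that step (not the exported API):

```ts
// Standalone sketch of the clamp-and-splice step: moved ids are pulled out of
// the children list, the target position is clamped into range, and the ids
// are spliced back in as a contiguous block.
function insertAtPosition(children: string[], movedIds: string[], position: number): string[] {
    const moved = new Set(movedIds);
    const others = children.filter((id) => !moved.has(id));
    const insertPos = Math.max(0, Math.min(position, others.length));
    return [...others.slice(0, insertPos), ...movedIds, ...others.slice(insertPos)];
}

console.log(insertAtPosition(['a', 'b', 'c'], ['b'], 0));  // ['b', 'a', 'c']
console.log(insertAtPosition(['a', 'b', 'c'], ['b'], 99)); // ['a', 'c', 'b'] (position clamped)
```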


@@ -7,6 +7,7 @@ import { basename, dirname, join } from 'path';
import { applyPatch, createPatch, parsePatch, reversePatch } from 'diff';
import { coerce, compare, gte, lte } from 'semver';
import { compareVersions } from '@app/common/compare-semver-version.js';
import { getUnraidVersion } from '@app/common/dashboard/get-unraid-version.js';
export type ModificationEffect = 'nginx:reload';
@@ -212,9 +213,11 @@ export abstract class FileModification {
}
// Default implementation that can be overridden if needed
async shouldApply(): Promise<ShouldApplyWithReason> {
async shouldApply({
checkOsVersion = true,
}: { checkOsVersion?: boolean } = {}): Promise<ShouldApplyWithReason> {
try {
if (await this.isUnraidVersionGreaterThanOrEqualTo('7.2.0')) {
if (checkOsVersion && (await this.isUnraidVersionGreaterThanOrEqualTo('7.2.0'))) {
return {
shouldApply: false,
reason: 'Patch unnecessary for Unraid 7.2 or later because the Unraid API is integrated.',
@@ -274,25 +277,7 @@ export abstract class FileModification {
throw new Error(`Failed to compare Unraid version - missing comparison version`);
}
// Special handling for prerelease versions when base versions are equal
if (includePrerelease) {
const baseUnraid = `${unraidVersion.major}.${unraidVersion.minor}.${unraidVersion.patch}`;
const baseCompared = `${comparedVersion.major}.${comparedVersion.minor}.${comparedVersion.patch}`;
if (baseUnraid === baseCompared) {
const unraidHasPrerelease = unraidVersion.prerelease.length > 0;
const comparedHasPrerelease = comparedVersion.prerelease.length > 0;
// If one has prerelease and the other doesn't, handle specially
if (unraidHasPrerelease && !comparedHasPrerelease) {
// For gte: prerelease is considered greater than stable
// For lte: prerelease is considered less than stable
return compareFn === gte;
}
}
}
return compareFn(unraidVersion, comparedVersion);
return compareVersions(unraidVersion, comparedVersion, compareFn, { includePrerelease });
}
protected async isUnraidVersionGreaterThanOrEqualTo(


@@ -0,0 +1,61 @@
import { readFile } from 'node:fs/promises';
import { ENABLE_NEXT_DOCKER_RELEASE } from '@app/environment.js';
import {
FileModification,
ShouldApplyWithReason,
} from '@app/unraid-api/unraid-file-modifier/file-modification.js';
export default class DockerContainersPageModification extends FileModification {
id: string = 'docker-containers-page';
public readonly filePath: string =
'/usr/local/emhttp/plugins/dynamix.docker.manager/DockerContainers.page';
async shouldApply(): Promise<ShouldApplyWithReason> {
const baseCheck = await super.shouldApply({ checkOsVersion: false });
if (!baseCheck.shouldApply) {
return baseCheck;
}
if (!ENABLE_NEXT_DOCKER_RELEASE) {
return {
shouldApply: false,
reason: 'ENABLE_NEXT_DOCKER_RELEASE is not enabled, so Docker overview table modification is not applied',
};
}
if (await this.isUnraidVersionGreaterThanOrEqualTo('7.3.0')) {
return {
shouldApply: true,
reason: 'Docker overview table WILL BE integrated in Unraid 7.3 or later. This modification is a temporary measure for testing.',
};
}
return {
shouldApply: false,
reason: 'Docker overview table modification is disabled for Unraid < 7.3',
};
}
protected async generatePatch(overridePath?: string): Promise<string> {
const fileContent = await readFile(this.filePath, 'utf-8');
const newContent = this.applyToSource();
return this.createPatchWithDiff(overridePath ?? this.filePath, fileContent, newContent);
}
private applyToSource(): string {
return `Menu="Docker:1"
Title="Docker Containers"
Tag="cubes"
Cond="is_file('/var/run/dockerd.pid')"
Markdown="false"
Nchan="docker_load"
Tabs="false"
---
<div class="unapi">
<unraid-docker-container-overview></unraid-docker-container-overview>
</div>
`;
}
}


@@ -127,6 +127,13 @@ export class UnraidFileModificationService
this.logger.debug(
`Skipping modification: ${modification.id} - ${shouldApplyWithReason.reason}`
);
// Check if there's a leftover patch from a previous run that needs to be rolled back
try {
await modification.rollback(true);
this.logger.log(`Rolled back previously applied modification: ${modification.id}`);
} catch {
// No patch file exists or rollback failed - this is expected when the modification was never applied
}
}
} catch (error) {
if (error instanceof Error) {


@@ -1,13 +1,13 @@
{
"name": "unraid-monorepo",
"private": true,
"version": "4.27.2",
"version": "4.29.0",
"scripts": {
"build": "pnpm -r build",
"build:watch": "pnpm -r --parallel --filter '!@unraid/ui' build:watch",
"codegen": "pnpm -r codegen",
"i18n:extract": "pnpm --filter @unraid/api i18n:extract && pnpm --filter @unraid/web i18n:extract",
"dev": "pnpm -r dev",
"dev": "pnpm -r --parallel dev",
"unraid:deploy": "pnpm -r unraid:deploy",
"test": "pnpm -r test",
"test:watch": "pnpm -r --parallel test:watch",


@@ -18,7 +18,8 @@
"dist"
],
"scripts": {
"build": "rimraf dist && tsc --project tsconfig.build.json",
"build": "pnpm clean && tsc --project tsconfig.build.json",
"clean": "rimraf dist",
"prepare": "npm run build",
"test": "vitest run",
"test:watch": "vitest",


@@ -13,9 +13,11 @@ export enum GRAPHQL_PUBSUB_CHANNEL {
NOTIFICATION = "NOTIFICATION",
NOTIFICATION_ADDED = "NOTIFICATION_ADDED",
NOTIFICATION_OVERVIEW = "NOTIFICATION_OVERVIEW",
NOTIFICATION_WARNINGS_AND_ALERTS = "NOTIFICATION_WARNINGS_AND_ALERTS",
OWNER = "OWNER",
SERVERS = "SERVERS",
VMS = "VMS",
DOCKER_STATS = "DOCKER_STATS",
LOG_FILE = "LOG_FILE",
PARITY = "PARITY",
}
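As an illustration of how the new channel might be wired up, a sketch is shown below. The `publish` function and the payload key are placeholders, not the API's actual pub/sub interface, and the enum above is assumed to be in scope:

```ts
// Placeholder wiring only: `publish` stands in for whatever pub/sub
// implementation the API uses, and the payload key is illustrative.
type Publish = (channel: GRAPHQL_PUBSUB_CHANNEL, payload: unknown) => Promise<void>;

async function publishWarningsAndAlerts(
    publish: Publish,
    getWarningsAndAlerts: () => Promise<unknown[]>
): Promise<void> {
    const warningsAndAlerts = await getWarningsAndAlerts();
    await publish(GRAPHQL_PUBSUB_CHANNEL.NOTIFICATION_WARNINGS_AND_ALERTS, {
        notificationWarningsAndAlerts: warningsAndAlerts,
    });
}
```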

Some files were not shown because too many files have changed in this diff.