Mirror of https://github.com/opencloud-eu/opencloud.git, synced 2025-12-30 17:00:57 -06:00

Commit: Various docs edits
@@ -540,7 +540,7 @@ curl -L -X PATCH 'https://localhost:9200/graph/v1.0/drives/storage-users-1$535aa
 {{< hint type=info title="Body value" >}}
-This request needs an empty body (--data-raw '{}') to fulfil the standard libregraph specificiation even when the body is not needed.
+This request needs an empty body (--data-raw '{}') to fulfil the standard libregraph specification even when the body is not needed.
 {{< /hint >}}
 {{< /tab >}}
@@ -7,7 +7,7 @@ geekdocEditPath: edit/master/docs/architecture
 geekdocFilePath: upload-processing.md
 ---

-Uploads are handled by a dedicated service that uses TUS.io for rusumable uploads. When all bytes have been transferred the upload is finalized by making the file available in file listings and for download.
+Uploads are handled by a dedicated service that uses TUS.io for resumable uploads. When all bytes have been transferred the upload is finalized by making the file available in file listings and for download.

 The finalization may be asynchronous when mandatory workflow steps are involved.
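As an editorial aside (not part of the commit): a TUS.io resumable upload boils down to a creation request followed by one or more `PATCH` requests carrying the bytes. A minimal sketch, reusing the upload URL and byte count that appear in the sequence diagrams below — the exact endpoint path is an assumption:

```http
PATCH /data/simple/91cc9882-db71-4b37-b694-a522850fcee1 HTTP/1.1
Tus-Resumable: 1.0.0
Upload-Offset: 0
Content-Type: application/offset+octet-stream

HTTP/1.1 204 No Content
Tus-Resumable: 1.0.0
Upload-Offset: 363976
```

If the connection drops, the client asks for the current `Upload-Offset` with a `HEAD` request and resumes the `PATCH` from there — that is what makes the upload resumable.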
@@ -29,7 +29,7 @@ sequenceDiagram
 storageprovider-->>-ocdav: OK, Protocol simple, UploadEndpoint: /data, Token: {jwt}
 Note right of ocdav: The {jwt} contains the internal actual target, eg.:<br>http://localhost:9158/data/simple/91cc9882-db71-4b37-b694-a522850fcee1
 ocdav->>+dataprovider: PUT /data<br>X-Reva-Transfer: {jwt}
 dataprovider-->>-ocdav: 201 Created
 ocdav-->>-Client: 201 Created

 {{</mermaid>}}
@@ -97,7 +97,7 @@ sequenceDiagram
 dataprovider-)nats: emit all-bytes-received event
 nats-)processing: all-bytes-received({uploadid}) event
 Note over dataprovider: TODO: A lot of time may pass here, we could use<br> the `Prefer: respond-async` header to return early<br>with a 202 Accepted status and a Location header<br>to a websocket endpoint
 alt success
 processing-)nats: emit processing-finished({uploadid}) event
 nats-)dataprovider: processing-finished({uploadid}) event
 dataprovider-->>-datagateway: 204 No Content<br>TUS-Resumable: 1.0.0<br>Upload-Offset: 363976
@@ -118,4 +118,4 @@ sequenceDiagram
 ## Async TUS upload with postprocessing
 This might be a TUS extension or a misunderstanding on our side of what tus can do for us. Clients should send a `Prefer: respond-async` header to allow the server to return early when postprocessing might take longer. The PATCH requests can then return status `202 Accepted` and a `Location` header to a websocket that clients can use to track the processing / upload progress.

 TODO there is a conflict with the TUS.io POST request with the creation extension, as that also returns a `Location` header which carries the upload URL. We would need another header to transport the websocket location. Maybe `Websocket-Location` or `Progress-Location`?
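As a sketch of the exchange proposed above — `Progress-Location` is one of the two header names floated in the text, and the websocket URL is invented for illustration:

```http
PATCH /data/simple/91cc9882-db71-4b37-b694-a522850fcee1 HTTP/1.1
Tus-Resumable: 1.0.0
Prefer: respond-async
Upload-Offset: 0

HTTP/1.1 202 Accepted
Preference-Applied: respond-async
Progress-Location: wss://ocis.example.test/processing/91cc9882-db71-4b37-b694-a522850fcee1
```

This keeps the TUS creation extension's `Location` header free to carry the upload URL, which is exactly the conflict the TODO describes.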
@@ -35,7 +35,7 @@ oidc-gen \

 If you have dynamic client registration enabled on your OpenID Connect identity provider, you can skip the `--client-id`, `--client-secret` and `--pub` options.

-If you're using a dedicated OpenID Connect client for the OIDC-agent, we recommend a public one with the following two redirect URIs: `http://127.0.0.1:*` and `http://localhost:*`. Alternatively you also may use the already existing OIDC client of the ownCloud Desktop Client (`--client-id=xdXOt13JKxym1B1QcEncf2XDkLAexMBFwiT9j6EfhhHFJhs2KM9jbjTmf8JBXE69` and `--client-secret=UBntmLjC2yYCeHwsyj73Uwo9TAaecAetRwMw0xYcvNL9yRdLSUi0hUAHfvCHFeFh`, no `--pub` set, request specific scope for oofline access), e.g.:
+If you're using a dedicated OpenID Connect client for the OIDC-agent, we recommend a public one with the following two redirect URIs: `http://127.0.0.1:*` and `http://localhost:*`. Alternatively you also may use the already existing OIDC client of the ownCloud Desktop Client (`--client-id=xdXOt13JKxym1B1QcEncf2XDkLAexMBFwiT9j6EfhhHFJhs2KM9jbjTmf8JBXE69` and `--client-secret=UBntmLjC2yYCeHwsyj73Uwo9TAaecAetRwMw0xYcvNL9yRdLSUi0hUAHfvCHFeFh`, no `--pub` set, request specific scope for offline access), e.g.:
 ``` bash
 oidc-gen \
 --client-id=xdXOt13JKxym1B1QcEncf2XDkLAexMBFwiT9j6EfhhHFJhs2KM9jbjTmf8JBXE69 \
@@ -40,7 +40,7 @@ The following is valid for envvars and yaml files related to the doc process:
 * When filing a pull request in the ocis master branch relating to docs, CI runs `make docs-generate` and copies the result into the `docs` branch of ocis. This branch is then taken as base for owncloud.dev and as reference for the [admin docs](https://doc.owncloud.com/ocis/next/).
 * When running `make docs-generate` locally, the same output is created as above but it stays in the same branch where the make command was issued.

 In both cases, `make docs-generate` removes files in the target folder `_includes` to avoid remnants. All content is recreated.

 On a side note (unrelated to the `docs` branch), [deployment examples](https://github.com/owncloud/ocis/tree/master/deployments/examples) have their own branch related to an ocis stable version to keep the state consistent, which is necessary for the admin documentation.
@@ -71,7 +71,7 @@ Global envvars are gathered by checking if the envvar is available in more than

 ### General Extended Envvars Info

 "Extended" envvars are variables that need to be present *before* the core or services are starting up as they depend on the info provided like path for config files etc. Therefore they are _not_ bound to services like other envvars.

 It can happen that extended envvars are found but do not need to be published as they are for internal use only. Those envvars can be defined to be ignored for further processing.
@@ -81,7 +81,7 @@ IMPORTANT:

 - Because extended envvars do not have the same structural setup as "normal" envvars (like type, description or defaults), this info needs to be provided manually once - even if found multiple times. Any change of this info will be noticed during the next CI run, the corresponding adoc file generated, changes transported to the docs branch and published in the next admin docs build.

-- The identification if an envvar is in the yaml file already present is made via the `rawname` and the `path` identifyer which includes the line number. If there is a change in the source file shifting line numbers, new items will get added and the old ones not touched. Though technically ok, this can cause confusion to identify which items have a correct path reference. To get rid of items with wrong line numbers, correct the existing ones, especially the one containing the description and which is marked to be shown. Only items that have a real line number match need to be present, orphanded items can safely be deleted. You can double check valid items by creating a dummy branch, delete the `extended_vars.yaml` and run `make docs-generate` to regenerate the file having only items with valid path references.
+- The identification if an envvar is in the yaml file already present is made via the `rawname` and the `path` identifier which includes the line number. If there is a change in the source file shifting line numbers, new items will get added and the old ones not touched. Though technically ok, this can cause confusion to identify which items have a correct path reference. To get rid of items with wrong line numbers, correct the existing ones, especially the one containing the description and which is marked to be shown. Only items that have a real line number match need to be present, orphaned items can safely be deleted. You can double-check valid items by creating a dummy branch, delete the `extended_vars.yaml` and run `make docs-generate` to regenerate the file having only items with valid path references.

 - Do not change the sort order of extended envvar blocks as they are automatically reordered alphabetically.

@@ -22,7 +22,7 @@ This ADR deals with a prerequisite for service authorization: service accounts.

 Some services need access to file content without a user being logged in. We currently pass the owner or manager
 of a space in events which allows the search service to impersonate that user to extract metadata from the changed resource.
 There are two problems with this:
 1. The service could get all permissions of the user and gain write permission
 2. There is a race condition where the user in the event might no longer have read permission, causing the index to go stale
@@ -71,14 +71,14 @@ To authenticate service accounts the static reva auth registry needs to be confi
 * Bad, because we have to write code to manage service accounts or at least filter them out in the admin ui


 ### Impersonate Space-Owners

 We could implement a new auth manager that can authenticate space owners, a CS3 user type we introduced for project spaces which 'have no owner', only one or more managers.

 * Good, because it reuses the space owner user type
-* Bad, because the space owner always has write permisson
+* Bad, because the space owner always has write permission
 * Bad, because we don't know if there are places in the code that try to look up a user with USER_TYPE_SPACE_OWNER at the cs3 users service ... they might not exist there ... or do we have to implement a userregistry, similar to the authregistry?
-* Bad, because it feels like another hack and does not protect against compromized services that try to execute operations that the user did not consent to.
+* Bad, because it feels like another hack and does not protect against compromised services that try to execute operations that the user did not consent to.

 ## Links
@@ -92,4 +92,4 @@ Another example would be a `Resource.Read` check for a specific resource. Normal

 In the storage drive implementation we can check the ACLs first (which would allow service accounts that are known to the underlying storage system, e.g. EOS, to access the resource) and then make a call to the permissions service. At least for the Read Resource permission. Other permission checks can be introduced as needed.

 The permission names and constraints are different from the MS Graph API. Giving a permission like [`Files.ReadWrite.All`](https://learn.microsoft.com/en-us/graph/permissions-reference#user-permissions) a different meaning, depending on the type of user (for normal users it means all files they have access to, for service accounts it means all files in the organization) is a source of confusion which only gets worse when there are two different UUIDs for this.
@@ -4,7 +4,7 @@ date: 2022-06-14T16:00:00+02:00
 weight: 5
 geekdocRepo: https://github.com/owncloud/ocis
 geekdocEditPath: edit/master/docs/ocis/guides
-geekdocFilePath: ocis-and-conatiners.md
+geekdocFilePath: ocis-and-containers.md
 geekdocCollapseSection: true
 ---

@@ -189,7 +189,7 @@ This command is handy to run specific commands inside your service. Try `docker

 ### Persist data, restart and logging

-The key to a successful container setup is the persistance of the application data to make the data survive a re-boot. Docker normally uses [volumes](https://docs.docker.com/storage/volumes/) for this purpose. A volume can either be a "named volume" which are completely managed by docker and have many advantages (see the linked docker documentation), or "bind mounts" which are uing the directory structure and OS of the host system. In our example we already use a bind mount for the config file. We will now add a named volume for the oCIS data directory.
+The key to a successful container setup is the persistence of the application data to make the data survive a reboot. Docker normally uses [volumes](https://docs.docker.com/storage/volumes/) for this purpose. A volume can either be a "named volume", which is completely managed by docker and has many advantages (see the linked docker documentation), or a "bind mount", which uses the directory structure and OS of the host system. In our example we already use a bind mount for the config file. We will now add a named volume for the oCIS data directory.

 This is the way we should configure the ocis service:
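A minimal sketch of such a service definition, assuming the service name `ocis`, a bind-mounted config file and a named volume called `ocis-data` (the paths and names are illustrative, not taken from the guide):

```yaml
services:
  ocis:
    image: owncloud/ocis
    volumes:
      - ./ocis.yaml:/etc/ocis/ocis.yaml  # bind mount for the config file
      - ocis-data:/var/lib/ocis          # named volume for the oCIS data directory

volumes:
  ocis-data:                             # declared here so docker manages it
```

A named volume survives `docker compose down` and container rebuilds, which is exactly the reboot-persistence the paragraph asks for.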
@@ -229,7 +229,7 @@ Now let us configure the restart policy and the logging settings for the ocis se
 # you can switch to the "local" log driver which does rotation by default
 logging:
 driver: local
-# otherwise you could specify log rotation exlicitely
+# otherwise you could specify log rotation explicitly
 # driver: "json-file" # this is the default driver
 # options:
 # max-size: "200k" # limit the size of the log file
@@ -304,7 +304,7 @@ services:
 # you can switch to the "local" log driver which does rotation by default
 logging:
 driver: local
-# otherwise you could specify log rotation exlicitely
+# otherwise you could specify log rotation explicitly
 # driver: "json-file" # this is the default driver
 # options:
 # max-size: "200k" # limit the size of the log file

@@ -29,7 +29,7 @@ The antivirus service currently supports [ICAP](https://tools.ietf.org/html/rfc3

 #### Maximum Scan size

-Several factors can make it necessary to limit the maximum filesize the antivirus service will use for scanning. Use the `ANTIVIRUS_MAX_SCAN_SIZE` environment variable to scan only a given amount of bytes. Obviously, it is recommended to scan the whole file, but several factors like scanner type and version, bandwith, performance issues, etc. might make a limit necessary.
+Several factors can make it necessary to limit the maximum filesize the antivirus service will use for scanning. Use the `ANTIVIRUS_MAX_SCAN_SIZE` environment variable to scan only a given amount of bytes. Obviously, it is recommended to scan the whole file, but several factors like scanner type and version, bandwidth, performance issues, etc. might make a limit necessary.

 #### Infected File Handling

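For illustration, capping scans at roughly the first 100 MiB could look like this — the concrete value and the assumption that it is given in bytes are mine, only the variable name comes from the text above:

```yaml
# scan only the first bytes of each file up to this limit (value assumed to be in bytes)
ANTIVIRUS_MAX_SCAN_SIZE=104857600
```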
@@ -17,11 +17,11 @@ The `eventhistory` consumes all events from the configured event system like NAT

 ## Prerequisites

-Running the eventhistory service without an event sytem like NATS is not possible.
+Running the eventhistory service without an event system like NATS is not possible.

 ## Consuming

-The `eventhistory` services consumes all events from the configured event sytem.
+The `eventhistory` service consumes all events from the configured event system.

 ## Storing
@@ -31,10 +31,10 @@ The `eventhistory` service stores each consumed event via the configured store i
 - `redis`: Stores data in a configured redis cluster.
 - `etcd`: Stores data in a configured etcd cluster.
 - `nats-js`: Stores data using the key-value-store feature of [nats jetstream](https://docs.nats.io/nats-concepts/jetstream/key-value-store)
-- `noop`: Stores nothing. Useful for testing. Not recommended in productive enviroments.
+- `noop`: Stores nothing. Useful for testing. Not recommended in production environments.

 1. Note that in-memory stores are by nature not reboot persistent.
-2. Though usually not necessary, a database name and a database table can be configured for event stores if the event store supports this. Generally not applicapable for stores of type `in-memory`. These settings are blank by default which means that the standard settings of the configured store applies.
+2. Though usually not necessary, a database name and a database table can be configured for event stores if the event store supports this. Generally not applicable for stores of type `in-memory`. These settings are blank by default, which means that the standard settings of the configured store apply.
 3. Events stay in the store for 2 weeks by default. Use `EVENTHISTORY_RECORD_EXPIRY` to adjust this value.
 4. The eventhistory service can be scaled if not using `in-memory` stores and the stores are configured identically over all instances.

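A hedged example of wiring the two knobs above together — the store types come from the list, but the duration format for the expiry is an assumption:

```yaml
EVENTHISTORY_STORE=nats-js        # a scalable choice; other supported types are listed above
EVENTHISTORY_RECORD_EXPIRY=168h   # keep events for one week instead of the default two (format assumed)
```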
@@ -24,7 +24,7 @@ The OCS endpoint implements the open collaboration services API in a backwards c

 Aggregating share information is one of the most time consuming operations in OCIS. The service fetches a list of either received or created shares and has to stat every resource individually. While stats are fast, the default behavior scales linearly with the number of shares.

-To save network trips the sharing implementation can cache the stat requests with an in memory cache or in redis. It will shorten the response time by the network rountrip overhead at the cost of the API only eventually being updated.
+To save network trips the sharing implementation can cache the stat requests with an in-memory cache or in redis. It will shorten the response time by the network round-trip overhead at the cost of the API only eventually being updated.

 Setting `FRONTEND_OCS_RESOURCE_INFO_CACHE_TTL=60` would cache the stat info for 60 seconds. Increasing this value makes sense for large deployments with thousands of active users that keep the cache up to date. Low frequency usage scenarios should not expect a noticeable improvement.
@@ -34,4 +34,4 @@ The archiver endpoint provides bundled downloads of multiple files and folders.

 ## Appprovider

 The appprovider endpoint is used to manage available apps that can be used to open different file types.
@@ -43,7 +43,7 @@ To activate the policies service for a module, it must be started with a yaml co

 When using async post-processing via the postprocessing service, the value `policies` must be added to the `POSTPROCESSING_STEPS` configuration in the order in which the evaluation should take place. Example: First check if a file contains questionable content via policies. If it looks okay, continue to check for viruses.

 For configuration examples, the [Example Policies](#example-policies) below are used.

 ## Modules
@@ -99,7 +99,7 @@ proxy:
 query: data.proxy.granted
 ```

-The same can be achieved by setting the following evironment variable:
+The same can be achieved by setting the following environment variable:

 ```yaml
 PROXY_POLICIES_QUERY=data.proxy.granted
@@ -113,13 +113,13 @@ policies:
 query: data.postprocessing.granted
 ```

-The same can be achieved by setting the following evironment variable:
+The same can be achieved by setting the following environment variable:

 ```yaml
 POLICIES_POSTPROCESSING_QUERY=data.postprocessing.granted
 ```

 As soon as that query is configured, the postprocessing service must be informed to use the policies step by setting the environment variable:

 ```yaml
 POSTPROCESSING_STEPS=policies
@@ -129,7 +129,7 @@ Note that additional steps can be configured and their position in the list defi

 ## Rego Key Match

-To identify available keys for OPA, you need to look at [engine.go](https://github.com/owncloud/ocis/blob/master/services/policies/pkg/engine/engine.go) and the [policies.swagger.json](https://github.com/owncloud/ocis/blob/master/protogen/gen/ocis/services/policies/v0/policies.swagger.json) file. Note that which keys are avaialble depends from which module it is used.
+To identify available keys for OPA, you need to look at [engine.go](https://github.com/owncloud/ocis/blob/master/services/policies/pkg/engine/engine.go) and the [policies.swagger.json](https://github.com/owncloud/ocis/blob/master/protogen/gen/ocis/services/policies/v0/policies.swagger.json) file. Note that which keys are available depends on the module it is used from.

 ## Example Policies

@@ -23,17 +23,17 @@ To use the postprocessing service, an event system needs to be configured for al

 The storageprovider service (`storage-users`) can be configured to initiate asynchronous postprocessing by setting the `STORAGE_USERS_OCIS_ASYNC_UPLOADS` environment variable to `true`. If this is the case, postprocessing will get initiated *after* uploading a file and all bytes have been received.

-The `postprocessing` service will then coordinate configured postprocessing steps like scanning the file for viruses. During postprocessing, the file will be in a `processing state` where only a limited set of actions are available. Note that this processing state excludes file accessability by users.
+The `postprocessing` service will then coordinate configured postprocessing steps like scanning the file for viruses. During postprocessing, the file will be in a `processing state` where only a limited set of actions are available. Note that this processing state excludes file accessibility by users.

 When all postprocessing steps have completed successfully, the file will be made accessible for users.

 ## Additional Prerequisites for the Postprocessing Service

 When postprocessing has been enabled, configuring any postprocessing step will require the requested services to be enabled and pre-configured. For example, to use the `virusscan` step, one needs to have an enabled and configured `antivirus` service.

 ## Postprocessing Steps

-The postporcessing service is individually configurable. This is achieved by allowing a list of postprocessing steps that are processed in order of their appearance in the `POSTPROCESSING_STEPS` envvar. This envvar expects a comma separated list of steps that will be executed. Currently known steps to the system are `virusscan` and `delay`. Custom steps can be added but need an existing target for processing.
+The postprocessing service is individually configurable. This is achieved by allowing a list of postprocessing steps that are processed in order of their appearance in the `POSTPROCESSING_STEPS` envvar. This envvar expects a comma separated list of steps that will be executed. Currently known steps to the system are `virusscan` and `delay`. Custom steps can be added but need an existing target for processing.

 ### Virus Scanning

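Putting the pieces above together, an async-upload setup with an ordered step list might look like this; the particular step order is illustrative, the variable names and step keywords are the ones named in the text:

```yaml
STORAGE_USERS_OCIS_ASYNC_UPLOADS=true    # storageprovider hands finished uploads to async postprocessing
POSTPROCESSING_STEPS=policies,virusscan  # steps run in the order listed: policy check first, then virus scan
```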
@@ -41,7 +41,7 @@ To enable virus scanning as a postprocessing step after uploading a file, the en

 ### Delay

-Though this is for development purposes only and NOT RECOMMENDED on production systems, setting the environment variable `POSTPROCESSING_DELAY` to a duration not equal to zero will add a delay step with the configured amount of time. ocis will continue postprocessing the file after the configured delay. Use the enviroment variable `POSTPROCESSING_STEPS` and the keyword `delay` if you have multiple postprocessing steps and want to define their order. If `POSTPROCESSING_DELAY` is set but the keyword `delay` is not contained in `POSTPROCESSING_STEPS`, it will be processed as last postprocessing step without being listed there. In this case, a log entry will be written on service startup to notify the admin about that situation. That log entry can be avoided by adding the keyword `delay` to `POSTPROCESSING_STEPS`.
+Though this is for development purposes only and NOT RECOMMENDED on production systems, setting the environment variable `POSTPROCESSING_DELAY` to a duration not equal to zero will add a delay step with the configured amount of time. ocis will continue postprocessing the file after the configured delay. Use the environment variable `POSTPROCESSING_STEPS` and the keyword `delay` if you have multiple postprocessing steps and want to define their order. If `POSTPROCESSING_DELAY` is set but the keyword `delay` is not contained in `POSTPROCESSING_STEPS`, it will be processed as last postprocessing step without being listed there. In this case, a log entry will be written on service startup to notify the admin about that situation. That log entry can be avoided by adding the keyword `delay` to `POSTPROCESSING_STEPS`.

 ### Custom Postprocessing Steps
 By using the envvar `POSTPROCESSING_STEPS`, custom postprocessing steps can be added. Any word can be used as step name but be careful not to conflict with existing keywords like `virusscan` and `delay`. In addition, if a keyword is misspelled or the corresponding service does either not exist or does not follow the necessary event communication, the postprocessing service will wait forever for the required response to proceed and will not continue any other processing.
@@ -50,8 +50,8 @@ By using the envvar `POSTPROCESSING_STEPS`, custom postprocessing steps can be a
 For using custom postprocessing steps you need a custom service listening to the configured event system (see `General Prerequisites`)

 #### Workflow
-When setting a custom postprocessing step (eg. `"customstep"`) the postprocessing service will eventually sent an event during postprocessing. The event will be of type `StartPostprocessingStep` with its field `StepToStart` set to `"customstep"`. When the custom service receives this event it can savely execute its actions, postprocessing service will wait until it has finished its work. The event contains further information (filename, executing user, size, ...) and also required tokens and urls to download the file in case byte inspection is necessary.
+When setting a custom postprocessing step (eg. `"customstep"`) the postprocessing service will eventually send an event during postprocessing. The event will be of type `StartPostprocessingStep` with its field `StepToStart` set to `"customstep"`. When the custom service receives this event it can safely execute its actions, the postprocessing service will wait until it has finished its work. The event contains further information (filename, executing user, size, ...) and also required tokens and urls to download the file in case byte inspection is necessary.

 Once the custom service has finished its work, it should send an event of type `PostprocessingFinished` via the configured events system. This event needs to contain a `FinishedStep` field set to `"customstep"`. It also must contain the outcome of the step, which can be one of "delete" (abort postprocessing, delete the file), "abort" (abort postprocessing, keep the file) and "continue" (continue postprocessing, this is the success case).

-See the [cs3 org](https://github.com/cs3org/reva/blob/edge/pkg/events/postprocessing.go) for up-to-date information of reserved step names and event definitons.
+See the [cs3 org](https://github.com/cs3org/reva/blob/edge/pkg/events/postprocessing.go) for up-to-date information on reserved step names and event definitions.
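As a sketch only: the `PostprocessingFinished` reply for the hypothetical `"customstep"` could carry a payload along these lines. `FinishedStep` and the three outcome values come from the text above; the `Outcome` key name and the payload shape are assumptions to be checked against the reva event definitions linked above:

```yaml
# hypothetical payload shape; the real struct lives in reva's postprocessing.go
FinishedStep: customstep   # must match the StepToStart value received earlier
Outcome: continue          # one of: delete, abort, continue (key name assumed)
```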
@@ -9,7 +9,7 @@ geekdocCollapseSection: true

 ## Abstract

-The proxy service is an API-Gateway for the ownCloud Infinite Scale microservices. Every HTTP request goes through this service. Authentication, logging and other preprocessing of requests also happens here. Mechanisms like request rate limitting or intrusion prevention are **not** included in the proxy service and must be setup in front like with an external reverse proxy.
+The proxy service is an API-Gateway for the ownCloud Infinite Scale microservices. Every HTTP request goes through this service. Authentication, logging and other preprocessing of requests also happens here. Mechanisms like request rate limiting or intrusion prevention are **not** included in the proxy service and must be set up in front, for example with an external reverse proxy.

 The proxy service is the only service communicating to the outside and therefore needs the usual protections against DDOS, Slow Loris or other attack vectors. All other services are not exposed to the outside, but also need protective measures when it comes to distributed setups like when using container orchestration over various physical servers.

@@ -27,10 +27,10 @@ The `userlog` service persists information via the configured store in `USERLOG_
 - `redis`: Stores data in a configured redis cluster.
 - `etcd`: Stores data in a configured etcd cluster.
 - `nats-js`: Stores data using the key-value-store feature of [nats jetstream](https://docs.nats.io/nats-concepts/jetstream/key-value-store)
-- `noop`: Stores nothing. Useful for testing. Not recommended in productive enviroments.
+- `noop`: Stores nothing. Useful for testing. Not recommended in production environments.

 1. Note that in-memory stores are by nature not reboot persistent.
-2. Though usually not necessary, a database name and a database table can be configured for event stores if the event store supports this. Generally not applicapable for stores of type `in-memory`. These settings are blank by default which means that the standard settings of the configured store applies.
+2. Though usually not necessary, a database name and a database table can be configured for event stores if the event store supports this. Generally not applicable for stores of type `in-memory`. These settings are blank by default, which means that the standard settings of the configured store apply.
 3. The userlog service can be scaled if not using `in-memory` stores and the stores are configured identically over all instances.

 ## Configuring