diff --git a/docs/ocis/migration.md b/docs/ocis/migration.md
index 12583248ac..46a7362715 100644
--- a/docs/ocis/migration.md
+++ b/docs/ocis/migration.md
@@ -119,10 +119,10 @@ _Feel free to add your question as a PR to this document using the link at the t
### Stage 3: introduce oCIS internally
-Befor letting oCIS handle end user requests we will first make it available in the internal network. By subsequently adding services we can add functionality and verify the services work as intended.
+Before letting oCIS handle end-user requests, we will first make it available in the internal network. By adding services one at a time we can add functionality and verify that the services work as intended.
Start the oCIS backend and run read-only tests on existing data using the `owncloudsql` storage driver, which will read (and write)
-- blobs from the same datadirectory layout as in ownCloud 10
+- blobs from the same data directory layout as in ownCloud 10
- metadata from the ownCloud 10 database:
The oCIS share manager will read share information from the ownCloud database using an `owncloud` driver as well.
@@ -139,11 +139,11 @@ None, only administrators will be able to explore oCIS during this stage.
#### Steps and verifications
-We are going to run and explore a series of services that will together handle the same requests as ownCloud 10. For initial exploration the oCIS binary is recommended. The services can later be deployed using a single oCIS runtime or in multiple cotainers.
+We are going to run and explore a series of services that will together handle the same requests as ownCloud 10. For initial exploration the oCIS binary is recommended. The services can later be deployed using a single oCIS runtime or in multiple containers.
##### Storage provider for file metadata
-1. Deploy OCIS storage provider with owncloudsql driver.
+1. Deploy the oCIS storage provider with the `owncloudsql` driver.
2. Set `read_only: true` in the storage provider config.
_TODO @butonic currently, the migration selector will use the `ocis` policy for users that have been added to the accounts service. IMO we need to evaluate a claim from the IdP._
+1. Change the routing policy for a user or a group of early adopters to `ocis`. _TODO @butonic currently, the migration selector will use the `ocis` policy for users that have been added to the accounts service. IMO we need to evaluate a claim from the IdP._
2. Verify the requests are routed based on the oCIS routing policy `oc10` for 'migrated' users.
At this point you are ready to rock & roll!
@@ -340,8 +340,7 @@ _TODO @butonic we need a canary app that allows users to decide for themself whi
@@ -352,7 +351,29 @@ _Feel free to add your question as a PR to this document using the link at the t
-### Stage-7: shut down ownCloud 10
+### Stage-7: introduce spaces using oCIS
+To encourage users to switch, you can promote the workspaces feature built into oCIS. The ownCloud 10 storage backend can still be used for existing users, while new users and group or project spaces can be provided by storage providers that better suit the underlying storage system.
+
+#### Steps
+First, the admin needs to
+- deploy a storage provider with the storage driver that best fits the underlying storage system and requirements.
+- register the new storage provider in the storage registry with a new storage ID (we recommend a UUID).
+
+Then a user with the necessary create-storage-space role can create a storage space and assign Managers, for example through the CS3 API as sketched below.
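+
+As an illustration only, the sketch below shows what creating such a space could look like against the CS3 gateway in Go. The gateway address, access token, space name and owner are placeholders, and the exact `CreateStorageSpace` request fields depend on the cs3apis version in use, so treat this as a hedged sketch rather than the canonical procedure (ocis-web and the graph API can do the same for you).
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+
+	gateway "github.com/cs3org/go-cs3apis/cs3/gateway/v1beta1"
+	userpb "github.com/cs3org/go-cs3apis/cs3/identity/user/v1beta1"
+	rpc "github.com/cs3org/go-cs3apis/cs3/rpc/v1beta1"
+	provider "github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/metadata"
+)
+
+func main() {
+	// Placeholder gateway address and token for this sketch.
+	conn, err := grpc.Dial("localhost:9142", grpc.WithInsecure())
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer conn.Close()
+	client := gateway.NewGatewayAPIClient(conn)
+
+	// reva reads the user's access token from the x-access-token metadata field.
+	ctx := metadata.AppendToOutgoingContext(context.Background(), "x-access-token", "<token>")
+
+	res, err := client.CreateStorageSpace(ctx, &provider.CreateStorageSpaceRequest{
+		Type: "project",
+		Name: "marketing",
+		Owner: &userpb.User{
+			Id: &userpb.UserId{Idp: "https://idp.example.org", OpaqueId: "einstein"},
+		},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	if res.Status.Code != rpc.Code_CODE_OK {
+		log.Fatalf("unexpected status: %v", res.Status)
+	}
+	fmt.Println("created storage space", res.StorageSpace.Id.OpaqueId)
+}
+```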
+
+
+
+_TODO @butonic a user with management permission needs to be presented with a list of storage spaces where he can see the amount of free space and decide on which storage provider the storage space should be created. For now a config option for the default storage provider for a specific type might be good enough._
+
+
+
+#### Verification
+The new storage space should show up in the `/graph/drives` endpoint for the managers and the creator of the space.
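+
+A quick way to check this is to query the drives endpoint with the user's access token. A minimal sketch in Go, assuming a made-up host; the exact graph route may be versioned (e.g. `/graph/v1.0/me/drives`) depending on the oCIS release:
+
+```go
+package main
+
+import (
+	"fmt"
+	"io"
+	"log"
+	"net/http"
+)
+
+func main() {
+	// Placeholder host and token; adjust to your deployment.
+	req, err := http.NewRequest("GET", "https://ocis.example.org/graph/v1.0/me/drives", nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	req.Header.Set("Authorization", "Bearer <access-token>")
+
+	res, err := http.DefaultClient.Do(req)
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer res.Body.Close()
+
+	body, _ := io.ReadAll(res.Body)
+	fmt.Println(res.Status)
+	// The newly created space should appear in the returned list of drives
+	// for the managers and the creator of the space.
+	fmt.Println(string(body))
+}
+```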
+
+#### Notes
+Depending on the requirements and acceptable tradeoffs, a database-less deployment using the `ocis` or `s3ng` storage driver is possible. There is also a [cephfs driver](https://github.com/cs3org/reva/pull/1209) on the way that works directly against the Ceph API instead of going through POSIX.
+
+### Stage-8: shut down ownCloud 10
Disable ownCloud 10 in the proxy, all requests are now handled by oCIS, shut down oc10 web servers and redis (or keep for calendar & contacts only? rip out files from oCIS?)
#### User impact
@@ -387,7 +408,7 @@ _Feel free to add your question as a PR to this document using the link at the t
-### Stage 8: storage migration
+### Stage 9: storage migration
To get rid of the database we will move the metadata from the old ownCloud 10 database into dedicated storage providers. This can happen in a user-by-user fashion. Group drives can properly be migrated to group spaces, project spaces or workspaces in this stage.
#### User impact
@@ -401,12 +422,12 @@ Noticeable performance improvements because we effectively shard the storage log
_TODO @butonic implement `ownclouds3` based on `s3ng`_
_TODO @butonic implement tiered storage provider for seamless migration_
-_TODO @butonic document how to manually do that until the storge registry can discover that on its own._
+_TODO @butonic document how to manually do that until the storage registry can discover that on its own._
#### Verification
-Start with a test user, then move to early adoptors and finally migrate all users.
+Start with a test user, then move to early adopters and finally migrate all users.
#### Rollback
To switch the storage provider, the same storage space migration can be performed again: copy metadata and blob data using the CS3 API, then change the responsible storage provider in the storage registry.
@@ -426,13 +447,13 @@ _Feel free to add your question as a PR to this document using the link at the t
@@ -465,8 +486,8 @@ To switch the share manager to the database one revert routing users to the new
-### Stage-10
-Profit! Well, on the one hand you do not need to maintain a clustered database setup and can rely on the storage system. On the other hand you are now in microservice wonderland and will have to relearn how to identify bottlenecks and scale oCIS accordingly. The good thing is that tools like jaeger and prometheus have evolved and will help you understand what is going on. But this is a different Topic. See you on the other side!
+### Stage-11
+Profit! Well, on the one hand you no longer need to maintain a clustered database setup and can rely on the storage system. On the other hand, you are now in microservice wonderland and will have to relearn how to identify bottlenecks and scale oCIS accordingly. The good thing is that tools like Jaeger and Prometheus have evolved and will help you understand what is going on. But this is a different topic. See you on the other side!
#### FAQ
_Feel free to add your question as a PR to this document using the link at the top of this page!_
diff --git a/docs/ocis/storage-backends/cephfs.md b/docs/ocis/storage-backends/cephfs.md
new file mode 100644
index 0000000000..9cc19ffd52
--- /dev/null
+++ b/docs/ocis/storage-backends/cephfs.md
@@ -0,0 +1,32 @@
+---
+title: "cephfs"
+date: 2021-09-13T15:36:00+01:00
+weight: 30
+geekdocRepo: https://github.com/owncloud/ocis
+geekdocEditPath: edit/master/docs/ocis/storage-backends/
+geekdocFilePath: cephfs.md
+---
+
+{{< toc >}}
+
+oCIS intends to make the aspects of existing storage systems available as transparently as possible, but the static sync algorithm of the desktop client relies on some form of recursive change time propagation on the server side to detect changes. While this can be bolted on top of existing file systems with inotify, the kernel audit subsystem or a FUSE based overlay filesystem, a storage system that already implements this aspect is preferable. Aside from EOS, cephfs supports a recursive change time that oCIS can use to calculate an etag for the WebDAV API.
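+
+As a rough illustration of the idea (not the actual driver code): cephfs exposes the recursive change time as the `ceph.dir.rctime` extended attribute, which can be combined with the inode to derive an etag. The sketch below assumes a kernel-mounted cephfs and a made-up path; the real driver talks to the Ceph APIs directly.
+
+```go
+package main
+
+import (
+	"crypto/md5"
+	"fmt"
+	"log"
+
+	"golang.org/x/sys/unix"
+)
+
+// etagFor derives a simple etag for a directory on a kernel-mounted cephfs
+// from its recursive change time.
+func etagFor(path string) (string, error) {
+	buf := make([]byte, 64)
+	// ceph.dir.rctime is a virtual xattr maintained by cephfs; it changes
+	// whenever anything below the directory changes.
+	n, err := unix.Getxattr(path, "ceph.dir.rctime", buf)
+	if err != nil {
+		return "", err
+	}
+	var st unix.Stat_t
+	if err := unix.Stat(path, &st); err != nil {
+		return "", err
+	}
+	// Hash inode + rctime so the etag changes whenever the subtree changes.
+	sum := md5.Sum([]byte(fmt.Sprintf("%d:%s", st.Ino, buf[:n])))
+	return fmt.Sprintf("\"%x\"", sum), nil
+}
+
+func main() {
+	etag, err := etagFor("/mnt/cephfs/users/einstein")
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println(etag)
+}
+```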
+
+## Development
+
+The cephfs development happens in a [reva branch](https://github.com/cs3org/reva/pull/1209) and is currently driven by CERN.
+
+## Architecture
+
+In the original approach the driver was based on the localfs driver and relied on a locally mounted cephfs, interfacing with it through the POSIX APIs. This has been changed to direct Ceph API access using [go-ceph](https://github.com/ceph/go-ceph), which allows using the Ceph admin APIs to create subvolumes for user homes and to maintain a file-id-to-path mapping using symlinks.
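+
+A very rough sketch of the approach with go-ceph is shown below. The function names approximate the go-ceph API (`cephfs/admin` for subvolumes, `cephfs` for the mount), and the volume name and symlink path are made up, so treat it as an illustration of the idea rather than the driver's actual code.
+
+```go
+package main
+
+import (
+	"fmt"
+	"log"
+
+	"github.com/ceph/go-ceph/cephfs"
+	fsadmin "github.com/ceph/go-ceph/cephfs/admin"
+	"github.com/ceph/go-ceph/rados"
+)
+
+func main() {
+	// Connect to the cluster using the default config file and keyring.
+	conn, err := rados.NewConn()
+	if err != nil {
+		log.Fatal(err)
+	}
+	if err := conn.ReadDefaultConfigFile(); err != nil {
+		log.Fatal(err)
+	}
+	if err := conn.Connect(); err != nil {
+		log.Fatal(err)
+	}
+	defer conn.Shutdown()
+
+	// Create a subvolume for a user home via the cephfs admin API
+	// ("cephfs" is the volume name, "" means no subvolume group).
+	fsa := fsadmin.NewFromConn(conn)
+	if err := fsa.CreateSubVolume("cephfs", "", "einstein", nil); err != nil {
+		log.Fatal(err)
+	}
+	home, err := fsa.SubVolumePath("cephfs", "", "einstein")
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println("subvolume at", home)
+
+	// Mount through libcephfs and record the file id -> path mapping as a
+	// symlink so id based lookups do not require a tree walk.
+	mount, err := cephfs.CreateFromRados(conn)
+	if err != nil {
+		log.Fatal(err)
+	}
+	if err := mount.Mount(); err != nil {
+		log.Fatal(err)
+	}
+	defer mount.Unmount()
+	if err := mount.Symlink(home, "/.ids/<node-id>"); err != nil {
+		log.Fatal(err)
+	}
+}
+```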
+
+It also uses the `.snap` folder built into Ceph to provide versions.
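+
+For example, on a mounted cephfs the snapshots of a directory simply appear as entries below its `.snap` directory; the path below is made up and the actual driver uses the Ceph APIs instead of a local mount:
+
+```go
+package main
+
+import (
+	"fmt"
+	"log"
+	"os"
+	"path/filepath"
+)
+
+func main() {
+	// Each entry below .snap is a snapshot of the directory and can be
+	// exposed as a version of the files it contains.
+	entries, err := os.ReadDir(filepath.Join("/mnt/cephfs/users/einstein", ".snap"))
+	if err != nil {
+		log.Fatal(err)
+	}
+	for _, e := range entries {
+		fmt.Println("version:", e.Name())
+	}
+}
+```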
+
+Trash is not implemented, as cephfs has no native recycle bin.
+
+## Future work
+- The spaces concept matches subvolumes: implement the CreateStorageSpace call with that and keep track of the list of storage spaces using symlinks, as is done for the id based lookup.
+- The Share manager needs a persistence layer.
+ - Currently we persist using a JSON file; an SQLite db would be more robust.
+ - As it basically provides two lists, *shared with me* and *shared with others*, we could persist this directly on cephfs!
+ - To allow deprovisioning a user, the data should be sharded by user id.
+ - Backups are then done using snapshots.
\ No newline at end of file