update reva to v2.17.0

Michael Barz
2023-12-12 19:18:18 +01:00
parent 382d8b170d
commit b3528ac5b3
28 changed files with 2486 additions and 9 deletions

View File

@@ -0,0 +1,76 @@
Enhancement: Update reva to v2.17.0
Changelog for reva 2.17.0 (2023-12-12)
=======================================
The following sections list the changes in reva 2.17.0 relevant to
reva users. The changes are ordered by importance.
* Bugfix [cs3org/reva#4278](https://github.com/cs3org/reva/pull/4278): Disable DEPTH infinity in PROPFIND
* Bugfix [cs3org/reva#4318](https://github.com/cs3org/reva/pull/4318): Do not allow moves between shares
* Bugfix [cs3org/reva#4290](https://github.com/cs3org/reva/pull/4290): Prevent panic when trying to move a non-existent file
* Bugfix [cs3org/reva#4241](https://github.com/cs3org/reva/pull/4241): Allow an empty credentials chain in the auth middleware
* Bugfix [cs3org/reva#4216](https://github.com/cs3org/reva/pull/4216): Fix an error message
* Bugfix [cs3org/reva#4324](https://github.com/cs3org/reva/pull/4324): Fix capabilities decoding
* Bugfix [cs3org/reva#4267](https://github.com/cs3org/reva/pull/4267): Fix concurrency issue
* Bugfix [cs3org/reva#4362](https://github.com/cs3org/reva/pull/4362): Fix concurrent lookup
* Bugfix [cs3org/reva#4336](https://github.com/cs3org/reva/pull/4336): Fix definition of "file-editor" role
* Bugfix [cs3org/reva#4302](https://github.com/cs3org/reva/pull/4302): Fix checking of filename length
* Bugfix [cs3org/reva#4366](https://github.com/cs3org/reva/pull/4366): Fix CS3 status code when looking up a non-existent share
* Bugfix [cs3org/reva#4299](https://github.com/cs3org/reva/pull/4299): Fix HTTP verb of the generate-invite endpoint
* Bugfix [cs3org/reva#4249](https://github.com/cs3org/reva/pull/4249): GetUserByClaim not working with MSAD for claim "userid"
* Bugfix [cs3org/reva#4217](https://github.com/cs3org/reva/pull/4217): Fix missing case for "hide" in UpdateShares
* Bugfix [cs3org/reva#4140](https://github.com/cs3org/reva/pull/4140): Fix missing etag in shares jail
* Bugfix [cs3org/reva#4229](https://github.com/cs3org/reva/pull/4229): Fix destroying the Personal and Project spaces data
* Bugfix [cs3org/reva#4193](https://github.com/cs3org/reva/pull/4193): Fix overwrite a file with an empty file
* Bugfix [cs3org/reva#4365](https://github.com/cs3org/reva/pull/4365): Fix create public share
* Bugfix [cs3org/reva#4380](https://github.com/cs3org/reva/pull/4380): Fix the public link update
* Bugfix [cs3org/reva#4250](https://github.com/cs3org/reva/pull/4250): Fix race condition
* Bugfix [cs3org/reva#4345](https://github.com/cs3org/reva/pull/4345): Fix conversion of custom ocs permissions to roles
* Bugfix [cs3org/reva#4134](https://github.com/cs3org/reva/pull/4134): Fix share jail
* Bugfix [cs3org/reva#4335](https://github.com/cs3org/reva/pull/4335): Fix public shares cleanup config
* Bugfix [cs3org/reva#4338](https://github.com/cs3org/reva/pull/4338): Fix unlock via space API
* Bugfix [cs3org/reva#4341](https://github.com/cs3org/reva/pull/4341): Fix spaceID in meta endpoint response
* Bugfix [cs3org/reva#4351](https://github.com/cs3org/reva/pull/4351): Fix 500 when opening a public link
* Bugfix [cs3org/reva#4352](https://github.com/cs3org/reva/pull/4352): Fix the tgz mime type
* Bugfix [cs3org/reva#4388](https://github.com/cs3org/reva/pull/4388): Allow UpdateUserShare() to update just the expiration date
* Bugfix [cs3org/reva#4214](https://github.com/cs3org/reva/pull/4214): Always pass adjusted default nats options
* Bugfix [cs3org/reva#4291](https://github.com/cs3org/reva/pull/4291): Release lock when expired
* Bugfix [cs3org/reva#4386](https://github.com/cs3org/reva/pull/4386): Remove dead enable_home config
* Bugfix [cs3org/reva#4292](https://github.com/cs3org/reva/pull/4292): Return 403 when user is not permitted to lock
* Enhancement [cs3org/reva#4389](https://github.com/cs3org/reva/pull/4389): Add audio and location props
* Enhancement [cs3org/reva#4337](https://github.com/cs3org/reva/pull/4337): Check permissions before creating shares
* Enhancement [cs3org/reva#4326](https://github.com/cs3org/reva/pull/4326): Add search mediatype filter
* Enhancement [cs3org/reva#4367](https://github.com/cs3org/reva/pull/4367): Add GGS mime type
* Enhancement [cs3org/reva#4194](https://github.com/cs3org/reva/pull/4194): Add hide flag to shares
* Enhancement [cs3org/reva#4358](https://github.com/cs3org/reva/pull/4358): Add default permissions capability for links
* Enhancement [cs3org/reva#4133](https://github.com/cs3org/reva/pull/4133): Add more metadata to locks
* Enhancement [cs3org/reva#4353](https://github.com/cs3org/reva/pull/4353): Add support for .docxf files
* Enhancement [cs3org/reva#4363](https://github.com/cs3org/reva/pull/4363): Add nats-js-kv store
* Enhancement [cs3org/reva#4197](https://github.com/cs3org/reva/pull/4197): Add the Banned-Passwords List
* Enhancement [cs3org/reva#4190](https://github.com/cs3org/reva/pull/4190): Add the password policies
* Enhancement [cs3org/reva#4384](https://github.com/cs3org/reva/pull/4384): Add a retry postprocessing outcome and event
* Enhancement [cs3org/reva#4271](https://github.com/cs3org/reva/pull/4271): Add search capability
* Enhancement [cs3org/reva#4119](https://github.com/cs3org/reva/pull/4119): Add sse event
* Enhancement [cs3org/reva#4392](https://github.com/cs3org/reva/pull/4392): Add additional permissions to service accounts
* Enhancement [cs3org/reva#4344](https://github.com/cs3org/reva/pull/4344): Add url extension to mime type list
* Enhancement [cs3org/reva#4372](https://github.com/cs3org/reva/pull/4372): Add validation to the public share provider
* Enhancement [cs3org/reva#4244](https://github.com/cs3org/reva/pull/4244): Allow listing received shares by service accounts
* Enhancement [cs3org/reva#4129](https://github.com/cs3org/reva/pull/4129): Auto-Accept Shares through ServiceAccounts
* Enhancement [cs3org/reva#4374](https://github.com/cs3org/reva/pull/4374): Handle trashbin file listings concurrently
* Enhancement [cs3org/reva#4325](https://github.com/cs3org/reva/pull/4325): Enforce Permissions
* Enhancement [cs3org/reva#4368](https://github.com/cs3org/reva/pull/4368): Extract log initialization
* Enhancement [cs3org/reva#4375](https://github.com/cs3org/reva/pull/4375): Introduce UploadSessionLister interface
* Enhancement [cs3org/reva#4268](https://github.com/cs3org/reva/pull/4268): Implement sharing roles
* Enhancement [cs3org/reva#4160](https://github.com/cs3org/reva/pull/4160): Improve utils pkg
* Enhancement [cs3org/reva#4335](https://github.com/cs3org/reva/pull/4335): Add sufficient permissions check function
* Enhancement [cs3org/reva#4281](https://github.com/cs3org/reva/pull/4281): Port OCM changes from master
* Enhancement [cs3org/reva#4270](https://github.com/cs3org/reva/pull/4270): Opt out of public link password enforcement
* Enhancement [cs3org/reva#4181](https://github.com/cs3org/reva/pull/4181): Change the variable names for the password policy
* Enhancement [cs3org/reva#4256](https://github.com/cs3org/reva/pull/4256): Rename hidden share variable name
* Enhancement [cs3org/reva#3926](https://github.com/cs3org/reva/pull/3926): Service Accounts
* Enhancement [cs3org/reva#4359](https://github.com/cs3org/reva/pull/4359): Update go-ldap to v3.4.6
* Enhancement [cs3org/reva#4170](https://github.com/cs3org/reva/pull/4170): Update password policies
* Enhancement [cs3org/reva#4232](https://github.com/cs3org/reva/pull/4232): Improve error handling in utils package
https://github.com/owncloud/ocis/pull/7949

6
go.mod
View File

@@ -13,7 +13,7 @@ require (
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/coreos/go-oidc/v3 v3.9.0
github.com/cs3org/go-cs3apis v0.0.0-20231023073225-7748710e0781
github.com/cs3org/reva/v2 v2.16.1-0.20231212124908-ab6ed782de28
github.com/cs3org/reva/v2 v2.17.0
github.com/dhowden/tag v0.0.0-20230630033851-978a0926ee25
github.com/disintegration/imaging v1.6.2
github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e
@@ -162,6 +162,7 @@ require (
github.com/containerd/cgroups/v3 v3.0.2 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cornelk/hashmap v1.0.8 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/crewjam/httperr v0.2.0 // indirect
github.com/crewjam/saml v0.4.14 // indirect
@@ -193,6 +194,7 @@ require (
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-micro/plugins/v4/events/natsjs v1.2.2-0.20230807070816-bc05fb076ce7 // indirect
github.com/go-micro/plugins/v4/store/nats-js-kv v0.0.0-00010101000000-000000000000 // indirect
github.com/go-micro/plugins/v4/store/redis v1.2.1-0.20230510195111-07cd57e1bc9d // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
@@ -346,3 +348,5 @@ require (
)
replace github.com/go-micro/plugins/v4/store/nats-js => github.com/kobergj/plugins/v4/store/nats-js v1.2.1-0.20231020092801-9463c820c19a
replace github.com/go-micro/plugins/v4/store/nats-js-kv => github.com/kobergj/plugins/v4/store/nats-js-kv v0.0.0-20231207143248-4d424e3ae348

8
go.sum
View File

@@ -1005,6 +1005,8 @@ github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cornelk/hashmap v1.0.8 h1:nv0AWgw02n+iDcawr5It4CjQIAcdMMKRrs10HOJYlrc=
github.com/cornelk/hashmap v1.0.8/go.mod h1:RfZb7JO3RviW/rT6emczVuC/oxpdz4UsSB2LJSclR1k=
github.com/cpu/goacmedns v0.1.1/go.mod h1:MuaouqEhPAHxsbqjgnck5zeghuwBP1dLnPoobeGqugQ=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
@@ -1017,8 +1019,8 @@ github.com/crewjam/saml v0.4.14 h1:g9FBNx62osKusnFzs3QTN5L9CVA/Egfgm+stJShzw/c=
github.com/crewjam/saml v0.4.14/go.mod h1:UVSZCf18jJkk6GpWNVqcyQJMD5HsRugBPf4I1nl2mME=
github.com/cs3org/go-cs3apis v0.0.0-20231023073225-7748710e0781 h1:BUdwkIlf8IS2FasrrPg8gGPHQPOrQ18MS1Oew2tmGtY=
github.com/cs3org/go-cs3apis v0.0.0-20231023073225-7748710e0781/go.mod h1:UXha4TguuB52H14EMoSsCqDj7k8a/t7g4gVP+bgY5LY=
github.com/cs3org/reva/v2 v2.16.1-0.20231212124908-ab6ed782de28 h1:IhBjtl4F/aAUdbpfjWOy1jwzrh1wLOH50UToPPOqJy8=
github.com/cs3org/reva/v2 v2.16.1-0.20231212124908-ab6ed782de28/go.mod h1:zcrrYVsBv/DwhpyO2/W5hoSZ/k6az6Z2EYQok65uqZY=
github.com/cs3org/reva/v2 v2.17.0 h1:cp7WXY+mZGLie4CKvIe3K+D/wG3sKVYrZJfs9Qnzioo=
github.com/cs3org/reva/v2 v2.17.0/go.mod h1:9hmBNVK+RSMSupWci9MQLmmj1NsJ8Bv49tqKbxMdxJY=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -1593,6 +1595,8 @@ github.com/klauspost/cpuid/v2 v2.1.0 h1:eyi1Ad2aNJMW95zcSbmGg7Cg6cq3ADwLpMAP96d8
github.com/klauspost/cpuid/v2 v2.1.0/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/kobergj/plugins/v4/store/nats-js v1.2.1-0.20231020092801-9463c820c19a h1:W+itvdTLFGLuFh+E5IzW08n2BS02cHK91qnMo7SUxbA=
github.com/kobergj/plugins/v4/store/nats-js v1.2.1-0.20231020092801-9463c820c19a/go.mod h1:wt51O2yNmgF/F7E00IYIH0awseRGqtnmjZGn6RjbZSk=
github.com/kobergj/plugins/v4/store/nats-js-kv v0.0.0-20231207143248-4d424e3ae348 h1:Czv6AW9Suj6npWd5BLZjobdD78c2RdzBeKBgkq3jYZk=
github.com/kobergj/plugins/v4/store/nats-js-kv v0.0.0-20231207143248-4d424e3ae348/go.mod h1:Goi4eJ9SrKkxE6NsAVqBVNxfQFbwb7UbyII6743ldgM=
github.com/kolo/xmlrpc v0.0.0-20200310150728-e0350524596b/go.mod h1:o03bZfuBwAXHetKXuInt4S7omeXUu62/A845kiycsSQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=

6
vendor/github.com/cornelk/hashmap/.codecov.yml generated vendored Normal file
View File

@@ -0,0 +1,6 @@
coverage:
status:
project:
default:
target: 70%
threshold: 5%

14
vendor/github.com/cornelk/hashmap/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,14 @@
*.exe
.idea
.vscode
*.iml
*.local
/*.log
*.out
*.prof
*.test
.DS_Store
*.dmp
*.db
.testCoverage

68
vendor/github.com/cornelk/hashmap/.golangci.yml generated vendored Normal file
View File

@@ -0,0 +1,68 @@
run:
deadline: 5m
linters:
enable:
- asasalint # check for pass []any as any in variadic func(...any)
- asciicheck # Simple linter to check that your code does not contain non-ASCII identifiers
- bidichk # Checks for dangerous unicode character sequences
- containedctx # detects struct contained context.Context field
- contextcheck # check the function whether use a non-inherited context
- cyclop # checks function and package cyclomatic complexity
- decorder # check declaration order and count of types, constants, variables and functions
- depguard # Go linter that checks if package imports are in a list of acceptable packages
- dogsled # Checks assignments with too many blank identifiers (e.g. x, _, _, _, := f())
- durationcheck # check for two durations multiplied together
- errcheck # checking for unchecked errors
- errname # Checks that errors are prefixed with the `Err` and error types are suffixed with the `Error`
- errorlint # finds code that will cause problems with the error wrapping scheme introduced in Go 1.13
- exportloopref # checks for pointers to enclosing loop variables
- funlen # Tool for detection of long functions
- gci # controls golang package import order and makes it always deterministic
- gocognit # Computes and checks the cognitive complexity of functions
- gocritic # Provides diagnostics that check for bugs, performance and style issues
- gocyclo # Computes and checks the cyclomatic complexity of functions
- godot # Check if comments end in a period
- goerr113 # Golang linter to check the errors handling expressions
- gosimple # Linter for Go source code that specializes in simplifying a code
- govet # reports suspicious constructs, such as Printf calls with wrong arguments
- ineffassign # Detects when assignments to existing variables are not used
- maintidx # measures the maintainability index of each function
- makezero # Finds slice declarations with non-zero initial length
- misspell # Finds commonly misspelled English words in comments
- nakedret # Finds naked returns in functions
- nestif # Reports deeply nested if statements
- nilerr # Finds the code that returns nil even if it checks that the error is not nil
- nilnil # Checks that there is no simultaneous return of `nil` error and an invalid value
- prealloc # Finds slice declarations that could potentially be preallocated
- predeclared # find code that shadows one of Go's predeclared identifiers
- revive # drop-in replacement of golint
- staticcheck # drop-in replacement of go vet
- stylecheck # Stylecheck is a replacement for golint
- tenv # detects using os.Setenv instead of t.Setenv
- thelper # checks the consistency of test helpers
- tparallel # detects inappropriate usage of t.Parallel()
- typecheck # parses and type-checks Go code
- unconvert # Remove unnecessary type conversions
- unparam # Reports unused function parameters
- unused # Checks Go code for unused constants, variables, functions and types
- usestdlibvars # detect the possibility to use variables/constants from the Go standard library
- wastedassign # finds wasted assignment statements
- whitespace # detects leading and trailing whitespace
linters-settings:
cyclop:
max-complexity: 15
gocritic:
disabled-checks:
- newDeref
govet:
disable:
- unsafeptr
issues:
exclude-use-default: false
exclude-rules:
- linters:
- goerr113
text: "do not define dynamic errors"

201
vendor/github.com/cornelk/hashmap/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright cornelk
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

25
vendor/github.com/cornelk/hashmap/Makefile generated vendored Normal file
View File

@@ -0,0 +1,25 @@
help: ## show help, shown by default if no target is specified
@grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
lint: ## run code linters
golangci-lint run
benchmark: ## run benchmarks
cd benchmarks && perflock go test -cpu 8 -run=^# -bench=.
benchmark-perflock: ## run benchmarks using perflock - https://github.com/aclements/perflock
cd benchmarks && perflock -governor 80% go test -count 3 -cpu 8 -run=^# -bench=.
test: ## run tests
go test -race ./...
GOARCH=386 go test ./...
test-coverage: ## run unit tests and create test coverage
go test ./... -coverprofile .testCoverage -covermode=atomic -coverpkg=./...
test-coverage-web: test-coverage ## run unit tests and show test coverage in browser
go tool cover -func .testCoverage | grep total | awk '{print "Total coverage: "$$3}'
go tool cover -html=.testCoverage
install-linters: ## install all used linters
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $$(go env GOPATH)/bin v1.49.0

88
vendor/github.com/cornelk/hashmap/README.md generated vendored Normal file
View File

@@ -0,0 +1,88 @@
# hashmap
[![Build status](https://github.com/cornelk/hashmap/actions/workflows/go.yaml/badge.svg?branch=main)](https://github.com/cornelk/hashmap/actions)
[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/cornelk/hashmap)
[![Go Report Card](https://goreportcard.com/badge/github.com/cornelk/hashmap)](https://goreportcard.com/report/github.com/cornelk/hashmap)
[![codecov](https://codecov.io/gh/cornelk/hashmap/branch/main/graph/badge.svg?token=NS5UY28V3A)](https://codecov.io/gh/cornelk/hashmap)
## Overview
A Golang lock-free thread-safe HashMap optimized for fastest read access.
It is not a general-use HashMap and currently has slow write performance for write-heavy uses.
The minimal supported Golang version is 1.19 as it makes use of Generics and the new atomic package helpers.
## Usage
Example uint8 key map uses:
```
m := New[uint8, int]()
m.Set(1, 123)
value, ok := m.Get(1)
```
Example string key map uses:
```
m := New[string, int]()
m.Set("amount", 123)
value, ok := m.Get("amount")
```
Using the map to count URL requests:
```
m := New[string, *int64]()
var i int64
counter, _ := m.GetOrInsert("api/123", &i)
atomic.AddInt64(counter, 1) // increase counter
...
count := atomic.LoadInt64(counter) // read counter
```
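Beyond `Get`, `Set` and `GetOrInsert`, the vendored source in this update also provides `Range`, `Del` and `Len`. The following self-contained sketch (not part of the upstream README) shows them together:
```
package main

import (
	"fmt"

	"github.com/cornelk/hashmap"
)

func main() {
	m := hashmap.New[string, int]()
	m.Set("a", 1)
	m.Set("b", 2)

	// Range iterates over all entries; returning false stops the iteration early.
	m.Range(func(key string, value int) bool {
		fmt.Println(key, value)
		return true
	})

	existed := m.Del("a") // Del reports whether the key was present
	fmt.Println(existed, m.Len())
}
```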
## Benchmarks
Reading from the hash map for numeric key types in a thread-safe way is faster than reading from a standard Golang map
in an unsafe way and four times faster than Golang's `sync.Map`:
```
BenchmarkReadHashMapUint-8 1774460 677.3 ns/op
BenchmarkReadHaxMapUint-8 1758708 679.0 ns/op
BenchmarkReadGoMapUintUnsafe-8 1497732 790.9 ns/op
BenchmarkReadGoMapUintMutex-8 41562 28672 ns/op
BenchmarkReadGoSyncMapUint-8 454401 2646 ns/op
```
Reading from the map while writes are happening:
```
BenchmarkReadHashMapWithWritesUint-8 1388560 859.1 ns/op
BenchmarkReadHaxMapWithWritesUint-8 1306671 914.5 ns/op
BenchmarkReadGoSyncMapWithWritesUint-8 335732 3113 ns/op
```
Write performance without any concurrent reads:
```
BenchmarkWriteHashMapUint-8 54756 21977 ns/op
BenchmarkWriteGoMapMutexUint-8 83907 14827 ns/op
BenchmarkWriteGoSyncMapUint-8 16983 70305 ns/op
```
The benchmarks were run with Golang 1.19.0 on Linux and AMD64 using `make benchmark`.
## Technical details
* Technical design decisions have been made based on benchmarks that are stored in an external repository:
[go-benchmark](https://github.com/cornelk/go-benchmark)
* The library uses a sorted linked list and a slice as an index into that list (see the sketch after this list).
* The Get() function contains helper functions that have been inlined manually until the Golang compiler inlines them automatically.
* It optimizes the slice access by circumventing the Golang size check when reading from the slice.
Once a slice is allocated, the size of it does not change.
The library limits the index into the slice, therefore the Golang size check is obsolete.
When the slice reaches a defined fill rate, a bigger slice is allocated and all keys are recalculated and transferred into the new slice.
* For hashing, specialized xxhash implementations are used that match the size of the key type where available
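To make the slice-index design above concrete, here is a small standalone sketch (assumed illustrative code, not taken from the library) of the bucket computation that `store.item()` and `grow()` in the vendored source perform: the store keeps `keyShifts = pointer size in bits − log2(len(index))`, so the top bits of a key hash select the slot.
```
package main

import (
	"fmt"
	"strconv"
)

// bucketFor mirrors the index computation used by the hashmap's store:
// the top bits of the key hash select a slot in the index slice.
func bucketFor(hash, indexSize uintptr) uintptr {
	var log2 uintptr
	for p := uintptr(1); p < indexSize; p += p {
		log2++
	}
	keyShifts := uintptr(strconv.IntSize) - log2
	return hash >> keyShifts
}

func main() {
	// With an index of 8 slots on a 64-bit platform, keyShifts is 61,
	// so only the top 3 bits of the hash decide the bucket.
	fmt.Println(bucketFor(0xF000000000000000, 8)) // 7
	fmt.Println(bucketFor(0x1000000000000000, 8)) // 0
}
```
Once the fill rate exceeds the configured maximum (50% in defines.go), grow() allocates a larger index, recomputes keyShifts and re-fills the slots from the sorted linked list.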

12
vendor/github.com/cornelk/hashmap/defines.go generated vendored Normal file
View File

@@ -0,0 +1,12 @@
package hashmap
// defaultSize is the default size for a map.
const defaultSize = 8
// maxFillRate is the maximum fill rate for the slice before a resize will happen.
const maxFillRate = 50
// support all numeric and string types and aliases of those.
type hashable interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr | ~float32 | ~float64 | ~string
}

348
vendor/github.com/cornelk/hashmap/hashmap.go generated vendored Normal file
View File

@@ -0,0 +1,348 @@
// Package hashmap provides a lock-free and thread-safe HashMap.
package hashmap
import (
"bytes"
"fmt"
"reflect"
"strconv"
"sync/atomic"
"unsafe"
)
// Map implements a read optimized hash map.
type Map[Key hashable, Value any] struct {
hasher func(Key) uintptr
store atomic.Pointer[store[Key, Value]] // pointer to a map instance that gets replaced if the map resizes
linkedList *List[Key, Value] // key sorted linked list of elements
// resizing marks a resizing operation in progress.
// this is using uintptr instead of atomic.Bool to avoid using 32 bit int on 64 bit systems
resizing atomic.Uintptr
}
// New returns a new map instance.
func New[Key hashable, Value any]() *Map[Key, Value] {
return NewSized[Key, Value](defaultSize)
}
// NewSized returns a new map instance with a specific initialization size.
func NewSized[Key hashable, Value any](size uintptr) *Map[Key, Value] {
m := &Map[Key, Value]{}
m.allocate(size)
m.setDefaultHasher()
return m
}
// SetHasher sets a custom hasher.
func (m *Map[Key, Value]) SetHasher(hasher func(Key) uintptr) {
m.hasher = hasher
}
// Len returns the number of elements within the map.
func (m *Map[Key, Value]) Len() int {
return m.linkedList.Len()
}
// Get retrieves an element from the map under given hash key.
func (m *Map[Key, Value]) Get(key Key) (Value, bool) {
hash := m.hasher(key)
for element := m.store.Load().item(hash); element != nil; element = element.Next() {
if element.keyHash == hash && element.key == key {
return element.Value(), true
}
if element.keyHash > hash {
return *new(Value), false
}
}
return *new(Value), false
}
// GetOrInsert returns the existing value for the key if present.
// Otherwise, it stores and returns the given value.
// The returned bool is true if the value was loaded, false if stored.
func (m *Map[Key, Value]) GetOrInsert(key Key, value Value) (Value, bool) {
hash := m.hasher(key)
var newElement *ListElement[Key, Value]
for {
for element := m.store.Load().item(hash); element != nil; element = element.Next() {
if element.keyHash == hash && element.key == key {
actual := element.Value()
return actual, true
}
if element.keyHash > hash {
break
}
}
if newElement == nil { // allocate only once
newElement = &ListElement[Key, Value]{
key: key,
keyHash: hash,
}
newElement.value.Store(&value)
}
if m.insertElement(newElement, hash, key, value) {
return value, false
}
}
}
// FillRate returns the fill rate of the map as a percentage integer.
func (m *Map[Key, Value]) FillRate() int {
store := m.store.Load()
count := int(store.count.Load())
l := len(store.index)
return (count * 100) / l
}
// Del deletes the key from the map and returns whether the key was deleted.
func (m *Map[Key, Value]) Del(key Key) bool {
hash := m.hasher(key)
store := m.store.Load()
element := store.item(hash)
for ; element != nil; element = element.Next() {
if element.keyHash == hash && element.key == key {
m.deleteElement(element)
m.linkedList.Delete(element)
return true
}
if element.keyHash > hash {
return false
}
}
return false
}
// Insert sets the value under the specified key to the map if it does not exist yet.
// If a resizing operation is happening concurrently while calling Insert, the item might show up in the map
// after the resize operation is finished.
// Returns true if the item was inserted or false if it existed.
func (m *Map[Key, Value]) Insert(key Key, value Value) bool {
hash := m.hasher(key)
var (
existed, inserted bool
element *ListElement[Key, Value]
)
for {
store := m.store.Load()
searchStart := store.item(hash)
if !inserted { // if retrying after insert during grow, do not add to list again
element, existed, inserted = m.linkedList.Add(searchStart, hash, key, value)
if existed {
return false
}
if !inserted {
continue // a concurrent add did interfere, try again
}
}
count := store.addItem(element)
currentStore := m.store.Load()
if store != currentStore { // retry insert in case of insert during grow
continue
}
if m.isResizeNeeded(store, count) && m.resizing.CompareAndSwap(0, 1) {
go m.grow(0, true)
}
return true
}
}
// Set sets the value under the specified key to the map. An existing item for this key will be overwritten.
// If a resizing operation is happening concurrently while calling Set, the item might show up in the map
// after the resize operation is finished.
func (m *Map[Key, Value]) Set(key Key, value Value) {
hash := m.hasher(key)
for {
store := m.store.Load()
searchStart := store.item(hash)
element, added := m.linkedList.AddOrUpdate(searchStart, hash, key, value)
if !added {
continue // a concurrent add did interfere, try again
}
count := store.addItem(element)
currentStore := m.store.Load()
if store != currentStore { // retry insert in case of insert during grow
continue
}
if m.isResizeNeeded(store, count) && m.resizing.CompareAndSwap(0, 1) {
go m.grow(0, true)
}
return
}
}
// Grow resizes the map to a new size, the size gets rounded up to next power of 2.
// To double the size of the map use newSize 0.
// This function returns immediately, the resize operation is done in a goroutine.
// No resizing is done in case of another resize operation already being in progress.
func (m *Map[Key, Value]) Grow(newSize uintptr) {
if m.resizing.CompareAndSwap(0, 1) {
go m.grow(newSize, true)
}
}
// String returns the map as a string, only hashed keys are printed.
func (m *Map[Key, Value]) String() string {
buffer := bytes.NewBufferString("")
buffer.WriteRune('[')
first := m.linkedList.First()
item := first
for item != nil {
if item != first {
buffer.WriteRune(',')
}
fmt.Fprint(buffer, item.keyHash)
item = item.Next()
}
buffer.WriteRune(']')
return buffer.String()
}
// Range calls f sequentially for each key and value present in the map.
// If f returns false, range stops the iteration.
func (m *Map[Key, Value]) Range(f func(Key, Value) bool) {
item := m.linkedList.First()
for item != nil {
value := item.Value()
if !f(item.key, value) {
return
}
item = item.Next()
}
}
func (m *Map[Key, Value]) allocate(newSize uintptr) {
m.linkedList = NewList[Key, Value]()
if m.resizing.CompareAndSwap(0, 1) {
m.grow(newSize, false)
}
}
func (m *Map[Key, Value]) isResizeNeeded(store *store[Key, Value], count uintptr) bool {
l := uintptr(len(store.index)) // l can't be 0 as it gets initialized in New()
fillRate := (count * 100) / l
return fillRate > maxFillRate
}
func (m *Map[Key, Value]) insertElement(element *ListElement[Key, Value], hash uintptr, key Key, value Value) bool {
var existed, inserted bool
for {
store := m.store.Load()
searchStart := store.item(element.keyHash)
if !inserted { // if retrying after insert during grow, do not add to list again
_, existed, inserted = m.linkedList.Add(searchStart, hash, key, value)
if existed {
return false
}
if !inserted {
continue // a concurrent add did interfere, try again
}
}
count := store.addItem(element)
currentStore := m.store.Load()
if store != currentStore { // retry insert in case of insert during grow
continue
}
if m.isResizeNeeded(store, count) && m.resizing.CompareAndSwap(0, 1) {
go m.grow(0, true)
}
return true
}
}
// deleteElement deletes an element from index.
func (m *Map[Key, Value]) deleteElement(element *ListElement[Key, Value]) {
for {
store := m.store.Load()
index := element.keyHash >> store.keyShifts
ptr := (*unsafe.Pointer)(unsafe.Pointer(uintptr(store.array) + index*intSizeBytes))
next := element.Next()
if next != nil && element.keyHash>>store.keyShifts != index {
next = nil // do not set index to next item if it's not the same slice index
}
atomic.CompareAndSwapPointer(ptr, unsafe.Pointer(element), unsafe.Pointer(next))
currentStore := m.store.Load()
if store == currentStore { // check that no resize happened
break
}
}
}
func (m *Map[Key, Value]) grow(newSize uintptr, loop bool) {
defer m.resizing.CompareAndSwap(1, 0)
for {
currentStore := m.store.Load()
if newSize == 0 {
newSize = uintptr(len(currentStore.index)) << 1
} else {
newSize = roundUpPower2(newSize)
}
index := make([]*ListElement[Key, Value], newSize)
header := (*reflect.SliceHeader)(unsafe.Pointer(&index))
newStore := &store[Key, Value]{
keyShifts: strconv.IntSize - log2(newSize),
array: unsafe.Pointer(header.Data), // use address of slice data storage
index: index,
}
m.fillIndexItems(newStore) // initialize new index slice with longer keys
m.store.Store(newStore)
m.fillIndexItems(newStore) // make sure that the new index is up-to-date with the current state of the linked list
if !loop {
return
}
// check if a new resize needs to be done already
count := uintptr(m.Len())
if !m.isResizeNeeded(newStore, count) {
return
}
newSize = 0 // 0 means double the current size
}
}
func (m *Map[Key, Value]) fillIndexItems(store *store[Key, Value]) {
first := m.linkedList.First()
item := first
lastIndex := uintptr(0)
for item != nil {
index := item.keyHash >> store.keyShifts
if item == first || index != lastIndex { // store item with smallest hash key for every index
store.addItem(item)
lastIndex = index
}
item = item.Next()
}
}

127
vendor/github.com/cornelk/hashmap/list.go generated vendored Normal file
View File

@@ -0,0 +1,127 @@
package hashmap
import (
"sync/atomic"
)
// List is a sorted linked list.
type List[Key comparable, Value any] struct {
count atomic.Uintptr
head *ListElement[Key, Value]
}
// NewList returns an initialized list.
func NewList[Key comparable, Value any]() *List[Key, Value] {
return &List[Key, Value]{
head: &ListElement[Key, Value]{},
}
}
// Len returns the number of elements within the list.
func (l *List[Key, Value]) Len() int {
return int(l.count.Load())
}
// First returns the first item of the list.
func (l *List[Key, Value]) First() *ListElement[Key, Value] {
return l.head.Next()
}
// Add adds an item to the list and returns false if an item for the hash existed.
// searchStart = nil will start to search at the head item.
func (l *List[Key, Value]) Add(searchStart *ListElement[Key, Value], hash uintptr, key Key, value Value) (element *ListElement[Key, Value], existed bool, inserted bool) {
left, found, right := l.search(searchStart, hash, key)
if found != nil { // existing item found
return found, true, false
}
element = &ListElement[Key, Value]{
key: key,
keyHash: hash,
}
element.value.Store(&value)
return element, false, l.insertAt(element, left, right)
}
// AddOrUpdate adds or updates an item to the list.
func (l *List[Key, Value]) AddOrUpdate(searchStart *ListElement[Key, Value], hash uintptr, key Key, value Value) (*ListElement[Key, Value], bool) {
left, found, right := l.search(searchStart, hash, key)
if found != nil { // existing item found
found.value.Store(&value) // update the value
return found, true
}
element := &ListElement[Key, Value]{
key: key,
keyHash: hash,
}
element.value.Store(&value)
return element, l.insertAt(element, left, right)
}
// Delete deletes an element from the list.
func (l *List[Key, Value]) Delete(element *ListElement[Key, Value]) {
if !element.deleted.CompareAndSwap(0, 1) {
return // concurrent delete of the item is in progress
}
right := element.Next()
// point head to next element if element to delete was head
l.head.next.CompareAndSwap(element, right)
// element left from the deleted element will replace its next
// pointer to the next valid element on call of Next().
l.count.Add(^uintptr(0)) // decrease counter
}
func (l *List[Key, Value]) search(searchStart *ListElement[Key, Value], hash uintptr, key Key) (left, found, right *ListElement[Key, Value]) {
if searchStart != nil && hash < searchStart.keyHash { // key would remain left from item?
searchStart = nil // start search at head
}
if searchStart == nil { // start search at head?
left = l.head
found = left.Next()
if found == nil { // no items beside head?
return nil, nil, nil
}
} else {
found = searchStart
}
for {
if hash == found.keyHash && key == found.key { // key hash already exists, compare keys
return nil, found, nil
}
if hash < found.keyHash { // new item needs to be inserted before the found value
if l.head == left {
return nil, nil, found
}
return left, nil, found
}
// go to next element in sorted linked list
left = found
found = left.Next()
if found == nil { // no more items on the right
return left, nil, nil
}
}
}
func (l *List[Key, Value]) insertAt(element, left, right *ListElement[Key, Value]) bool {
if left == nil {
left = l.head
}
element.next.Store(right)
if !left.next.CompareAndSwap(right, element) {
return false // item was modified concurrently
}
l.count.Add(1)
return true
}

47
vendor/github.com/cornelk/hashmap/list_element.go generated vendored Normal file
View File

@@ -0,0 +1,47 @@
package hashmap
import (
"sync/atomic"
)
// ListElement is an element of a list.
type ListElement[Key comparable, Value any] struct {
keyHash uintptr
// deleted marks the item as deleting or deleted
// this is using uintptr instead of atomic.Bool to avoid using 32 bit int on 64 bit systems
deleted atomic.Uintptr
// next points to the next element in the list.
// it is nil for the last item in the list.
next atomic.Pointer[ListElement[Key, Value]]
value atomic.Pointer[Value]
key Key
}
// Value returns the value of the list item.
func (e *ListElement[Key, Value]) Value() Value {
return *e.value.Load()
}
// Next returns the item on the right.
func (e *ListElement[Key, Value]) Next() *ListElement[Key, Value] {
for next := e.next.Load(); next != nil; {
// if the next item is not deleted, return it
if next.deleted.Load() == 0 {
return next
}
// point current elements next to the following item
// after the deleted one until a non deleted or list end is found
following := next.Next()
if e.next.CompareAndSwap(next, following) {
next = following
} else {
next = next.Next()
}
}
return nil // end of the list reached
}

45
vendor/github.com/cornelk/hashmap/store.go generated vendored Normal file
View File

@@ -0,0 +1,45 @@
package hashmap
import (
"sync/atomic"
"unsafe"
)
type store[Key comparable, Value any] struct {
keyShifts uintptr // Pointer size - log2 of array size, to be used as index in the data array
count atomic.Uintptr // count of filled elements in the slice
array unsafe.Pointer // pointer to slice data array
index []*ListElement[Key, Value] // storage for the slice for the garbage collector to not clean it up
}
// item returns the item for the given hashed key.
func (s *store[Key, Value]) item(hashedKey uintptr) *ListElement[Key, Value] {
index := hashedKey >> s.keyShifts
ptr := (*unsafe.Pointer)(unsafe.Pointer(uintptr(s.array) + index*intSizeBytes))
item := (*ListElement[Key, Value])(atomic.LoadPointer(ptr))
return item
}
// adds an item to the index if needed and returns the new item counter if it changed, otherwise 0.
func (s *store[Key, Value]) addItem(item *ListElement[Key, Value]) uintptr {
index := item.keyHash >> s.keyShifts
ptr := (*unsafe.Pointer)(unsafe.Pointer(uintptr(s.array) + index*intSizeBytes))
for { // loop until the smallest key hash is in the index
element := (*ListElement[Key, Value])(atomic.LoadPointer(ptr)) // get the current item in the index
if element == nil { // no item yet at this index
if atomic.CompareAndSwapPointer(ptr, nil, unsafe.Pointer(item)) {
return s.count.Add(1)
}
continue // a new item was inserted concurrently, retry
}
if item.keyHash < element.keyHash {
// the new item is the smallest for this index?
if !atomic.CompareAndSwapPointer(ptr, unsafe.Pointer(element), unsafe.Pointer(item)) {
continue // a new item was inserted concurrently, retry
}
}
return 0
}
}

32
vendor/github.com/cornelk/hashmap/util.go generated vendored Normal file
View File

@@ -0,0 +1,32 @@
package hashmap
import (
"strconv"
)
const (
// intSizeBytes is the size in byte of an int or uint value.
intSizeBytes = strconv.IntSize >> 3
)
// roundUpPower2 rounds a number to the next power of 2.
func roundUpPower2(i uintptr) uintptr {
i--
i |= i >> 1
i |= i >> 2
i |= i >> 4
i |= i >> 8
i |= i >> 16
i |= i >> 32
i++
return i
}
// log2 computes the binary logarithm of x, rounded up to the next integer.
func log2(i uintptr) uintptr {
var n, p uintptr
for p = 1; p < i; p += p {
n++
}
return n
}

258
vendor/github.com/cornelk/hashmap/util_hash.go generated vendored Normal file
View File

@@ -0,0 +1,258 @@
package hashmap
import (
"encoding/binary"
"fmt"
"math/bits"
"reflect"
"unsafe"
)
const (
prime1 uint64 = 11400714785074694791
prime2 uint64 = 14029467366897019727
prime3 uint64 = 1609587929392839161
prime4 uint64 = 9650029242287828579
prime5 uint64 = 2870177450012600261
)
var prime1v = prime1
/*
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
// setDefaultHasher sets the default hasher depending on the key type.
// Inlines hashing as anonymous functions for performance improvements, other options like
// returning an anonymous functions from another function turned out to not be as performant.
func (m *Map[Key, Value]) setDefaultHasher() {
var key Key
kind := reflect.ValueOf(&key).Elem().Type().Kind()
switch kind {
case reflect.Int, reflect.Uint, reflect.Uintptr:
switch intSizeBytes {
case 2:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashWord))
case 4:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashDword))
case 8:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashQword))
default:
panic(fmt.Errorf("unsupported integer byte size %d", intSizeBytes))
}
case reflect.Int8, reflect.Uint8:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashByte))
case reflect.Int16, reflect.Uint16:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashWord))
case reflect.Int32, reflect.Uint32:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashDword))
case reflect.Int64, reflect.Uint64:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashQword))
case reflect.Float32:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashFloat32))
case reflect.Float64:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashFloat64))
case reflect.String:
m.hasher = *(*func(Key) uintptr)(unsafe.Pointer(&xxHashString))
default:
panic(fmt.Errorf("unsupported key type %T of kind %v", key, kind))
}
}
// Specialized xxhash hash functions, optimized for the bit size of the key where available,
// for all supported types beside string.
var xxHashByte = func(key uint8) uintptr {
h := prime5 + 1
h ^= uint64(key) * prime5
h = bits.RotateLeft64(h, 11) * prime1
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashWord = func(key uint16) uintptr {
h := prime5 + 2
h ^= (uint64(key) & 0xff) * prime5
h = bits.RotateLeft64(h, 11) * prime1
h ^= ((uint64(key) >> 8) & 0xff) * prime5
h = bits.RotateLeft64(h, 11) * prime1
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashDword = func(key uint32) uintptr {
h := prime5 + 4
h ^= uint64(key) * prime1
h = bits.RotateLeft64(h, 23)*prime2 + prime3
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashFloat32 = func(key float32) uintptr {
h := prime5 + 4
h ^= uint64(key) * prime1
h = bits.RotateLeft64(h, 23)*prime2 + prime3
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashFloat64 = func(key float64) uintptr {
h := prime5 + 4
h ^= uint64(key) * prime1
h = bits.RotateLeft64(h, 23)*prime2 + prime3
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashQword = func(key uint64) uintptr {
k1 := key * prime2
k1 = bits.RotateLeft64(k1, 31)
k1 *= prime1
h := (prime5 + 8) ^ k1
h = bits.RotateLeft64(h, 27)*prime1 + prime4
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
var xxHashString = func(key string) uintptr {
sh := (*reflect.StringHeader)(unsafe.Pointer(&key))
bh := reflect.SliceHeader{
Data: sh.Data,
Len: sh.Len,
Cap: sh.Len, // cap needs to be set, otherwise xxhash fails on ARM Macs
}
b := *(*[]byte)(unsafe.Pointer(&bh))
var h uint64
if sh.Len >= 32 {
v1 := prime1v + prime2
v2 := prime2
v3 := uint64(0)
v4 := -prime1v
for len(b) >= 32 {
v1 = round(v1, binary.LittleEndian.Uint64(b[0:8:len(b)]))
v2 = round(v2, binary.LittleEndian.Uint64(b[8:16:len(b)]))
v3 = round(v3, binary.LittleEndian.Uint64(b[16:24:len(b)]))
v4 = round(v4, binary.LittleEndian.Uint64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = prime5
}
h += uint64(sh.Len)
i, end := 0, len(b)
for ; i+8 <= end; i += 8 {
k1 := round(0, binary.LittleEndian.Uint64(b[i:i+8:len(b)]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(binary.LittleEndian.Uint32(b[i:i+4:len(b)])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for ; i < end; i++ {
h ^= uint64(b[i]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return uintptr(h)
}
func round(acc, input uint64) uint64 {
acc += input * prime2
acc = rol31(acc)
acc *= prime1
return acc
}
func mergeRound(acc, val uint64) uint64 {
val = round(0, val)
acc ^= val
acc = acc*prime1 + prime4
return acc
}
func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }

View File

@@ -151,12 +151,11 @@ func (b MessagePackBackend) saveAttributes(ctx context.Context, path string, set
_, subspan := tracer.Start(ctx, "lockedfile.OpenFile")
f, err = lockedfile.OpenFile(lockPath, os.O_RDWR|os.O_CREATE, 0600)
subspan.End()
if err != nil {
return err
}
defer f.Close()
}
if err != nil {
return err
}
// Read current state
_, subspan := tracer.Start(ctx, "os.ReadFile")
var msgBytes []byte

View File

@@ -97,6 +97,8 @@ func ServiceAccountPermissions() provider.ResourcePermissions {
RemoveGrant: true, // for share expiry
ListRecycle: true, // for purge-trash-bin command
PurgeRecycle: true, // for purge-trash-bin command
RestoreRecycleItem: true, // for cli restore command
Delete: true, // for cli restore command with replace option
}
}

View File

@@ -26,6 +26,7 @@ import (
"github.com/cs3org/reva/v2/pkg/store/etcd"
"github.com/cs3org/reva/v2/pkg/store/memory"
natsjs "github.com/go-micro/plugins/v4/store/nats-js"
natsjskv "github.com/go-micro/plugins/v4/store/nats-js-kv"
"github.com/go-micro/plugins/v4/store/redis"
redisopts "github.com/go-redis/redis/v8"
"github.com/nats-io/nats.go"
@@ -50,6 +51,8 @@ const (
TypeOCMem = "ocmem"
// TypeNatsJS represents nats-js stores
TypeNatsJS = "nats-js"
// TypeNatsJSKV represents nats-js-kv stores
TypeNatsJSKV = "nats-js-kv"
)
// Create initializes a new store
@@ -126,6 +129,16 @@ func Create(opts ...microstore.Option) microstore.Store {
natsjs.NatsOptions(natsOptions), // always pass in properly initialized default nats options
natsjs.DefaultTTL(ttl))...,
) // TODO test with ocis nats
case TypeNatsJSKV:
// NOTE: nats needs a DefaultTTL option as it does not support per Write TTL ...
ttl, _ := options.Context.Value(ttlContextKey{}).(time.Duration)
natsOptions := nats.GetDefaultOptions()
natsOptions.Name = "TODO" // we can pass in the service name to allow identifying the client, but that requires adding a custom context option
return natsjskv.NewStore(
append(opts,
natsjs.NatsOptions(natsOptions), // always pass in properly initialized default nats options
natsjs.DefaultTTL(ttl))...,
)
case TypeMemory, "mem", "": // allow existing short form and use as default
return microstore.NewMemoryStore(opts...)
default:

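For context (this snippet is not part of the diff): whichever backend the type switch above selects — memory, nats-js, or the new nats-js-kv — the caller gets back a go-micro Store and talks to it through the same Write/Read/Delete interface. A minimal sketch against the in-memory backend, assuming the usual go-micro v4 import path:
```
package main

import (
	"fmt"

	microstore "go-micro.dev/v4/store"
)

func main() {
	// The reva store registry returns a microstore.Store; the in-memory
	// backend is used here so the snippet runs without a NATS server.
	s := microstore.NewMemoryStore()

	if err := s.Write(&microstore.Record{Key: "greeting", Value: []byte("hello")}); err != nil {
		panic(err)
	}

	recs, err := s.Read("greeting")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(recs[0].Value)) // hello
}
```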
View File

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2015 Asim Aslam.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,79 @@
# NATS JetStream Key Value Store Plugin
This plugin uses the NATS JetStream [KeyValue Store](https://docs.nats.io/nats-concepts/jetstream/key-value-store) to implement the Go-Micro store interface.
You can use this plugin like any other store plugin.
To start a local NATS JetStream server run `nats-server -js`.
To manually create a new storage object call:
```go
natsjskv.NewStore(opts ...store.Option)
```
The Go-Micro store interface uses databases and tables to store keys. These translate
to buckets (key value stores) and key prefixes. If no database (bucket name) is provided, "default" will be used.
You can call `Write` with any arbitrary database name, and if a bucket with that name does not exist yet,
it will be automatically created.
If a table name is provided, it will use it to prefix the key as `<table>_<key>`.
To delete a bucket and all the key/value pairs in it, pass the `DeleteBucket` option to the `Delete`
method; the key name will then be interpreted as a bucket name, and the bucket will be deleted.
In addition to the default store options, a few NATS-specific options are available:
```go
// NatsOptions accepts nats.Options
NatsOptions(opts nats.Options)
// JetStreamOptions accepts multiple nats.JSOpt
JetStreamOptions(opts ...nats.JSOpt)
// KeyValueOptions accepts multiple nats.KeyValueConfig
// This will create buckets with the provided configs at initialization.
//
// type KeyValueConfig struct {
// Bucket string
// Description string
// MaxValueSize int32
// History uint8
// TTL time.Duration
// MaxBytes int64
// Storage StorageType
// Replicas int
// Placement *Placement
// RePublish *RePublish
// Mirror *StreamSource
// Sources []*StreamSource
// }
KeyValueOptions(cfg ...*nats.KeyValueConfig)
// DefaultTTL sets the default TTL to use for new buckets
// By default no TTL is set.
//
// TTL ON INDIVIDUAL WRITE CALLS IS NOT SUPPORTED, only bucket wide TTL.
// Either set a default TTL with this option or provide bucket-specific
// options with KeyValueOptions.
DefaultTTL(ttl time.Duration)
// DefaultMemory sets the default storage type to memory only.
//
// The default is file storage, persisting storage between service restarts.
// Be aware that the default storage location of NATS is the /tmp dir,
// which won't persist across reboots.
DefaultMemory()
// DefaultDescription sets the default description to use when creating new
// buckets. The default is "Store managed by go-micro"
DefaultDescription(text string)
// DeleteBucket will use the key passed to Delete as a bucket (database) name,
// and delete the bucket.
// This option should not be combined with the store.DeleteFrom option, as
// that will overwrite the delete action.
DeleteBucket()
```
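To make that concrete, a minimal hedged usage sketch (it assumes a local `nats-server -js` on the default port; the bucket, table, and key names below are arbitrary):

```go
package main

import (
	"fmt"

	natsjskv "github.com/go-micro/plugins/v4/store/nats-js-kv"
	"go-micro.dev/v4/store"
)

func main() {
	// Connects lazily on first use; the "cache" bucket is created if missing.
	s := natsjskv.NewStore(store.Database("cache"), store.Table("users"))
	defer s.Close()

	// Keys are stored as "<table>_<key>", base32-encoded, in the "cache" bucket.
	if err := s.Write(&store.Record{Key: "alice", Value: []byte(`{"id":1}`)}); err != nil {
		panic(err)
	}

	recs, err := s.Read("alice") // safe to index: the key was just written
	if err != nil {
		panic(err)
	}
	fmt.Println(string(recs[0].Value))
}
```

Under the hood the record lands in the `cache` bucket under the base32-encoded form of `users_alice`, which is the key mapping implemented in `keys.go` further down.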

View File

@@ -0,0 +1,18 @@
package natsjskv
import (
"context"
"go-micro.dev/v4/store"
)
// setStoreOption returns a function to setup a context with given value.
func setStoreOption(k, v interface{}) store.Option {
return func(o *store.Options) {
if o.Context == nil {
o.Context = context.Background()
}
o.Context = context.WithValue(o.Context, k, v)
}
}

View File

@@ -0,0 +1,109 @@
package natsjskv
import (
"encoding/base32"
"strings"
)
// NatsKey is a convenience function to create a key for the nats kv store.
func NatsKey(table, microkey string) string {
return NewKey(table, microkey, "").NatsKey()
}
// MicroKey is a convenience function to create a key for the micro interface.
func MicroKey(table, natskey string) string {
return NewKey(table, "", natskey).MicroKey()
}
// MicroKeyFilter is a convenience function to create a key for the micro interface.
// It returns false if the key does not match the table, prefix or suffix.
func MicroKeyFilter(table, natskey string, prefix, suffix string) (string, bool) {
k := NewKey(table, "", natskey)
return k.MicroKey(), k.Check(table, prefix, suffix)
}
// Key represents a key in the store.
// They are used to convert nats keys (base32 encoded) to micro keys (plain text - no table prefix) and vice versa.
type Key struct {
// Plain is the plain key as requested by the go-micro interface.
Plain string
// Full is the full key including the table prefix.
Full string
// Encoded is the base32 encoded key as used by the nats kv store.
Encoded string
}
// NewKey creates a new key. Either plain or encoded must be set.
func NewKey(table string, plain, encoded string) *Key {
k := &Key{
Plain: plain,
Encoded: encoded,
}
switch {
case k.Plain != "":
k.Full = getKey(k.Plain, table)
k.Encoded = encode(k.Full)
case k.Encoded != "":
k.Full = decode(k.Encoded)
k.Plain = trimKey(k.Full, table)
}
return k
}
// NatsKey returns a key the nats kv store can work with.
func (k *Key) NatsKey() string {
return k.Encoded
}
// MicroKey returns a key the micro interface can work with.
func (k *Key) MicroKey() string {
return k.Plain
}
// Check returns false if the key does not match the table, prefix or suffix.
func (k *Key) Check(table, prefix, suffix string) bool {
if table != "" && k.Full != getKey(k.Plain, table) {
return false
}
if prefix != "" && !strings.HasPrefix(k.Plain, prefix) {
return false
}
if suffix != "" && !strings.HasSuffix(k.Plain, suffix) {
return false
}
return true
}
func encode(s string) string {
return base32.StdEncoding.EncodeToString([]byte(s))
}
func decode(s string) string {
b, err := base32.StdEncoding.DecodeString(s)
if err != nil {
return s
}
return string(b)
}
func getKey(key, table string) string {
if table != "" {
return table + "_" + key
}
return key
}
func trimKey(key, table string) string {
if table != "" {
return strings.TrimPrefix(key, table+"_")
}
return key
}
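
A short, hedged round-trip through the exported helpers above, showing how a micro key is mapped to a nats key and back (the table and key names are arbitrary):

```go
package main

import (
	"fmt"

	natsjskv "github.com/go-micro/plugins/v4/store/nats-js-kv"
)

func main() {
	// Micro key "alice" in table "users" becomes the full key "users_alice";
	// the nats-side key is its base32 encoding.
	k := natsjskv.NewKey("users", "alice", "")
	fmt.Println(k.Full)      // "users_alice"
	fmt.Println(k.NatsKey()) // base32-encoded form of "users_alice"

	// Going the other way: reconstruct the plain micro key from a nats key.
	back := natsjskv.NewKey("users", "", k.NatsKey())
	fmt.Println(back.MicroKey()) // "alice"
}
```

Base32 keeps the stored keys within the restricted character set NATS allows for KV keys, which is presumably why the test fixtures below include a key made of otherwise forbidden characters.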

View File

@@ -0,0 +1,479 @@
// Package natsjskv is a go-micro store plugin for NATS JetStream Key-Value store.
package natsjskv
import (
"context"
"encoding/json"
"sync"
"time"
"github.com/cornelk/hashmap"
"github.com/nats-io/nats.go"
"github.com/pkg/errors"
"go-micro.dev/v4/store"
"go-micro.dev/v4/util/cmd"
)
var (
// ErrBucketNotFound is returned when the requested bucket does not exist.
ErrBucketNotFound = errors.New("Bucket (database) not found")
)
// KeyValueEnvelope is the data structure stored in the key value store.
type KeyValueEnvelope struct {
Key string `json:"key"`
Data []byte `json:"data"`
Metadata map[string]interface{} `json:"metadata"`
}
type natsStore struct {
sync.Once
sync.RWMutex
ttl time.Duration
storageType nats.StorageType
description string
opts store.Options
nopts nats.Options
jsopts []nats.JSOpt
kvConfigs []*nats.KeyValueConfig
conn *nats.Conn
js nats.JetStreamContext
buckets *hashmap.Map[string, nats.KeyValue]
}
func init() {
cmd.DefaultStores["natsjskv"] = NewStore
}
// NewStore will create a new NATS JetStream Key-Value Store.
func NewStore(opts ...store.Option) store.Store {
options := store.Options{
Nodes: []string{},
Database: "default",
Table: "",
Context: context.Background(),
}
n := &natsStore{
description: "KeyValue storage administered by go-micro store plugin",
opts: options,
jsopts: []nats.JSOpt{},
kvConfigs: []*nats.KeyValueConfig{},
buckets: hashmap.New[string, nats.KeyValue](),
storageType: nats.FileStorage,
}
n.setOption(opts...)
return n
}
// Init initializes the store. It must perform any required setup on the
// backing storage implementation and check that it is ready for use,
// returning any errors.
func (n *natsStore) Init(opts ...store.Option) error {
n.setOption(opts...)
// Connect to NATS servers
conn, err := n.nopts.Connect()
if err != nil {
return errors.Wrap(err, "Failed to connect to NATS Server")
}
// Create JetStream context
js, err := conn.JetStream(n.jsopts...)
if err != nil {
return errors.Wrap(err, "Failed to create JetStream context")
}
n.conn = conn
n.js = js
// Create default config if no configs present
if len(n.kvConfigs) == 0 {
if _, err := n.mustGetBucketByName(n.opts.Database); err != nil {
return err
}
}
// Create kv store buckets
for _, cfg := range n.kvConfigs {
if _, err := n.mustGetBucket(cfg); err != nil {
return err
}
}
return nil
}
func (n *natsStore) setOption(opts ...store.Option) {
for _, o := range opts {
o(&n.opts)
}
n.Once.Do(func() {
n.nopts = nats.GetDefaultOptions()
})
// Extract options from context
if nopts, ok := n.opts.Context.Value(natsOptionsKey{}).(nats.Options); ok {
n.nopts = nopts
}
if jsopts, ok := n.opts.Context.Value(jsOptionsKey{}).([]nats.JSOpt); ok {
n.jsopts = append(n.jsopts, jsopts...)
}
if cfg, ok := n.opts.Context.Value(kvOptionsKey{}).([]*nats.KeyValueConfig); ok {
n.kvConfigs = append(n.kvConfigs, cfg...)
}
if ttl, ok := n.opts.Context.Value(ttlOptionsKey{}).(time.Duration); ok {
n.ttl = ttl
}
if sType, ok := n.opts.Context.Value(memoryOptionsKey{}).(nats.StorageType); ok {
n.storageType = sType
}
if text, ok := n.opts.Context.Value(descriptionOptionsKey{}).(string); ok {
n.description = text
}
// Assign store option server addresses to nats options
if len(n.opts.Nodes) > 0 {
n.nopts.Url = ""
n.nopts.Servers = n.opts.Nodes
}
if len(n.nopts.Servers) == 0 && n.nopts.Url == "" {
n.nopts.Url = nats.DefaultURL
}
}
// Options allows you to view the current options.
func (n *natsStore) Options() store.Options {
return n.opts
}
// Read takes a single key name and optional ReadOptions. It returns matching []*Record or an error.
func (n *natsStore) Read(key string, opts ...store.ReadOption) ([]*store.Record, error) {
if err := n.initConn(); err != nil {
return nil, err
}
opt := store.ReadOptions{}
for _, o := range opts {
o(&opt)
}
if opt.Database == "" {
opt.Database = n.opts.Database
}
if opt.Table == "" {
opt.Table = n.opts.Table
}
bucket, ok := n.buckets.Get(opt.Database)
if !ok {
return nil, ErrBucketNotFound
}
keys, err := n.natsKeys(bucket, opt.Table, key, opt.Prefix, opt.Suffix)
if err != nil {
return nil, err
}
records := make([]*store.Record, 0, len(keys))
for _, key := range keys {
rec, ok, err := n.getRecord(bucket, key)
if err != nil {
return nil, err
}
if ok {
records = append(records, rec)
}
}
return enforceLimits(records, opt.Limit, opt.Offset), nil
}
// Write writes a record to the store, and returns an error if the record was not written.
func (n *natsStore) Write(rec *store.Record, opts ...store.WriteOption) error {
if err := n.initConn(); err != nil {
return err
}
opt := store.WriteOptions{}
for _, o := range opts {
o(&opt)
}
if opt.Database == "" {
opt.Database = n.opts.Database
}
if opt.Table == "" {
opt.Table = n.opts.Table
}
store, err := n.mustGetBucketByName(opt.Database)
if err != nil {
return err
}
b, err := json.Marshal(KeyValueEnvelope{
Key: rec.Key,
Data: rec.Value,
Metadata: rec.Metadata,
})
if err != nil {
return errors.Wrap(err, "Failed to marshal object")
}
if _, err := store.Put(NatsKey(opt.Table, rec.Key), b); err != nil {
return errors.Wrapf(err, "Failed to store data in bucket '%s'", NatsKey(opt.Table, rec.Key))
}
return nil
}
// Delete removes the record with the corresponding key from the store.
func (n *natsStore) Delete(key string, opts ...store.DeleteOption) error {
if err := n.initConn(); err != nil {
return err
}
opt := store.DeleteOptions{}
for _, o := range opts {
o(&opt)
}
if opt.Database == "" {
opt.Database = n.opts.Database
}
if opt.Table == "" {
opt.Table = n.opts.Table
}
if opt.Table == "DELETE_BUCKET" {
n.buckets.Del(key)
if err := n.js.DeleteKeyValue(key); err != nil {
return errors.Wrap(err, "Failed to delete bucket")
}
return nil
}
store, ok := n.buckets.Get(opt.Database)
if !ok {
return ErrBucketNotFound
}
if err := store.Delete(NatsKey(opt.Table, key)); err != nil {
return errors.Wrap(err, "Failed to delete data")
}
return nil
}
// List returns any keys that match, or an empty list with no error if none matched.
func (n *natsStore) List(opts ...store.ListOption) ([]string, error) {
if err := n.initConn(); err != nil {
return nil, err
}
opt := store.ListOptions{}
for _, o := range opts {
o(&opt)
}
if opt.Database == "" {
opt.Database = n.opts.Database
}
if opt.Table == "" {
opt.Table = n.opts.Table
}
store, ok := n.buckets.Get(opt.Database)
if !ok {
return nil, ErrBucketNotFound
}
keys, err := n.microKeys(store, opt.Table, opt.Prefix, opt.Suffix)
if err != nil {
return nil, errors.Wrap(err, "Failed to list keys in bucket")
}
return enforceLimits(keys, opt.Limit, opt.Offset), nil
}
// Close the store.
func (n *natsStore) Close() error {
n.conn.Close()
return nil
}
// String returns the name of the implementation.
func (n *natsStore) String() string {
return "NATS JetStream KeyValueStore"
}
// thread safe way to initialize the connection.
func (n *natsStore) initConn() error {
if n.hasConn() {
return nil
}
n.Lock()
defer n.Unlock()
// check if conn was initialized meanwhile
if n.conn != nil {
return nil
}
return n.Init()
}
// thread safe way to check if n is initialized.
func (n *natsStore) hasConn() bool {
n.RLock()
defer n.RUnlock()
return n.conn != nil
}
// mustGetBucketByName returns the bucket with the given name, creating it with the default configuration if needed.
func (n *natsStore) mustGetBucketByName(name string) (nats.KeyValue, error) {
return n.mustGetBucket(&nats.KeyValueConfig{
Bucket: name,
Description: n.description,
TTL: n.ttl,
Storage: n.storageType,
})
}
// mustGetBucket creates a new bucket if it does not exist yet.
func (n *natsStore) mustGetBucket(kv *nats.KeyValueConfig) (nats.KeyValue, error) {
if store, ok := n.buckets.Get(kv.Bucket); ok {
return store, nil
}
store, err := n.js.KeyValue(kv.Bucket)
if err != nil {
if !errors.Is(err, nats.ErrBucketNotFound) {
return nil, errors.Wrapf(err, "Failed to get bucket (%s)", kv.Bucket)
}
store, err = n.js.CreateKeyValue(kv)
if err != nil {
return nil, errors.Wrapf(err, "Failed to create bucket (%s)", kv.Bucket)
}
}
n.buckets.Set(kv.Bucket, store)
return store, nil
}
// getRecord returns the record with the given key from the nats kv store.
func (n *natsStore) getRecord(bucket nats.KeyValue, key string) (*store.Record, bool, error) {
obj, err := bucket.Get(key)
if errors.Is(err, nats.ErrKeyNotFound) {
return nil, false, nil
} else if err != nil {
return nil, false, errors.Wrap(err, "Failed to get object from bucket")
}
var kv KeyValueEnvelope
if err := json.Unmarshal(obj.Value(), &kv); err != nil {
return nil, false, errors.Wrap(err, "Failed to unmarshal object")
}
if obj.Operation() != nats.KeyValuePut {
return nil, false, nil
}
return &store.Record{
Key: kv.Key,
Value: kv.Data,
Metadata: kv.Metadata,
}, true, nil
}
func (n *natsStore) natsKeys(bucket nats.KeyValue, table, key string, prefix, suffix bool) ([]string, error) {
if !suffix && !prefix {
return []string{NatsKey(table, key)}, nil
}
toS := func(s string, b bool) string {
if b {
return s
}
return ""
}
keys, _, err := n.getKeys(bucket, table, toS(key, prefix), toS(key, suffix))
return keys, err
}
func (n *natsStore) microKeys(bucket nats.KeyValue, table, prefix, suffix string) ([]string, error) {
_, keys, err := n.getKeys(bucket, table, prefix, suffix)
return keys, err
}
func (n *natsStore) getKeys(bucket nats.KeyValue, table string, prefix, suffix string) ([]string, []string, error) {
names, err := bucket.Keys(nats.IgnoreDeletes())
if errors.Is(err, nats.ErrKeyNotFound) {
return []string{}, []string{}, nil
} else if err != nil {
return []string{}, []string{}, errors.Wrap(err, "Failed to list objects")
}
natsKeys := make([]string, 0, len(names))
microKeys := make([]string, 0, len(names))
for _, k := range names {
mkey, ok := MicroKeyFilter(table, k, prefix, suffix)
if !ok {
continue
}
natsKeys = append(natsKeys, k)
microKeys = append(microKeys, mkey)
}
return natsKeys, microKeys, nil
}
// enforces offset and limit without causing a panic.
func enforceLimits[V any](recs []V, limit, offset uint) []V {
l := uint(len(recs))
from := offset
if from > l {
from = l
}
to := l
if limit > 0 && offset+limit < l {
to = offset + limit
}
return recs[from:to]
}
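
The generic `enforceLimits` helper clamps offset and limit so the final slice expression can never panic. A small self-contained sketch of its semantics (the helper is copied verbatim from above so the example compiles on its own):

```go
package main

import "fmt"

// enforceLimits mirrors the generic helper above: the offset is clamped to the
// slice length, and the limit only applies when it does not run past the end.
func enforceLimits[V any](recs []V, limit, offset uint) []V {
	l := uint(len(recs))
	from := offset
	if from > l {
		from = l
	}
	to := l
	if limit > 0 && offset+limit < l {
		to = offset + limit
	}
	return recs[from:to]
}

func main() {
	keys := []string{"a", "b", "c", "d"}
	fmt.Println(enforceLimits(keys, 2, 1)) // [b c]  limit 2 from offset 1
	fmt.Println(enforceLimits(keys, 0, 2)) // [c d]  limit 0 means "no limit"
	fmt.Println(enforceLimits(keys, 5, 3)) // [d]    limit past the end returns the tail
	fmt.Println(enforceLimits(keys, 2, 9)) // []     offset past the end is clamped
}
```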

View File

@@ -0,0 +1,75 @@
package natsjskv
import (
"time"
"github.com/nats-io/nats.go"
"go-micro.dev/v4/store"
)
// store.Option.
type natsOptionsKey struct{}
type jsOptionsKey struct{}
type kvOptionsKey struct{}
type ttlOptionsKey struct{}
type memoryOptionsKey struct{}
type descriptionOptionsKey struct{}
// NatsOptions accepts nats.Options.
func NatsOptions(opts nats.Options) store.Option {
return setStoreOption(natsOptionsKey{}, opts)
}
// JetStreamOptions accepts multiple nats.JSOpt.
func JetStreamOptions(opts ...nats.JSOpt) store.Option {
return setStoreOption(jsOptionsKey{}, opts)
}
// KeyValueOptions accepts multiple nats.KeyValueConfig
// This will create buckets with the provided configs at initialization.
func KeyValueOptions(cfg ...*nats.KeyValueConfig) store.Option {
return setStoreOption(kvOptionsKey{}, cfg)
}
// DefaultTTL sets the default TTL to use for new buckets.
//
// By default no TTL is set.
//
// TTL ON INDIVIDUAL WRITE CALLS IS NOT SUPPORTED, only bucket wide TTL.
// Either set a default TTL with this option or provide bucket-specific
// options with KeyValueOptions.
func DefaultTTL(ttl time.Duration) store.Option {
return setStoreOption(ttlOptionsKey{}, ttl)
}
// DefaultMemory sets the default storage type to memory only.
//
// The default is file storage, persisting storage between service restarts.
// Be aware that the default storage location of NATS is the /tmp dir,
// which won't persist across reboots.
func DefaultMemory() store.Option {
return setStoreOption(memoryOptionsKey{}, nats.MemoryStorage)
}
// DefaultDescription sets the default description to use when creating
// new buckets. The default is "Store managed by go-micro".
func DefaultDescription(text string) store.Option {
return setStoreOption(descriptionOptionsKey{}, text)
}
// DeleteBucket will use the key passed to Delete as a bucket (database)
// name and delete the bucket.
//
// This option should not be combined with the store.DeleteFrom option,
// as that will overwrite the delete action.
func DeleteBucket() store.DeleteOption {
return func(d *store.DeleteOptions) {
d.Table = "DELETE_BUCKET"
}
}
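
Tying the `DeleteBucket` option back to the store's `Delete` method, a hedged sketch (the bucket name is illustrative, and per the comment above it should not be combined with `store.DeleteFrom`):

```go
package main

import (
	natsjskv "github.com/go-micro/plugins/v4/store/nats-js-kv"
	"go-micro.dev/v4/store"
)

func dropCacheBucket(s store.Store) error {
	// With DeleteBucket, the "key" is interpreted as a bucket (database) name:
	// the whole "cache" bucket and every key/value pair in it are removed.
	return s.Delete("cache", natsjskv.DeleteBucket())
}

func main() {
	s := natsjskv.NewStore(store.Database("cache"))
	defer s.Close()

	if err := dropCacheBucket(s); err != nil {
		panic(err)
	}
}
```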

View File

@@ -0,0 +1,138 @@
package natsjskv
import "go-micro.dev/v4/store"
type test struct {
Record *store.Record
Database string
Table string
}
var (
table = []test{
{
Record: &store.Record{
Key: "One",
Value: []byte("First value"),
},
},
{
Record: &store.Record{
Key: "Two",
Value: []byte("Second value"),
},
Table: "prefix_test",
},
{
Record: &store.Record{
Key: "Third",
Value: []byte("Third value"),
},
Database: "new-bucket",
},
{
Record: &store.Record{
Key: "Four",
Value: []byte("Fourth value"),
},
Database: "new-bucket",
Table: "prefix_test",
},
{
Record: &store.Record{
Key: "empty-value",
Value: []byte{},
},
Database: "new-bucket",
},
{
Record: &store.Record{
Key: "Alex",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "names",
},
{
Record: &store.Record{
Key: "Jones",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "names",
},
{
Record: &store.Record{
Key: "Adrianna",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "names",
},
{
Record: &store.Record{
Key: "MexicoCity",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "cities",
},
{
Record: &store.Record{
Key: "HoustonCity",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "cities",
},
{
Record: &store.Record{
Key: "ZurichCity",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "cities",
},
{
Record: &store.Record{
Key: "Helsinki",
Value: []byte("Some value"),
},
Database: "prefix-test",
Table: "cities",
},
{
Record: &store.Record{
Key: "testKeytest",
Value: []byte("Some value"),
},
Table: "some_table",
},
{
Record: &store.Record{
Key: "testSecondtest",
Value: []byte("Some value"),
},
Table: "some_table",
},
{
Record: &store.Record{
Key: "lalala",
Value: []byte("Some value"),
},
Table: "some_table",
},
{
Record: &store.Record{
Key: "testAnothertest",
Value: []byte("Some value"),
},
},
{
Record: &store.Record{
Key: "FobiddenCharactersAreAllowed:|@..+",
Value: []byte("data no matter"),
},
},
}
)

View File

@@ -174,6 +174,12 @@ func (c *Client) ReadDir(path string) ([]os.FileInfo, error) {
err := c.propfind(path, false,
`<d:propfind xmlns:d='DAV:'>
<d:prop>
<d:displayname/>
<d:resourcetype/>
<d:getcontentlength/>
<d:getcontenttype/>
<d:getetag/>
<d:getlastmodified/>
</d:prop>
</d:propfind>`,
&response{},
@@ -220,6 +226,12 @@ func (c *Client) Stat(path string) (os.FileInfo, error) {
err := c.propfind(path, true,
`<d:propfind xmlns:d='DAV:'>
<d:prop>
<d:displayname/>
<d:resourcetype/>
<d:getcontentlength/>
<d:getcontenttype/>
<d:getetag/>
<d:getlastmodified/>
</d:prop>
</d:propfind>`,
&response{},

vendor/modules.txt
View File

@@ -324,6 +324,9 @@ github.com/coreos/go-semver/semver
## explicit; go 1.12
github.com/coreos/go-systemd/v22/dbus
github.com/coreos/go-systemd/v22/journal
# github.com/cornelk/hashmap v1.0.8
## explicit; go 1.19
github.com/cornelk/hashmap
# github.com/cpuguy83/go-md2man/v2 v2.0.3
## explicit; go 1.11
github.com/cpuguy83/go-md2man/v2/md2man
@@ -359,8 +362,8 @@ github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1
github.com/cs3org/go-cs3apis/cs3/storage/registry/v1beta1
github.com/cs3org/go-cs3apis/cs3/tx/v1beta1
github.com/cs3org/go-cs3apis/cs3/types/v1beta1
# github.com/cs3org/reva/v2 v2.16.1-0.20231212124908-ab6ed782de28
## explicit; go 1.20
# github.com/cs3org/reva/v2 v2.17.0
## explicit; go 1.21
github.com/cs3org/reva/v2/cmd/revad/internal/grace
github.com/cs3org/reva/v2/cmd/revad/runtime
github.com/cs3org/reva/v2/internal/grpc/interceptors/appctx
@@ -937,6 +940,9 @@ github.com/go-micro/plugins/v4/server/http
# github.com/go-micro/plugins/v4/store/nats-js v1.2.1-0.20230807070816-bc05fb076ce7 => github.com/kobergj/plugins/v4/store/nats-js v1.2.1-0.20231020092801-9463c820c19a
## explicit; go 1.17
github.com/go-micro/plugins/v4/store/nats-js
# github.com/go-micro/plugins/v4/store/nats-js-kv v0.0.0-00010101000000-000000000000 => github.com/kobergj/plugins/v4/store/nats-js-kv v0.0.0-20231207143248-4d424e3ae348
## explicit; go 1.21
github.com/go-micro/plugins/v4/store/nats-js-kv
# github.com/go-micro/plugins/v4/store/redis v1.2.1-0.20230510195111-07cd57e1bc9d
## explicit; go 1.17
github.com/go-micro/plugins/v4/store/redis
@@ -2292,3 +2298,4 @@ stash.kopano.io/kgol/oidc-go
## explicit; go 1.13
stash.kopano.io/kgol/rndm
# github.com/go-micro/plugins/v4/store/nats-js => github.com/kobergj/plugins/v4/store/nats-js v1.2.1-0.20231020092801-9463c820c19a
# github.com/go-micro/plugins/v4/store/nats-js-kv => github.com/kobergj/plugins/v4/store/nats-js-kv v0.0.0-20231207143248-4d424e3ae348