Adds the ability to stream individual chunks requested via GetMany() back to the caller.
Removes readAmpThresh and maxReadSize. Lowers the S3ReadBlockSize to 512k.
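Roughly what the streaming shape looks like from the caller's side, as a minimal self-contained sketch; the types and signature here are made up for illustration, not the actual Noms GetMany() API:

```go
package main

import "fmt"

// Chunk is a made-up stand-in for the Noms chunk type, not the real API.
type Chunk struct {
	Hash string
	Data []byte
}

// getMany streams each requested chunk back on a channel as soon as it has
// been read, rather than collecting the whole batch before returning.
func getMany(hashes []string, store map[string][]byte, found chan<- Chunk) {
	defer close(found)
	for _, h := range hashes {
		if data, ok := store[h]; ok {
			found <- Chunk{Hash: h, Data: data}
		}
	}
}

func main() {
	store := map[string][]byte{"a": []byte("hello"), "b": []byte("world")}
	found := make(chan Chunk)
	go getMany([]string{"a", "b", "missing"}, store, found)
	for c := range found { // the caller can start work before all reads finish
		fmt.Printf("%s: %q\n", c.Hash, c.Data)
	}
}
```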
The new spec is a URI, akin to what we use for HTTP. It allows the
specification of a DynamoDB table, an S3 bucket, a database ID, and a
dataset ID: aws://table-name:bucket-name/database::dataset
The bucket name is optional and, if not provided, Noms will use a
ChunkStore implementation backed only by DynamoDB.
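For illustration, here is a rough sketch of how such a spec breaks into its parts. The parsing code and, in particular, the exact syntax for omitting the bucket are assumptions, not the real Noms implementation:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	spec := "aws://table-name:bucket-name/database::dataset"

	rest := strings.TrimPrefix(spec, "aws://")
	slash := strings.Index(rest, "/")
	hostPart, pathPart := rest[:slash], rest[slash+1:]

	// The bucket is optional; without it (assumed form: no colon), the store
	// would be backed only by DynamoDB.
	table, bucket := hostPart, ""
	if i := strings.Index(hostPart, ":"); i >= 0 {
		table, bucket = hostPart[:i], hostPart[i+1:]
	}

	dbAndDs := strings.SplitN(pathPart, "::", 2)
	fmt.Printf("table=%s bucket=%s database=%s dataset=%s\n",
		table, bucket, dbAndDs[0], dbAndDs[1])
}
```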
NBS benefits from related chunks being near one another. Initially,
let's use write-order as a proxy for "related".
This patch contains a pretty heinous hack to allow sync to continue
putting chunks into httpBatchStore top-down without breaking
server-side validation. Work to fix this is tracked in #2982
This patch fixes #2968, at least for now
* Introduces PullWithFlush() to allow noms sync to explicitly
pull chunks over and flush directly after. This allows UpdateRoot
to behave as before.
Also clears out all the legacy batch-put machinery. Now, Flush()
just directly calls sendWriteRequests().
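A minimal, self-contained sketch of the ordering described above, using a hypothetical batchStore in place of the real httpBatchStore (everything except the PullWithFlush/Flush/sendWriteRequests names is made up):

```go
package main

import "fmt"

// batchStore is a hypothetical stand-in for httpBatchStore.
type batchStore struct {
	pending [][]byte
}

// Put buffers a chunk locally; nothing is sent to the server yet.
func (bs *batchStore) Put(chunk []byte) {
	bs.pending = append(bs.pending, chunk)
}

// Flush sends the buffered writes directly (no legacy batch-put machinery).
func (bs *batchStore) Flush() {
	bs.sendWriteRequests(bs.pending)
	bs.pending = nil
}

func (bs *batchStore) sendWriteRequests(chunks [][]byte) {
	fmt.Printf("sending %d chunks to the server\n", len(chunks))
}

// pullWithFlush pulls chunks over and flushes directly after, so that a
// subsequent UpdateRoot-style call finds every chunk it depends on already
// persisted on the server.
func pullWithFlush(bs *batchStore, pulled [][]byte) {
	for _, c := range pulled {
		bs.Put(c)
	}
	bs.Flush()
}

func main() {
	bs := &batchStore{}
	pullWithFlush(bs, [][]byte{[]byte("a"), []byte("b")})
}
```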
GetMany() calls can now be serviced by <= N goroutines, where N is the number of physical reads the request is broken down into.
This patch also adds a maxReadSize param to the code which decides how to break chunk reads into physical reads, and sets the S3 blockSize to 5MB, which experimentally resulted in lower total latency.
Lastly, some small refactors.
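A sketch of the concurrency shape this describes, assuming a simple offset-coalescing scheme; the span type, coalesce logic, and thresholds are illustrative stand-ins for the real NBS/S3 read planner:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// blockSize mirrors the 5MB figure above; everything else is made up.
const blockSize = 5 * 1024 * 1024

type span struct{ off, length int }

// coalesce merges nearby chunk spans into physical reads no larger than
// blockSize, so one goroutine can service many chunk requests at once.
func coalesce(spans []span) []span {
	sort.Slice(spans, func(i, j int) bool { return spans[i].off < spans[j].off })
	var reads []span
	for _, s := range spans {
		n := len(reads)
		if n > 0 && s.off+s.length-reads[n-1].off <= blockSize {
			reads[n-1].length = s.off + s.length - reads[n-1].off
			continue
		}
		reads = append(reads, s)
	}
	return reads
}

func main() {
	chunkSpans := []span{{0, 4096}, {4096, 4096}, {10 * 1024 * 1024, 4096}}
	reads := coalesce(chunkSpans) // N physical reads

	var wg sync.WaitGroup
	for _, r := range reads { // <= N goroutines service the whole request
		wg.Add(1)
		go func(r span) {
			defer wg.Done()
			fmt.Printf("physical read: offset=%d length=%d\n", r.off, r.length)
		}(r)
	}
	wg.Wait()
}
```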
This is a potentially breaking change!
Before this change, we required all the fields in a Go struct to be
present in the Noms struct when unmarshalling the Noms struct onto the
Go struct. This is no longer the case: the fields in the Go struct that
are present in the Noms struct are copied over, and fields missing from
the Noms struct no longer cause an error.
This also means that `omitempty` is useless in Unmarshal and it has been
removed.
This might break your code if you expected to get errors when the field
names did not match!
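For example, something like the following should now succeed rather than error. This is a sketch against the Noms marshal package; exact import paths and signatures may differ between Noms versions:

```go
package main

import (
	"fmt"

	"github.com/attic-labs/noms/go/marshal"
	"github.com/attic-labs/noms/go/types"
)

type Person struct {
	Name string
	Age  uint8 // not present in the Noms struct below
}

func main() {
	// A Noms struct that only carries a name field.
	nomsValue := types.NewStruct("Person", types.StructData{
		"name": types.String("Arya"),
	})

	var p Person
	// Previously this errored because Age was missing from the Noms struct;
	// now it succeeds and only the matching fields are copied over.
	err := marshal.Unmarshal(nomsValue, &p)
	fmt.Println(p, err) // {Arya 0} <nil>
}
```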
Fixes #2971
This is a breaking change!
We used to create empty Go collections `[]int{}` when unmarshalling an
empty Noms collection onto a Go collection that was `nil`. Now we keep
the Go collection as `nil`, which means that you will get `[]int(nil)`
for an empty Noms List.
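In plain Go terms, the difference between the old and new results looks like this:

```go
package main

import "fmt"

func main() {
	// Old behavior: an allocated, empty slice was created for you.
	empty := []int{}
	// New behavior: the destination slice is simply left nil.
	var nilSlice []int

	fmt.Println(empty == nil, len(empty))       // false 0
	fmt.Println(nilSlice == nil, len(nilSlice)) // true 0
}
```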
Fixes #2969
Before we can defragment NBS stores, we need to understand how
fragmented they are. This tool provides a measure of fragmentation in
which optimal chunk-graph layout implies that ALL children of a given
parent can be read in one storage-layer operation (e.g. disk read, S3
transaction, etc).
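One way to picture such a measure, as a toy sketch (the real tool's metric, types, and thresholds are not shown here and may differ): count how many storage-layer reads are needed to cover each parent's children, where an optimal layout needs exactly one per parent.

```go
package main

import (
	"fmt"
	"sort"
)

// maxReadBytes is a hypothetical size for one storage-layer read.
const maxReadBytes = 512 * 1024

// readsNeeded counts how many reads of at most maxReadBytes are needed to
// cover all of a parent's children, given their byte offsets in the store.
func readsNeeded(offsets []int) int {
	sort.Ints(offsets)
	reads, start := 1, offsets[0]
	for _, off := range offsets[1:] {
		if off-start > maxReadBytes {
			reads++
			start = off
		}
	}
	return reads
}

func main() {
	// parent chunk -> byte offsets of its children (made-up layout data)
	parents := map[string][]int{
		"p1": {0, 4096, 8192},       // children laid out together: 1 read
		"p2": {0, 2 << 20, 8 << 20}, // children scattered: 3 reads
	}
	totalReads := 0
	for name, kids := range parents {
		n := readsNeeded(kids)
		totalReads += n
		fmt.Printf("%s: %d read(s)\n", name, n)
	}
	// 1.0 means every parent's children fit in one read (the optimal layout).
	fmt.Printf("fragmentation factor: %.2f\n",
		float64(totalReads)/float64(len(parents)))
}
```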
Introduce a 'compactingChunkStore', which knows how to compact itself
in the background. It satisfies get/has requests from an in-memory
table until compaction is complete. Once compaction is done, it
destroys the in-memory table and switches over to using solely the
persistent table.
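A simplified sketch of that behavior, with made-up types and map-backed tables standing in for the real NBS tables and locking details:

```go
package main

import (
	"fmt"
	"sync"
)

// compactingChunkStore is a hypothetical, simplified sketch: "compaction"
// here is just a copy into a second map.
type compactingChunkStore struct {
	mu        sync.RWMutex
	memTable  map[string][]byte // serves get/has until compaction completes
	persisted map[string][]byte // the compacted, persistent table
}

func newCompactingChunkStore(chunks map[string][]byte) *compactingChunkStore {
	cs := &compactingChunkStore{memTable: chunks}
	go cs.compact() // compaction happens in the background
	return cs
}

func (cs *compactingChunkStore) compact() {
	cs.mu.RLock()
	snapshot := cs.memTable
	cs.mu.RUnlock()

	persisted := make(map[string][]byte, len(snapshot)) // stand-in for a table file
	for h, data := range snapshot {
		persisted[h] = data
	}

	cs.mu.Lock()
	cs.persisted = persisted
	cs.memTable = nil // destroy the in-memory table; use only the persistent one
	cs.mu.Unlock()
}

func (cs *compactingChunkStore) Has(h string) bool {
	cs.mu.RLock()
	defer cs.mu.RUnlock()
	if cs.memTable != nil {
		_, ok := cs.memTable[h]
		return ok
	}
	_, ok := cs.persisted[h]
	return ok
}

func main() {
	cs := newCompactingChunkStore(map[string][]byte{"abc": []byte("chunk")})
	fmt.Println(cs.Has("abc")) // true, whether or not compaction has finished
}
```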
Fixes #2879
Add GetMany(), which most ChunkStores implement by repeated calls to their own Get(), but which creates the opportunity for stores to optimize reads of larger blocks of potentially sequential chunks (e.g. NBS); see the sketch below.
Add RemoteBatchStore getRefs endpoint support for calling GetMany() rather than Get()
Remove ReadThroughChunkStore which was dead code.
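A sketch of that generic fallback, with made-up minimal types rather than the real ChunkStore interface:

```go
package main

import "fmt"

// chunkGetter is a made-up stand-in for the single-chunk read every
// ChunkStore already has.
type chunkGetter interface {
	Get(hash string) ([]byte, bool)
}

type mapStore map[string][]byte

func (m mapStore) Get(h string) ([]byte, bool) { d, ok := m[h]; return d, ok }

// getManyViaGet is the generic fallback: one Get per requested hash. A store
// like NBS can instead provide its own GetMany that coalesces reads of
// nearby chunks into larger block reads.
func getManyViaGet(g chunkGetter, hashes []string) map[string][]byte {
	out := make(map[string][]byte, len(hashes))
	for _, h := range hashes {
		if data, ok := g.Get(h); ok {
			out[h] = data
		}
	}
	return out
}

func main() {
	s := mapStore{"a": []byte("x"), "b": []byte("y")}
	fmt.Println(getManyViaGet(s, []string{"a", "b", "c"}))
}
```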
Adds the ability for SequenceCursors to eagerly load all child sequences when the first child is requested.
The effect is that, for uses where noms tends to scan forward, it reads in batches of ~150 chunks (~512KB) rather than one chunk at a time.
On my MBP, this improves raw blob read perf over HTTP from ~60KB/s to ~5MB/s.
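A simplified sketch of the eager-loading idea, with made-up types in place of real Noms sequences and chunk hashes:

```go
package main

import "fmt"

// loadBatch stands in for a GetMany-style batched read; in real code this
// would hit the ChunkStore once for all of the hashes.
func loadBatch(hashes []string) map[string][]byte {
	out := map[string][]byte{}
	for _, h := range hashes {
		out[h] = []byte("data for " + h)
	}
	return out
}

// eagerCursor sketches the idea: the first time any child is requested, ALL
// sibling children are fetched in one batch, so a forward scan touches
// storage once per ~150 chunks instead of once per chunk.
type eagerCursor struct {
	childHashes []string
	loaded      map[string][]byte
}

func (c *eagerCursor) child(i int) []byte {
	if c.loaded == nil {
		c.loaded = loadBatch(c.childHashes) // one batched read for all siblings
	}
	return c.loaded[c.childHashes[i]]
}

func main() {
	cur := &eagerCursor{childHashes: []string{"h1", "h2", "h3"}}
	for i := range cur.childHashes {
		fmt.Printf("%s\n", cur.child(i)) // only the first call reads from storage
	}
}
```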