We are using babel-plugin-transform-inline-environment-variables which
replaces `process.env.FOO` with the value of the `FOO` environment
variable at compile time.
However, due to our pipeline we end up with something like:
```js
var NAME = 'NOMS_VERSION_NEXT';
process.env[NAME]
```
which does not get replaced in development mode. If it were a const, the
transformer could replace it, but var bindings can change.
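A minimal sketch of the contrast (the values here are illustrative): the plugin can only inline a static member access like `process.env.FOO`; a computed access through a mutable `var` binding is left alone.

```js
// Static access: babel-plugin-transform-inline-environment-variables can
// replace this whole expression with the value of NOMS_VERSION_NEXT at
// compile time.
const direct = process.env.NOMS_VERSION_NEXT;

// Computed access through a var: the binding could be reassigned before the
// lookup runs, so the transform cannot safely inline it.
var NAME = 'NOMS_VERSION_NEXT';
var dynamic = process.env[NAME];
```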
The old strategy for writing values was to recursively encode them,
putting the resulting chunks into a BatchStore from the bottom up as
they were generated.
The new strategy tries to keep chunks from the same 'level' of a
graph together by caching chunks as they're encoded and only writing
them once they're referenced by some other value. When a collection
is written, the graph representing it is encoded recursively, and
chunks are generated bottom-up. The new strategy should, in practice,
mean that the children of a given parent node in this graph will be
cached until that parent gets written, and then they'll get written
all at once.
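A rough sketch of the caching idea, with hypothetical `enqueue`/`write` names and a `store.put(chunk)` method standing in for the real BatchStore API:

```js
// Sketch: encoded chunks are cached rather than written immediately; when a
// parent that references them is written, the cached children are flushed
// first, so chunks from the same level of the graph land together.
class LevelBatchingSink {
  constructor(store) {
    this.store = store;       // assumed to expose put(chunk)
    this.pending = new Map(); // hash -> chunk, held until referenced
  }

  // Called as a chunk is encoded: cache it instead of writing.
  enqueue(chunk) {
    this.pending.set(chunk.hash, chunk);
  }

  // Called when a value is actually written: flush any cached children the
  // parent references, then write the parent itself.
  write(parent) {
    for (const childHash of parent.refs) {
      const child = this.pending.get(childHash);
      if (child) {
        this.pending.delete(childHash);
        this.store.put(child);
      }
    }
    this.store.put(parent);
  }
}
```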
Prior to this patch, ValueStore only recorded where referenced Refs reside for chunks read from the server. This ignored the case of a client making subsequent commits with the same ValueStore (for example, writing multiple states of a map), which forced the server to load a large number of chunks just to validate them.
Adds the ability for SequenceCursors to eagerly load all child sequences when the first child is requested.
The effect is that for uses where noms tends to forward-scan, it will read in batches of ~150 chunks (~512KB) rather than one chunk at a time.
On my MBP, this improves raw blob read perf over HTTP from ~60KB/s to ~5MB/s.
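A simplified sketch of the eager-loading behavior (class and method names are made up; `fetch` stands in for the real chunk reader):

```js
// Sketch: the first request for any child kicks off loads for all of the
// cursor's children at once, so forward scans read a batch of chunks instead
// of one chunk per advance.
class EagerSequenceCursor {
  constructor(childHashes, fetch) {
    this.childHashes = childHashes;
    this.fetch = fetch;   // async (hash) => chunk
    this.children = null; // populated in one batch on first access
  }

  async getChild(i) {
    if (!this.children) {
      this.children = await Promise.all(this.childHashes.map(h => this.fetch(h)));
    }
    return this.children[i];
  }
}
```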
When we read a chunk and create structs, we were validating that the
struct was of the type it claimed to be.
This no longer does that validation, which matches Go.
Remove validation/normalization of union order and struct field order as we decode a chunk into a type.
Instead the validation happens in ValidatingBatchSink.
We still normalize the union order when a struct type is created directly (not from a chunk) using makeStructType.
The motivation for this change is that computing the OID (order ID) is expensive, and it used to be O(n^2) since we kept recomputing it as we traversed the type hierarchy.
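A sketch of the memoization this implies, with a hypothetical `computeOID` standing in for the real calculation:

```js
// Sketch: cache each type's order ID on first computation so that walking the
// type hierarchy doesn't recompute it repeatedly (the old O(n^2) behavior).
const oidCache = new WeakMap();

function getOID(type, computeOID) {
  let oid = oidCache.get(type);
  if (oid === undefined) {
    oid = computeOID(type); // assumed expensive
    oidCache.set(type, oid);
  }
  return oid;
}
```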
Towards #2836
This is a side-by-side port, taking inspiration from the old dataspec.go
code. Notably:
- LDB support has been added in Go. It wasn't needed in JS.
- There is an Href() method on Spec now.
- Go now handles IPv6.
- Go no longer treats access_token specially.
- Go now has Pin.
- I found some issues in the JS while doing this, I'll fix later.
I've also updated the config code to use the new API so that essentially
all the Go samples use the new code, even if they don't otherwise change.
ValueStore caches Values that are read out of it, but it doesn't
do the same for Values that are written. This is because we expect
that reading Values shortly after writing them is an uncommon usage
pattern, and because the Chunks that make up novel Values are
generally efficiently retrievable from the BatchStore that backs
a ValueStore. The problem discovered in issue #2802 is that ValueStore
caches non-existence as well as existence of read Values. So, reading
a Value that doesn't exist in the DB would result in the ValueStore
permanently returning nil for that Value -- even if you then go and
write it to the DB.
This patch drops the cache entry for a Value whenever it's written.
Fixes #2802
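A sketch of the fix (the class and method names are illustrative; `null` marks cached non-existence):

```js
// Sketch: the read cache remembers both hits and misses; writing a value must
// drop its entry so a previously-cached miss can't shadow the new write.
class ReadCache {
  constructor() {
    this.entries = new Map(); // hash -> value, or null for known-missing
  }
  noteRead(hash, valueOrNull) {
    this.entries.set(hash, valueOrNull);
  }
  noteWritten(hash) {
    this.entries.delete(hash); // the fix: invalidate on write
  }
  get(hash) {
    return this.entries.get(hash);
  }
}
```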
The big change here is adding a new Spec class in spec.js. This replaces
DatabaseSpec/DatasetSpec/PathSpec in specs.js, but I'm leaving those in
and moving code over in a later patch. For now, only the photos UI uses it.
The photos UI change is to plumb the authorization token through
the Spec code. For now, it's read from a URL parameter, but soon
I'll make it session based (probably localStorage).
The demo-server change is to add the Authorization header into CORS.
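A minimal sketch of what that CORS change amounts to (the helper and header set here are illustrative, not the demo-server's actual code):

```js
// Sketch: include Authorization in the headers a CORS preflight response
// permits, so authenticated cross-origin requests can carry a token.
function corsHeaders(extraAllowed = []) {
  const allowed = ['Content-Type', 'Authorization', ...extraAllowed];
  return {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Headers': allowed.join(', '),
  };
}
```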