We need to fully map the noms types into GraphQL even though the mapping isn't useful; otherwise the endpoint may die when attempting to create a schema for types that aren't mapped.
This changes the test runner to Jest. Jest's benefits are parallel test execution and the ability to run only the minimal set of tests affected by a change (by looking at dependencies).
The main work was to disentangle our cyclic dependencies. To do this, I had to remove some runtime assertions in encode value, as well as expose the values of a struct without going through a struct mirror.
Panics on background goroutines take down the server. This patch
hacks in a mechanism to pipe failures during NBS tableReader.extract
back to the main goroutine so the server doesn't die on this failure
and I can diagnose it.
* First pass at compaction
The first cut at compaction blocks UpdateRoot() while it compacts n/2
tables down into a single, large table (where n == number of tables
named in the NBS manifest). It then attempts to update the manifest
with one referencing the compacted table, the novel tables from the
client, and the remaining upstream tables that were not compacted.
If the update fails, probably due to an optimistic lock failure, the
client drops the compacted table it just created, pulls in the tables
from the newly-discovered upstream manifest, and tries again.
Known flaws:
- may explode RAM (#3130)
- doesn't handle novel tables > max tables (#3142)
- may handle optimistic-lock-failures suboptimally (#3141)
Fixes #3132
Also, fixes #2944 because doing so simplifies some code.
It was using a file watcher, which was causing us to run out of inodes
on our Jenkins build. Instead, use flow-copy-source, which does the
same copying without a file watcher (we no longer used the --watch flag
anyway).
DynamoDB provides eventual read-after-write consistency, unless you
ask explicitly for strong consistency. When committing over HTTP, the
code writes a bunch of Values and then attempts to UpdateRoot to point
to one of those novel chunks. The UpdateRoot call _must_ see the
result of the prior WriteValue, or it may fail.
Fixes #3084
* Fixes a long-standing bug in which the RemoteBatchStore was accidentally caching all chunks
* ValueStore's value cache now stores `Promise<?Value>` so that concurrent `readValues` of the same value can share a single decoding
* Removes the debug-only chunk hash check which keeps tripping up perf investigations
This code initially panicked in this case, because there used to be
no reasonable way that a caller might wind up trying to update
away from a Root that didn't match NomsBlockStore's internal
bookkeeping. Given the new Flush() behavior, now there is. So, just
return false and allow the caller to take appropriate action.
Towards #3089
This patch adds a static function which can walk graphs looking for (and diffing) two structs. It uses type information to avoid traversing sub-values which can't contain structs. It also uses a similar approach as sync to avoid visiting common sub-chunk-graphs.
The only things that want what we used to call the "best" diff
algorithm are the command-line tools. Non-interactive programs all want
the algorithm that finishes fastest, which is top-down.
Fixes https://github.com/attic-labs/attic/issues/627
Readahead + NBS benefit greatly when "related" Chunks are close to
each other. The existing code does a good job of writing siblings in the
Chunk graph next to each other, but "cousins" (that is, children whose
parents are siblings) might wind up spread quite far apart. This
patch makes WriteValue hold onto novel Chunks until it sees a
_grandparent_ come through the pipeline. All of that Chunk's queued
grandchildren will be Put at that time.
Additionally, ValueStore.Flush() now takes a Hash and flushes all
Chunks that are reachable from the Chunk with that Hash, as opposed
to simply flushing all Chunks to the BatchStore. This means that
there's now no supported way to write orphaned Chunks/Values to a
Database.
Fixes #3051
* More logging for TestStreamingMap2
Since the head of each dataset can have an arbitrarily complex
type, type accretion leads the Datasets map at the root of the
DB to become very large. This type info isn't really very useful
at that level either. So, get rid of it by making this map be
from String -> Ref<Value>.
Fixes #2869