When we have a cyclic type we end up getting an incomplete type out
of the map.
Use graphql.FieldsThunk, which is designed to allow recursive types.
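Roughly what that looks like with graphql-go (a sketch; FieldsThunk is
the library's API, but the Person type here is made up for illustration):

    package main

    import "github.com/graphql-go/graphql"

    // makePersonType builds a self-referential type. FieldsThunk defers
    // field construction until the schema is assembled, so the recursive
    // reference below resolves to the fully built type instead of an
    // incomplete one pulled out of the map mid-construction.
    func makePersonType() *graphql.Object {
        var personType *graphql.Object
        personType = graphql.NewObject(graphql.ObjectConfig{
            Name: "Person",
            Fields: graphql.FieldsThunk(func() graphql.Fields {
                return graphql.Fields{
                    "name":    &graphql.Field{Type: graphql.String},
                    "friends": &graphql.Field{Type: graphql.NewList(personType)},
                }
            }),
        })
        return personType
    }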
We're getting some TCP hangups when trying to talk to S3.
We have an S3 reading rate limiter that's supposed to
prevent issues like this, so the question is whether that's
set too high. Rather than just turning the knob and seeing
if things are OK, this patch adds some logging to verify
that we're actually hitting the rate limiter.
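The kind of instrumentation this adds, sketched with
golang.org/x/time/rate rather than the limiter actually used in the S3
code; the names here are illustrative:

    package nbs

    import (
        "context"
        "log"
        "time"

        "golang.org/x/time/rate"
    )

    // rateLimitedRead logs whenever an S3 read actually blocks on the
    // limiter, so production logs can confirm whether the limiter is
    // being hit before we start tuning it.
    func rateLimitedRead(ctx context.Context, limiter *rate.Limiter, read func() error) error {
        start := time.Now()
        if err := limiter.Wait(ctx); err != nil {
            return err
        }
        if waited := time.Since(start); waited > time.Millisecond {
            log.Printf("nbs: S3 read waited %v on rate limiter", waited)
        }
        return read()
    }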
The names of GraphQL types need to be globally unique. We therefore
name struct types as NAME_HASH (where HASH is the first 6 characters
of the noms hash).
Also, make sure that we always map the noms type to the same GraphQL
type, even if boxedIfScalar is true.
Fixes #3161
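A sketch of the naming scheme (the helper and import path are
illustrative; the scheme itself is as described above):

    package graphql

    import (
        "fmt"

        "github.com/attic-labs/noms/go/hash"
    )

    // graphQLTypeName builds NAME_HASH: the struct name plus the first 6
    // characters of the noms hash, keeping GraphQL type names globally
    // unique even when distinct struct types share a name.
    func graphQLTypeName(structName string, h hash.Hash) string {
        return fmt.Sprintf("%s_%s", structName, h.String()[:6])
    }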
NBS is stable enough that we've made it the default store for command
line tools, and the go-to store for tests that require temporary, but
persistent, storage.
We intend to remove support for LevelDB-backed chunk storage
completely ASAP. This patch removes all usage of LevelDBStore from
noms.git, but doesn't remove LevelDBStore _just_ yet as there are
still some dependencies on it elsewhere.
Toward #3127
I missed this in the compaction patch :-/ I caught it in another
test when the code panic'd while trying to write a manifest with
an empty table in it. So at least it got caught there?
BUG 3156 is caused by the compaction code trying to estimate the
maximum possible table size for chunk data pulled from a bunch of
existing tables. The problem was that we only had _compressed_ data
lengths for the chunks in existing tables, so we were drastically
underestimating the worst-case space that we might need during
compaction.
The fix is to have tables store the total number of _uncompressed_
bytes that were inserted, so that the compaction code can use this to
get the right estimate when putting together a bunch of tables.
Fixes #3156
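A sketch of the corrected estimate (assuming golang/snappy; the
per-chunk overhead constant is illustrative, not the real NBS record
layout):

    package nbs

    import "github.com/golang/snappy"

    // maxTableSize computes a worst case from the *uncompressed* byte
    // count: snappy's maximum possible expansion plus per-chunk table
    // overhead, rather than extrapolating from already-compressed lengths.
    func maxTableSize(numChunks, totalUncompressedData uint64) uint64 {
        const perChunkOverhead = 64 // hash, lengths, CRC, index entry (illustrative)
        worstCaseCompressed := uint64(snappy.MaxEncodedLen(int(totalUncompressedData)))
        return numChunks*perChunkOverhead + worstCaseCompressed
    }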
When we do a writeValue we do a POST with a request body of some binary
data. Some clients (RN) do not set the Content-Type header, which then
leads to failures on our server.
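One way to make the server tolerant of a missing header, sketched with
net/http; the wrapper and the default value are assumptions, not
necessarily how this patch implements it:

    package datas

    import "net/http"

    // withDefaultContentType assumes a binary payload when a client (e.g.
    // React Native) POSTs a writeValue body without a Content-Type header,
    // instead of failing the request.
    func withDefaultContentType(h http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method == http.MethodPost && r.Header.Get("Content-Type") == "" {
                r.Header.Set("Content-Type", "application/octet-stream")
            }
            h.ServeHTTP(w, r)
        })
    }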
Though Raf and I can't figure out how, it's clear that the method we
initially used for calculating the max amount of space for
snappy-compressed chunk data was incorrect. That's the root cause of
#3156. This patch computes the worst case by taking the uncompressed
length of all the chunks to be written and summing
snappy.MaxEncodedLen() for each.
Fixes #3156
Apparently, there's some issue running demo-server with --verbose
in prod, so we don't do it. This means that the logging info I
added isn't showing up. Change the logging code to use fmt.Fprintf().
Also, add a unit test for the errata functionality.
Towards #3156
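Roughly the shape of the logging change (a sketch; the message format
and the use of stderr are assumptions):

    package nbs

    import (
        "fmt"
        "os"
    )

    // logErratum writes directly with fmt.Fprintf so the message shows up
    // even when the server isn't running with --verbose.
    func logErratum(chunkHash, tableName string) {
        fmt.Fprintf(os.Stderr, "nbs: chunk %s saved as %s-errata (expected in table %s)\n",
            chunkHash, chunkHash, tableName)
    }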
Add support for unions containing scalar values by "boxing" those scalars in that context. Also, add a hash field to Struct so that empty structs have at least one field.
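A sketch of the boxing idea (GraphQL unions may only contain object
types, so a scalar member gets wrapped in an object with a single
field; the type and field names here are made up):

    package graphql

    import "github.com/graphql-go/graphql"

    // boxedScalar wraps a scalar (e.g. a noms Number) in an object type so
    // it can participate in a union.
    func boxedScalar(name string, scalar *graphql.Scalar) *graphql.Object {
        return graphql.NewObject(graphql.ObjectConfig{
            Name: name + "Value",
            Fields: graphql.Fields{
                "scalarValue": &graphql.Field{Type: scalar},
            },
        })
    }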
There's some case that causes chunks that compress to more than about
55k (we think these are quite big chunks, many hundreds of KB in size)
not to wind up correctly inserted into tables. It looks like
the snappy library believes the buffer we've allocated may not be
large enough, so it allocates its own space and this screws us up.
This patch changes two things:
1) The CRC in the NBS format is now the CRC of the _compressed_ data
2) Such chunks will be manually copied into the table, so they won't
be missing anymore
Also, when the code detects a case where the snappy library decided to
allocate its own storage, it saves the uncompressed data off to the
side, so that it can be pushed to durable storage. Such chunks are
stored on disk or in S3 named like "<chunk-hash>-errata", and logging
is dumped out so we can figure out which tables were supposed to
contain these chunks.
Towards #3156
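A sketch of the detection path described above (assuming golang/snappy,
which returns a newly allocated slice when the destination buffer is
too small; the file naming and plain-file write are illustrative):

    package nbs

    import (
        "fmt"
        "io/ioutil"

        "github.com/golang/snappy"
    )

    // compressChunk compresses a chunk into scratch. If snappy had to
    // allocate its own buffer, our size estimate was wrong for this chunk,
    // so save the raw bytes as "<chunk-hash>-errata" and log it; the caller
    // still copies the compressed bytes into the table.
    func compressChunk(scratch []byte, chunkHash string, raw []byte) ([]byte, error) {
        compressed := snappy.Encode(scratch, raw)
        if len(scratch) == 0 || (len(compressed) > 0 && &compressed[0] != &scratch[0]) {
            errPath := chunkHash + "-errata"
            if err := ioutil.WriteFile(errPath, raw, 0644); err != nil {
                return nil, err
            }
            fmt.Printf("nbs: chunk %s exceeded the estimated buffer; saved %s\n", chunkHash, errPath)
        }
        return compressed, nil
    }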
We need to fully map the noms types into GraphQL even though the mapping isn't useful; otherwise the endpoint may die when attempting to create a schema for types that aren't mapped.
This changes our tests to use Jest. The benefits of Jest are parallel test runs and the ability to run only the minimal set of tests affected by a change (by looking at dependencies).
The main work was to disentangle our cyclic dependencies. To do this I had to remove some runtime assertions in encode value as well as expose the values of a struct without going through a struct mirror.
Panics on background goroutines take down the server. This patch
hacks in a mechanism to pipe failures during NBS tableReader.extract
back to the main goroutine so the server doesn't die on this failure
and I can diagnose it.
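A sketch of the mechanism (the function shape is illustrative):

    package nbs

    import "fmt"

    // extractSafely runs extract work on a background goroutine, converts
    // any panic into an error, and hands it back on a channel so the
    // serving goroutine can log it instead of the whole process dying.
    func extractSafely(work func() error) <-chan error {
        errc := make(chan error, 1)
        go func() {
            defer func() {
                if r := recover(); r != nil {
                    errc <- fmt.Errorf("nbs: panic during extract: %v", r)
                }
                close(errc)
            }()
            errc <- work()
        }()
        return errc
    }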