* Modifications to ipfs-chat and ipfs chunkstore
* Change ipfs paths to include the directory where the ipfs repo is stored.
* Rework ipfs-chat to create ipfs chunkstores manually rather than
relying on Spec.ForDataset. This enables creating two chunkstores
(one local and one network) using the same IpfsNode (ipfs repo).
* Create separate replicate function for daemon and mergeMessage
function for client to experiment with slightly different behaviors
for each.
* Re-organization of code to remove duplication.
The main points are:
* added an event loop to process events synchronously (a sketch of the
pattern follows this list)
* more aggressive about not re-processing messages from other nodes
that we've already processed
* fixed bug in ipfs chunkstore HasMany()
* Add go-base58 library
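To illustrate the event-loop point above, here is a minimal sketch of the pattern, assuming a single channel that serializes all events; the event type and handler names are hypothetical, not the actual ipfs-chat code:
```
// Hypothetical sketch: all events funnel through one channel and are
// handled one at a time, so handlers never race with each other, and a
// seen-set keeps already-processed messages from being re-processed.
type event struct {
	kind string // e.g. "net-message", "user-input", "sync-tick"
	data string
}

func eventLoop(events <-chan event, quit <-chan struct{}) {
	seen := map[string]bool{}
	for {
		select {
		case e := <-events:
			if e.kind == "net-message" && seen[e.data] {
				continue // already processed this message from another node
			}
			seen[e.data] = true
			process(e) // handlers run synchronously, in arrival order
		case <-quit:
			return
		}
	}
}

func process(e event) { /* dispatch on e.kind */ }
```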
This makes all values except types.Type be backed by a []byte.
The motivation is to reduce the allocations and the work needed to be
done when we read parts of a value (especially prolly trees).
Towards #2270
This allows parsing all Noms values from the string representation
used by the human-readable encoding:
```
v, err := nomdl.Parse(vrw, `map {"abc": 42}`)
```
Fixes #1466
Tweaking the main loop that processes list entries to avoid some
map assignments, lookups, and allocations saves about 15%, for an
overall savings of roughly one minute on the six-minute runtime of our
test workload (as run on my laptop).
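The flavor of the change, as an illustrative sketch (the names are stand-ins, not the actual list-iteration code):
```
// Illustrative before/after: hoisting per-iteration map traffic out of a
// hot loop. entries and work() are hypothetical stand-ins.
func sumSlow(entries []int) int {
	state := map[string]int{} // map lookup + assignment on every entry
	for _, e := range entries {
		state["total"] += work(e)
	}
	return state["total"]
}

func sumFast(entries []int) int {
	total := 0 // same loop, state kept in a local
	for _, e := range entries {
		total += work(e)
	}
	return total
}

func work(e int) int { return e }
```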
Towards #3690
Takes the output of a CSV file imported as a List of Struct and
"inverts" it so that it's now a Struct of Lists.
Example:
```
List<Struct Row {
  Base?: String,
  DOLocationID?: String,
}>
```
becomes
```
Struct Columnar {
  base: List<String>,
  dolocationid: List<String>,
}
```
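A minimal sketch of the inversion itself, written against plain Go types for clarity; Row, Columnar, and invert are illustrative analogues, not the actual implementation (which operates on Noms List and Struct values):
```
// Turn a slice of rows into per-column slices: the List-of-Struct to
// Struct-of-Lists transformation described above.
type Row struct {
	Base         string
	DOLocationID string
}

type Columnar struct {
	Base         []string
	DOLocationID []string
}

func invert(rows []Row) Columnar {
	c := Columnar{
		Base:         make([]string, 0, len(rows)),
		DOLocationID: make([]string, 0, len(rows)),
	}
	for _, r := range rows {
		c.Base = append(c.Base, r.Base)
		c.DOLocationID = append(c.DOLocationID, r.DOLocationID)
	}
	return c
}
```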
stretchr has fixed a bug with the -count flag. I could merge these
changes into attic-labs, but it's easier to just use stretchr.
We forked stretchr a long time ago so that we wouldn't link the HTTP
testing libraries into the noms binaries (because we were using d.Chk in
production code). The HTTP issue doesn't seem to happen anymore, even
though we're still using d.Chk.
* Add --lowercase option to map column names to lowercase struct field names
By default, each column name maps to a struct field preserving the original case.
If --lowercase is specified, the resulting struct fields will always be lowercase:
for example, a column named DOLocationID becomes the field dolocationid instead of DOLocationID.
Introduce Sloppy, an estimating compression function for snappy, which allows the rolling hash to better produce a given target chunk size after compression.
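Roughly, the point is to feed the chunker an estimated compressed size instead of a raw byte count; a very loose sketch under invented names (the real Sloppy models snappy's encoding rather than using a fixed ratio):
```
// Hypothetical estimator: tracks roughly how big the bytes seen so far
// would be after snappy-style compression, without producing any output.
type estimator struct {
	rawBytes int
	ratio    float64 // assumed compression ratio; stand-in for real modeling
}

func (e *estimator) observe(n int) { e.rawBytes += n }

// estCompressed is what the rolling-hash chunker would compare against
// its target chunk size, instead of the raw byte count.
func (e *estimator) estCompressed() int {
	return int(float64(e.rawBytes) * e.ratio)
}
```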
When we added GetMany and HasMany, we didn't realize that requests
could then be larger than the allowable HTTP form size. This patch
serializes the bodies of getRefs and hasRefs as binary instead, which
addresses this issue and actually makes the request body more compact.
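The binary form is just fixed-width digests concatenated back to back; a sketch assuming 20-byte hash digests (serializeHashes is an illustrative name):
```
// Serialize hash digests as a raw binary request body: ~20 bytes per
// hash, versus a form-encoded key=value pair per hash, and with no form
// size limit to run into.
func serializeHashes(digests [][20]byte) []byte {
	body := make([]byte, 0, len(digests)*20)
	for _, d := range digests {
		body = append(body, d[:]...)
	}
	return body
}
```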
Fixes #3589
* use kingpin for help and new commands, set up dummy command for noms blob (see the sketch after this list)
* document existing commands using kingpin
* remove noms-get and noms-set in favor of new noms blob command
* normalize bool flags in tests, remove redundant cases that kingpin now handles
* add kingpin to vendor files
* make profile flags global
* move --verbose and --quiet to global flags
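The general shape of the kingpin setup, as a minimal sketch (the blob subcommand and global flags come from the list above; the specific arguments and help strings are assumptions, not the actual noms main):
```
package main

import (
	"fmt"
	"os"

	"gopkg.in/alecthomas/kingpin.v2"
)

func main() {
	app := kingpin.New("noms", "Noms command line tool.")

	// Global flags, available to every command.
	verbose := app.Flag("verbose", "Verbose output.").Short('v').Bool()
	quiet := app.Flag("quiet", "Suppress non-essential output.").Short('q').Bool()

	// A command with a nested subcommand and a required argument.
	blob := app.Command("blob", "Work with blob values.")
	blobGet := blob.Command("get", "Read a blob.")
	blobGetSpec := blobGet.Arg("spec", "Value spec of the blob to read.").Required().String()

	switch kingpin.MustParse(app.Parse(os.Args[1:])) {
	case blobGet.FullCommand():
		fmt.Println(*verbose, *quiet, *blobGetSpec) // dispatch to the real handler here
	}
}
```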
Right now, the server will only retry a Commit when the conflicting
concurrent change landed on a different dataset. That is, if another
client concurrently landed a change to some other dataset, and that is
the only thing that causes your Commit attempt to fail, the server will
retry.
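The retry decision reduces to something like this sketch (hypothetical names; the real check happens server-side against the dataset map in the root value):
```
// A Commit fails optimistically when the store's root moved since the
// client read it. Retrying on the client's behalf is safe only when the
// head of the dataset being committed to is unchanged between the two
// roots, i.e. the conflict came entirely from other datasets.
func canRetry(dataset string, oldHeads, newHeads map[string]string) bool {
	return oldHeads[dataset] == newHeads[dataset]
}
```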
Fixes #3582
In an NBS world, bulk 'has' checks are waaaay cheaper than they used
to be. In light of this, we can toss out the complex logic we were
using in Pull() -- which basically existed for no reason other than to
avoid doing 'has' checks. Now, the code basically just descends down a
tree of chunks breadth-first, using HasMany() at each level to figure
out which chunks are not yet in the sink all at once, and GetMany() to
pull them from the source in bulk.
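A sketch of that descent, eliding error handling; refsIn is a hypothetical helper that extracts child hashes from a chunk:
```
// Breadth-first pull: one bulk HasMany against the sink per level to find
// what's missing, one bulk GetMany against the source to fetch it.
func pull(source, sink ChunkStore, root hash.Hash) {
	level := hash.HashSet{}
	level.Insert(root)
	for len(level) > 0 {
		present := sink.HasMany(level) // chunks the sink already has
		absent := hash.HashSet{}
		for h := range level {
			if !present.Has(h) {
				absent.Insert(h)
			}
		}
		found := make(chan *Chunk, len(absent))
		go func() {
			source.GetMany(absent, found) // bulk fetch from the source
			close(found)
		}()
		next := hash.HashSet{}
		for c := range found {
			sink.Put(*c)
			for _, r := range refsIn(c) { // descend into child refs
				next.Insert(r)
			}
		}
		level = next
	}
}
```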
Fixes #3182, Towards #3384
Used to be that an NBS table was named by hashing the hashes
of every chunk present in the table, in hash order. That means
that to generate the name of a table you'd need to iterate
the prefix map and load every associated suffix. That would
be expensive when e.g. compacting multiple tables. The new scheme is
waaay cheaper and only slightly more likely to wind up with a
name collision.
Toward #3411
It's important that MemoryStore (and, by extension, TestStore)
correctly implement the new ChunkStore semantics before we go
shifting around the Flush semantics like we want to do in #3404.
In order to make this a reality, I introduced a "persistence"
layer for MemoryStore called MemoryStorage, which can vend
MemoryStoreView objects that represent a snapshot of the
persistent storage and implement the ChunkStore contract.
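The layering looks roughly like this sketch (details beyond the MemoryStorage/MemoryStoreView split are assumptions):
```
// MemoryStorage plays the role of persistent storage; each
// MemoryStoreView layers its own novel chunks and cached root on top of
// it, so the view satisfies the ChunkStore contract: Put is visible to
// the view's Get/Has immediately, but nothing is "persistent" until a
// Commit writes back. Sketch only.
type MemoryStorage struct {
	mu   sync.RWMutex
	data map[hash.Hash]Chunk
	root hash.Hash
}

func (ms *MemoryStorage) NewView() *MemoryStoreView {
	ms.mu.RLock()
	defer ms.mu.RUnlock()
	return &MemoryStoreView{
		storage: ms,
		pending: map[hash.Hash]Chunk{},
		root:    ms.root, // cached root; Rebase() would refresh it
	}
}

type MemoryStoreView struct {
	storage *MemoryStorage
	pending map[hash.Hash]Chunk // novel chunks, not yet written back
	root    hash.Hash
}

func (v *MemoryStoreView) Get(h hash.Hash) Chunk {
	if c, ok := v.pending[h]; ok {
		return c
	}
	v.storage.mu.RLock()
	defer v.storage.mu.RUnlock()
	return v.storage.data[h]
}
```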
Fixes #3400
Removed Rebase() in HandleRootGet, and added ChunkStore
tests to validate the new Put behavior more fully
BatchStore is dead, long live ChunkStore! Merging these two required
some modification of the old ChunkStore contract to make it more
BatchStore-like in places, most specifically around Root(), Put() and
PutMany().
The first big change is that Root() now returns a cached value for the
root hash of the Store. This is how NBS worked already, so the more
interesting change here is the addition of Rebase(), which loads the
latest persistent root. Any chunks that appeared in backing storage
since the ChunkStore was opened (or last rebased) also become
visible.
UpdateRoot() has been replaced with Commit(), because UpdateRoot() was
ALREADY doing the work of persisting novel chunks as well as moving
the persisted root hash of the ChunkStore in both NBS and
httpBatchStore. This name, and the new contract (essentially Flush() +
UpdateRoot()), is a more accurate representation of what's going on.
As for Put(), the former contract claimed to block until the chunk
was durable. That's no longer the case. Indeed, NBS was already not
fulfilling this contract. The new contract reflects this, asserting
that novel chunks aren't persisted until a Flush() or Commit() --
which has replaced UpdateRoot(). Novel chunks are immediately visible
to Get and Has calls, however.
In addition to this larger change, there are also some tweaks to
ValueStore and Database. ValueStore.Flush() no longer takes a hash,
and instead just persists any and all Chunks it has buffered since the
last time anyone called Flush(). Database.Close() used to have some
side effects where it persisted Chunks belonging to any Values the
caller had written -- that is no longer so. Values written to a
Database only become persistent upon a Commit-like operation (Commit,
CommitValue, FastForward, SetHead, or Delete).
```
/******** New ChunkStore interface ********/

type ChunkStore interface {
	ChunkSource
	RootTracker
}

// RootTracker allows querying and management of the root of an entire tree of
// references. The "root" is the single mutable variable in a ChunkStore. It
// can store any hash, but it is typically used by higher layers (such as
// Database) to store a hash to a value that represents the current state and
// entire history of a database.
type RootTracker interface {
	// Rebase brings this RootTracker into sync with the persistent storage's
	// current root.
	Rebase()

	// Root returns the currently cached root value.
	Root() hash.Hash

	// Commit atomically attempts to persist all novel Chunks and update the
	// persisted root hash from last to current. If last doesn't match the
	// root in persistent storage, returns false.
	// TODO: is last now redundant? Maybe this should just try to update from
	// the cached root to current?
	// TODO: Does having a separate RootTracker make sense anymore? BUG 3402
	Commit(current, last hash.Hash) bool
}

// ChunkSource is a place chunks live.
type ChunkSource interface {
	// Get the Chunk for the value of the hash in the store. If the hash is
	// absent from the store, nil is returned.
	Get(h hash.Hash) Chunk

	// GetMany gets the Chunks with |hashes| from the store. On return,
	// |foundChunks| will have been fully sent all chunks which have been
	// found. Any non-present chunks will silently be ignored.
	GetMany(hashes hash.HashSet, foundChunks chan *Chunk)

	// Has returns true iff the value at the address |h| is contained in the
	// source.
	Has(h hash.Hash) bool

	// HasMany returns a new HashSet containing any members of |hashes| that
	// are present in the source.
	HasMany(hashes hash.HashSet) (present hash.HashSet)

	// Put caches c in the ChunkSink. Upon return, c must be visible to
	// subsequent Get and Has calls, but must not be persistent until a call
	// to Flush(). Put may be called concurrently with other calls to Put(),
	// PutMany(), Get(), GetMany(), Has() and HasMany().
	Put(c Chunk)

	// PutMany caches chunks in the ChunkSink. Upon return, all members of
	// chunks must be visible to subsequent Get and Has calls, but must not be
	// persistent until a call to Flush(). PutMany may be called concurrently
	// with other calls to Put(), PutMany(), Get(), GetMany(), Has() and
	// HasMany().
	PutMany(chunks []Chunk)

	// Version returns the NomsVersion with which this ChunkSource is
	// compatible.
	Version() string

	// Flush ensures that, on return, any previously Put chunks are durable.
	// It is not safe to call Flush() concurrently with Put() or PutMany().
	Flush()

	io.Closer
}
```
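Typical use of the new contract then looks like this sketch: Put makes a chunk visible but not durable, and Commit is what persists it along with the root move (store construction and the new root value are illustrative):
```
// Sketch of the new Put/Commit lifecycle.
func writeChunk(cs ChunkStore, c Chunk) bool {
	last := cs.Root() // cached root; Rebase() would refresh it from storage
	cs.Put(c)         // immediately visible to cs.Get/cs.Has, not yet durable
	// Commit persists all novel chunks and atomically moves the root from
	// last to the new value; false means the persisted root didn't match last.
	return cs.Commit(c.Hash(), last)
}
```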
Fixes #2945