Added --invert argument to indicate column-major order
Added --append argument to append imported data to the current head of the dataset
Added --limit-records argument to import only a limited number of rows
* Modifications to ipfs-chat and ipfs chunkstore
* Change ipfs paths to include the directory where the ipfs repo is stored.
* Rework ipfs-chat to create ipfs chunkstores manually rather than
relying on Spec.ForDataset. This enables creating two chunkstores
(one local and one network) using the same IpfsNode (ipfs repo).
* Create a separate replicate function for the daemon and a mergeMessage
  function for the client, to experiment with slightly different behaviors
  for each.
* Re-organization of code to remove duplication.
The main points are:
* added event loop to process events synchronously (see the sketch after this list)
* more aggressive about skipping messages from other nodes
  that we've already processed
* fixed bug in ipfs chunkstore HasMany()
* Add go-base58 library
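A minimal sketch of the synchronous event-loop pattern mentioned above
(hypothetical names and event kinds; not the actual ipfs-chat code):
```
// Hypothetical sketch of a synchronous event loop; not the actual
// ipfs-chat code. All sources of work (incoming messages, user input,
// periodic sync) are funneled through one channel and handled by a
// single goroutine, so no two events are processed concurrently.
type event struct {
	kind    string // e.g. "message", "input", "sync"
	payload string
}

func runEventLoop(events <-chan event, quit <-chan struct{}) {
	for {
		select {
		case ev := <-events:
			handle(ev) // one event at a time, in arrival order
		case <-quit:
			return
		}
	}
}

func handle(ev event) {
	switch ev.kind {
	case "message":
		// merge a message received from another node into local state
	case "input":
		// commit and publish a message typed by the local user
	case "sync":
		// pull and merge remote heads on a timer
	}
}
```
Funneling every source of work through one channel means a message handler
never races with user input or replication.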
This makes all Noms values except types.Type be backed by a []byte.
The motivation is to reduce the allocations and the work needed when we
read parts of a value (especially prolly trees); a rough sketch of the
idea follows.
Towards #2270
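A rough illustration of the idea, using hypothetical types rather than the
actual Noms encoding: keep the serialized chunk bytes around and decode only
what is requested, instead of materializing the whole value up front.
```
import "encoding/binary"

// Hypothetical sketch, not the real Noms encoding: a list value that wraps
// its serialized bytes and decodes entries on demand.
type rawList struct {
	buf []byte // encoded fixed-width entries, shared with the chunk buffer
}

// get decodes just entry i instead of allocating the whole list.
func (l rawList) get(i int) uint64 {
	off := i * 8
	return binary.BigEndian.Uint64(l.buf[off : off+8])
}
```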
This allows parsing all Noms values from the string representation
used by the human-readable encoding:
```
v, err := nomdl.Parse(vrw, `map {"abc": 42}`)
```
Fixes #1466
Tweaking the main loop that processes list entries to avoid some per-entry
map assignments, lookups, and allocations saves roughly 15%, an overall
savings of about 1 minute on the 6-minute runtime of our test workload
(as run on my laptop). A sketch of the pattern follows.
Towards #3690
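One common shape of that optimization, shown with hypothetical types rather
than the actual Noms loop: keep the running state for the current key in
locals and touch the map only when the key changes.
```
type entry struct {
	kind string
	n    int
}

// sumByKind avoids a map read and a map write on every iteration by caching
// the running total for the current key and flushing it to the map only
// when the key changes.
func sumByKind(entries []entry) map[string]int {
	totals := make(map[string]int)
	var curKind string
	curTotal, started := 0, false
	flush := func() {
		if started {
			totals[curKind] += curTotal
		}
	}
	for _, e := range entries {
		if !started || e.kind != curKind {
			flush()
			curKind, curTotal, started = e.kind, 0, true
		}
		curTotal += e.n
	}
	flush()
	return totals
}
```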
Takes the output of a CSV file imported as a List of Struct and
"inverts" it so that it's now a Struct of Lists.
Example:
```
List<Struct Row {
  Base?: String,
  DOLocationID?: String,
}>
```
becomes
```
Struct Columnar {
  base: List<String>,
  dolocationid: List<String>,
}
```
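Conceptually the inversion is a row-major to column-major transpose. A
plain-Go sketch of the transformation (ordinary maps and slices, not the
Noms types API):
```
// Conceptual sketch only: each row contributes one value to every column's
// slice; columns absent from a row (the optional fields above) are padded
// with "" so all columns stay the same length.
func invert(rows []map[string]string) map[string][]string {
	cols := map[string][]string{}
	for i, row := range rows {
		for name, val := range row {
			if _, ok := cols[name]; !ok {
				cols[name] = make([]string, i) // column first seen now: pad earlier rows
			}
			cols[name] = append(cols[name], val)
		}
		for name, col := range cols {
			if len(col) < i+1 {
				cols[name] = append(col, "") // this row lacked the column
			}
		}
	}
	return cols
}
```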
stretchr has fixed a bug with the -count flag. I could merge those
changes into attic-labs, but it's easier to just use stretchr directly.
We forked stretchr a long time ago so that we wouldn't link the HTTP
testing libraries into the noms binaries (because we were using d.Chk in
production code). The HTTP issue doesn't seem to happen anymore, even
though we're still using d.Chk.
* Add --lowercase option to map column names to lowercase struct field names
By default, each column name maps to a struct field that preserves the original case.
If --lowercase is specified, the resulting struct field names are always lowercase
(e.g. a DOLocationID column becomes a dolocationid field).
Introduce Sloppy, an estimating compression function for snappy, which lets the rolling hash better target a given chunk size after compression.
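Very loosely, the idea could be sketched like this (hypothetical code, not
the actual Sloppy implementation): guess which bytes snappy would turn into
back-references by remembering recent 4-byte sequences, so the chunker can
base its boundary decision on roughly the compressed size rather than the
raw size.
```
// Hypothetical sketch, not the actual Sloppy code: classify each byte as
// "novel" (would likely be emitted as a snappy literal) or not (would
// likely be covered by a copy), by tracking 4-byte sequences already seen
// in the current chunk. A chunker could count only the novel bytes when
// deciding whether it has reached the target post-compression size.
type estimator struct {
	seen map[uint32]bool // 4-byte keys seen so far in this chunk
	prev [4]byte
	n    int
}

func newEstimator() *estimator {
	return &estimator{seen: map[uint32]bool{}}
}

// novel reports whether the 4-byte window ending at b has not been seen before.
func (e *estimator) novel(b byte) bool {
	copy(e.prev[:3], e.prev[1:]) // slide the window (copy handles overlap)
	e.prev[3] = b
	e.n++
	if e.n < 4 {
		return true
	}
	key := uint32(e.prev[0]) | uint32(e.prev[1])<<8 |
		uint32(e.prev[2])<<16 | uint32(e.prev[3])<<24
	if e.seen[key] {
		return false
	}
	e.seen[key] = true
	return true
}
```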