Get rid of the notion of a special 'Empty' Commit value, which means
that a freshly created Dataset or DataStore will have no Head at
all. To allow callers to tolerate this, we provide the MaybeHead()
method, for use when a caller is unsure whether a given
Data{set,Store} has ever been committed to.
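A rough sketch of the intended call pattern (the exact MaybeHead()
signature and the names ds and use() are assumptions, not copied from
the code):

    // If the Data{set,Store} has ever been committed to, MaybeHead()
    // is assumed to return the current Head and true; otherwise false.
    if head, ok := ds.MaybeHead(); ok {
        use(head) // safe to read from head; use() is a placeholder
    } else {
        // Freshly created Data{set,Store}: no Head exists yet.
    }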
To ease the API impact of this change, we've also modified Commit()
to take a types.Value and create a Commit struct that holds that
Value and descends directly from the current Head.
Callers who wish to provide alternate parents can use CommitWithParents().
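Sketched usage of both entry points (signatures assumed from the
description above; the 'ok' result is discussed below):

    // Common case: pass a plain types.Value; Commit() wraps it in a
    // Commit struct whose parent is the current Head.
    ds, ok := ds.Commit(types.NewString("hello"))

    // When different ancestry is needed, build the parent set yourself
    // (parents here is illustrative).
    ds, ok = ds.CommitWithParents(types.NewString("merged"), parents)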
This patch changes the "Head" of a DataStore to be a single Commit,
as opposed to a SetOfCommit. This has several consequences:
1) Commit() will only accept Commits that are descendants of the
current Head.
2) Calls to Commit() can now fail, so the method now has an additional
'ok' return value that callers must check (see the sketch after this
list). Whether ok is true or false, the DataStore struct returned is
the right one to use for subsequent calls to Commit() -- retries or
otherwise.
3) This rolls up the stack, so Dataset.Commit() can now fail as well,
and similar logic applies.
4) sync.SetNewHeads() behaves similarly, since it can now fail as well.
5) Examples now die on Commit() failures.
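The retry pattern that item (2) describes, sketched with assumed
signatures (rebase() is purely illustrative):

    for {
        newDs, ok := ds.Commit(value)
        ds = newDs // right DataStore to use whether or not the commit succeeded
        if ok {
            break
        }
        // Head moved underneath us; recompute value against the new
        // Head and try again.
        value = rebase(ds, value)
    }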
Also, remove the /dataset endpoint from the server. It's deprecated, and
this patch would have required updating it, so delete it instead.
Towards issue #147
Introduces the notion of debug 'expectations', analogous to the
'checks' that we already have. d.Exp provides the same API as d.Chk,
but expectation violations can be caught using d.Try().
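Sketched usage (d.Try() taking a closure and returning the recovered
failure is an assumption about the signature):

    err := d.Try(func() {
        // A d.Chk failure here would be fatal; a d.Exp failure is
        // recovered by d.Try() and surfaced as err instead.
        d.Exp.NoError(doSomething()) // doSomething() is illustrative
    })
    if err != nil {
        // handle the expectation violation
    }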
Toward issue #176
Also, factor out a separate NewDataStoreWithRootTracker() since
the common case is to use the same value for both the ChunkStore
and the RootTracker.
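Sketch of the two constructors (everything here other than
NewDataStoreWithRootTracker() itself is an assumed name):

    cs := chunks.NewMemoryStore()  // any ChunkStore
    ds := datas.NewDataStore(cs)   // common case: cs is also the RootTracker
    ds2 := datas.NewDataStoreWithRootTracker(cs, separateRootTracker)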
Fixes #134
In addition to putting in the 'pull' tool that I forgot to add in my initial PR,
I added an extra unit test to cover a case that we found to be buggy, and
addressed some comments from aa and arv.
1) Switched to io.Copy in CopyChunks (sketched below)
2) Added NewFlagsWithPrefix()
3) Cleaned up some error reporting
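Roughly what the io.Copy change in CopyChunks looks like (the helper
below is illustrative, not the real signature):

    // Stream a chunk's bytes from src into dst instead of buffering the
    // whole chunk in memory first.
    func copyChunkData(dst io.Writer, src io.Reader) error {
        _, err := io.Copy(dst, src)
        return err
    }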
This initial implementation requires that both the "remote" and local
ChunkStores be accessible by the machine running the pull command.
I took an initial pass at splitting up the functions so that, e.g.,
calculating which refs are needed could eventually run on an actual
remote machine, and a chunk-copying routine that fetches data over
the network can be added later.
Towards issue #81