From aaa66ea86f370acd9994143b838807437afebb66 Mon Sep 17 00:00:00 2001
From: Rafael Weinstein
Date: Mon, 23 Jan 2017 16:24:43 -0800
Subject: [PATCH] Update README.md

---
 go/nbs/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/go/nbs/README.md b/go/nbs/README.md
index 24bf59b117..effef34548 100644
--- a/go/nbs/README.md
+++ b/go/nbs/README.md
@@ -8,9 +8,9 @@ An experimental storage layer for [noms](https://github.com/attic-labs/noms).
 - Writers need not worry about re-writing duplicate chunks. NBS will efficiently detect and drop (most) duplicates.
 
 # Status
-NBS is more-or-less "alpha". There's still work we want to do, but it basically works. The most obvious things that aren't implemented are Compaction and GC. In particular, there's currently no upper bound on the set of "tables" which comprise a store and now way to reduce the number.
+NBS is more-or-less "alpha". There's still work we want to do, but it basically works. The most obvious things that aren't implemented are Compaction and GC. In particular, there's currently no upper bound on the set of "tables" which comprise a store and no way to reduce the number.
 
-However, there are two full "back-ends", one for storage on a file-system and one for storage in AWS. The later requires a dynamo table and an s3 bucket.
+There are two full "back-ends", one for storage on a file-system and one for storage in AWS. The latter requires a dynamo table and an s3 bucket.
 
 For the file back-end, perf is substantially better than LevelDB for two reasons (1) LDB does quite alot of compaction which burns IO but doesn't benenfit noms at all. (2) NBS locates related chunks together and thus reading data from a NBS store can be done quite alot faster (on a current Macbook Pro, you can read large blobs at 100s of MB/s, whereas LDB tops out at about 75 MB/s).
 