diff --git a/.gitignore b/.gitignore index 3c30fef3..b625d498 100644 --- a/.gitignore +++ b/.gitignore @@ -13,3 +13,4 @@ kmod/trace kmod/vm-out linux-*.tar.gz eggs-integrationtest.* +.go-cache \ No newline at end of file diff --git a/README.md b/README.md index da834c45..16bf85ae 100644 --- a/README.md +++ b/README.md @@ -1,35 +1,31 @@ -# EggsFS +# TernFS -A distributed file system. +A distributed file system. For a high-level description of TernFS, see its [blog post](TODO insert link). This document provides a more bare-bones overview and an introduction to the codebase. ## Goals -The target use case for EggsFS is the kind of machine learning we do at XTX: reading and writing large immutable files. With "immutable" we mean files that do not need modifying after they are first created. With "large" we mean that most of the storage space will be taken up by files bigger than a few MBs. +The target use case for TernFS is the kind of machine learning we do at XTX: reading and writing large immutable files. With "immutable" we mean files that do not need modifying after they are first created. With "large" we mean that most of the storage space will be taken up by files bigger than a few MBs. -We don't expect new directories to be created often, and files (or directories) to be moved between directories often. In terms of numbers, we expect the upper bound for EggsFS to roughly be the upper bound for the data we're planning for a single data center: +We don't expect new directories to be created often, or for files (or directories) to be moved between directories often. In terms of numbers, we expect the upper bound for TernFS to roughly be the upper bound for the data we're planning for a single data center: - 10EB of logical file storage (i.e. if you sum all file sizes = 10EB) - 1 trillion files -- average ~10MB file size - 100 billion directories -- average ~10 files per directory - 100,000 clients + 1 million clients We want to drive the filesystem with commodity hardware and Ethernet networking. We want the system to be robust in various ways: -* Witnessing half-written files should be impossible -- a file is fully written by the writer or not readable by other clients (TODO reference link) +* Witnessing half-written files should be impossible -- a file is fully written by the writer or not readable by other clients * Power loss or similar failure of storage or metadata nodes should not result in a corrupted filesystem (be it metadata or file data corruption) * Corrupted reads due to hard drive bitrot should be exceedingly unlikely - * TODO reference to CRC32C strategy for spans/blocks - * TODO some talking about RocksDB's CRCs () * Data loss should be exceedingly unlikely, unless we suffer a datacenter-wide catastrophic event (fire, flooding, datacenter-wide vibration, etc) - * TODO link to precise storage strategy to make this more precise - * TODO some talking about future multi data center plans * The filesystem should keep working through maintenance or failure of metadata or storage nodes -Moreover, we want the filesystem to be usable as a "normal" filesystem (although _not_ a POSIX compliant filesystem) as opposed to a blob storage with some higher level API a-la AWS S3. This is mostly due to the cost we would face if we had to upgrade all the current users of the compute cluster to a non-file API. +We also want to be able to restore deleted files or directories, using a configurable "permanent deletion" policy.
-Finally, we want to be able to restore deleted files or directories, using a configurable "permanent deletion" policy. +Finally, TernFS can be replicated to multiple data centres to make it resilient to data centre loss, with each data centre storing the whole TernFS data set. ## Components @@ -40,19 +36,19 @@ TODO decorate list below with links drilling down on specific concepts. * **servers** * **shuckle** * 1 logical instance - * `eggsshuckle`, Go binary + * `ternshuckle`, Go binary * state currently persisted through SQLite (1 physical instance), should move to a Galera cluster soon (see #41) * TCP -- both bincode and HTTP - * stores metadata about a specific EggsFS deployment + * stores metadata about a specific TernFS deployment * shard/cdc addresses * block services addresses and storage statistics * latency histograms - * serves web UI (e.g. ) + * serves web UI * **filesystem data** * **metadata** * **shard** * 256 logical instances - * `eggsshard`, C++ binary + * `ternshard`, C++ binary * stores all metadata for the filesystem * file attributes (size, mtime, atime) * directory attributes (mtime) @@ -64,7 +60,7 @@ TODO decorate list below with links drilling down on specific concepts. * communicates with shuckle to fetch block services, register itself, insert statistics * **CDC** * 1 logical instance - * `eggscdc`, C++ binary + * `terncdc`, C++ binary * coordinates actions which span multiple directories * create directory * remove directory @@ -81,7 +77,7 @@ TODO decorate list below with links drilling down on specific concepts. * up to 1 million logical instances * 1 logical instance = 1 disk * 1 physical instance handles ~100 logical instances (since there are ~100 disks per server) - * `eggsblocks`, Go binary (for now, will be rewritten in C++ eventually) + * `ternblocks`, Go binary (for now, will be rewritten in C++ eventually) * stores "blocks", which are blobs of data which contain file contents * each file is split into many blocks stored on many block services (so that if up to 4 block services fail we can always recover files) * single instances are not redundant, the redundancy is handled by spreading files over many instances so that we can recover their contents @@ -91,28 +87,28 @@ TODO decorate list below with links drilling down on specific concepts. * communicates with shuckle to register itself and to update information about free space, number of blocks, etc. * **clients**, these all talk to all of the servers * **cli** - * `eggscli`, Go binary + * `terncli`, Go binary * toolkit to perform various tasks, most notably - * migrating contents of dead block services (`eggscli migrate`) - * moving around blocks so that files are stored efficiently (`eggscli defrag`, currently WIP, see #50) + * migrating contents of dead block services (`terncli migrate`) + * moving around blocks so that files are stored efficiently (`terncli defrag`, currently WIP, see #50) * **kmod** - * `eggsfs.ko`, C Linux kernel module + * `ternfs.ko`, C Linux kernel module * kernel module implementing `mount -t eggsfs ...` * the most fun and pleasant part of the codebase * **FUSE** - * `eggsfuse`, Go FUSE implementation of an eggsfs client + * `ternfuse`, Go FUSE implementation of a TernFS client * much slower but should be almost fully functional (there are some limitations concerning when a file gets flushed) * **daemons**, these also talk to all of the servers * **GC** - * `eggsgc`, Go binary + * `terngc`, Go binary * permanently deletes expired snapshots (i.e.
deleted but not yet purged data) * cleans up all blocks for permanently deleted files * **scrubber** * TODO see #32 * goes around detecting and repairing bitrot * **additional tools** - * `eggsrun`, a tool to quickly spin up an EggsFS instance - * `eggstests`, runs some integration tests + * `ternrun`, a tool to quickly spin up a TernFS instance + * `terntests`, runs some integration tests ## Building @@ -135,13 +131,13 @@ Will run the integration tests as CI would (inside a docker image). You can also To work with the qemu kmod tests you'll first need to download the base Ubuntu image we use for testing: ``` -% wget 'https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img' +% wget -P kmod 'https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img' ``` Then you can run the CI tests in kmod like so: ``` -% ./ci.py --kmod --short --prepare-image=/full/path/to/focal-server-cloudimg-amd64.img --leader-only +% ./ci.py --kmod --short --prepare-image=kmod/focal-server-cloudimg-amd64.img --leader-only ``` The tests redirect dmesg output to `kmod/dmesg`, event tracing output to `kmod/trace`, and the full test log to `kmod/test-out`. @@ -152,18 +148,18 @@ You can also ssh into the qemu which is running the tests with % ssh -p 2223 -i kmod/image-key fmazzol@localhost ``` -Note that the kmod tests are very long (~1hr). Usually when developing the kernel module it's best to use `./kmod/restartsession.sh` to be dropped into qemu, and then run specific tests using `eggstests`. +Note that the kmod tests are very long (~1hr). Usually when developing the kernel module it's best to use `./kmod/restartsession.sh` to be dropped into qemu, and then run specific tests using `terntests`. -However when merging code modifying eggsfs internals it's very important for the kmod tests to pass as well as the normal integration tests. This is due to the fact that the qemu environment is generally very different from the non-qemu env, which means that sometimes it'll surface issues that the non-qemu environment won't. +However when merging code modifying TernFS internals it's very important for the kmod tests to pass as well as the normal integration tests. This is due to the fact that the qemu environment is generally very different from the non-qemu env, which means that sometimes it'll surface issues that the non-qemu environment won't. -## Playing with a local EggsFS instance +## Playing with a local TernFS instance ``` -% cd go/eggsrun +% cd go/ternrun % go run . -data-dir ``` -The above will run all the processes needed to run EggsFS. This includes: +The above will run all the processes needed to run TernFS. This includes: * 256 metadata shards; * 1 cross directory coordinator (CDC) @@ -172,6 +168,8 @@ The above will run all the processes needed to run EggsFS. This includes: A multitude of directories to persist the whole thing will appear in ``. The filesystem will also be mounted using FUSE under `/fuse/mnt`. +TODO add instruction on how to run a multi-datacenter setup. + ## Building the Kernel module ``` @@ -185,7 +183,7 @@ A multitude of directories to persist the whole thing will appear in ` Most of the codebase is understandable by VS Code/LSP: -* Code in `go/` just works out of the box with the [Go extension](https://code.visualstudio.com/docs/languages/go). I (fmazzol) open a separate VS Code window which specifically has the `eggsfs/go` directory open, since the Go extension doesn't seem to like working from a subdirectory. 
+* Code in `go/` just works out of the box with the [Go extension](https://code.visualstudio.com/docs/languages/go). I (fmazzol) open a separate VS Code window which specifically has the `ternfs/go` directory open, since the Go extension doesn't seem to like working from a subdirectory. * Code in `cpp/`: - Disable existing C++ integrations for VS Code (I don't remember which exact C++ extension caused me trouble -- something by Microsoft itself). - Install the [clangd extension](https://marketplace.visualstudio.com/items?itemName=llvm-vs-code-extensions.vscode-clangd). @@ -202,3 +200,7 @@ Most of the codebase is understandable by VS Code/LSP: - [Build the module](#building-the-kernel-module). - Generate `compile_commands.json` with `./kmod/gen_compile_commands.sh`. - New files should work automatically, but if things stop working, just re-build and re-generate `compile_commands.json`. + +## A note on naming + +TernFS was originally called EggsFS internally. This name quickly proved to be very poor due to the phonetic similarity to XFS, another notable filesystem. Therefore the filesystem was renamed to TernFS before open sourcing. However, the old name lingers on in certain areas of the system that would have been hard to change, such as metric names. \ No newline at end of file diff --git a/build.sh b/build.sh index 718884d6..fc78ec54 100755 --- a/build.sh +++ b/build.sh @@ -19,18 +19,18 @@ ${PWD}/go/build.py # copy binaries binaries=( - cpp/build/$build_variant/shard/eggsshard - cpp/build/$build_variant/dbtools/eggsdbtools - cpp/build/$build_variant/cdc/eggscdc - cpp/build/$build_variant/ktools/eggsktools - go/eggsshuckle/eggsshuckle - go/eggsrun/eggsrun - go/eggsblocks/eggsblocks - go/eggsfuse/eggsfuse - go/eggscli/eggscli - go/eggsgc/eggsgc - go/eggstests/eggstests - go/eggsshuckleproxy/eggsshuckleproxy + cpp/build/$build_variant/shard/ternshard + cpp/build/$build_variant/dbtools/terndbtools + cpp/build/$build_variant/cdc/terncdc + cpp/build/$build_variant/ktools/ternktools + go/ternshuckle/ternshuckle + go/ternrun/ternrun + go/ternblocks/ternblocks + go/ternfuse/ternfuse + go/terncli/terncli + go/terngc/terngc + go/terntests/terntests + go/ternshuckleproxy/ternshuckleproxy ) for binary in "${binaries[@]}"; do diff --git a/cpp/CMakeLists.txt b/cpp/CMakeLists.txt index 9e84d164..69b72a60 100644 --- a/cpp/CMakeLists.txt +++ b/cpp/CMakeLists.txt @@ -5,7 +5,7 @@ include(ExternalProject) set(CMAKE_C_COMPILER clang) set(CMAKE_CXX_COMPILER clang++) -project(eggsfs) +project(ternfs) # Yes, this means that it won't work for multi-configuration stuff, which is fine for now.
if(NOT CMAKE_BUILD_TYPE) @@ -34,10 +34,10 @@ add_compile_options("$<$>:-march=skylake;-mgfni>") # performance/debug stuff add_compile_options("$<$>:-O3>") -add_compile_options("$<$:-Og;-DEGGS_DEBUG>") +add_compile_options("$<$:-Og;-DTERN_DEBUG>") # We build the release build statically in Alpine -add_compile_options("$<$:-DEGGS_ALPINE>") +add_compile_options("$<$:-DTERN_ALPINE>") add_link_options("$<$:-static>") add_link_options("$<$>:-no-pie>") @@ -49,7 +49,7 @@ add_link_options("$<$:${SANITIZE_OPTIONS}>") # we only use jemalloc in release builds, alpine doesn't seem to # like jemalloc very much, and sanitizer etc works better without it if ("${CMAKE_BUILD_TYPE}" STREQUAL "release") - set(EGGSFS_JEMALLOC_LIBS "jemalloc") + set(TERNFS_JEMALLOC_LIBS "jemalloc") endif() include(thirdparty.cmake) diff --git a/cpp/cdc/CDC.cpp b/cpp/cdc/CDC.cpp index a0f64a75..d0aa1305 100644 --- a/cpp/cdc/CDC.cpp +++ b/cpp/cdc/CDC.cpp @@ -57,7 +57,7 @@ struct CDCShared { struct InFlightShardRequest { CDCTxnId txnId; // the txn id that requested this shard request - EggsTime sentAt; + TernTime sentAt; ShardId shid; }; @@ -66,7 +66,7 @@ struct InFlightCDCRequest { uint64_t lastSentRequestId; // if hasClient=false, the following is all garbage. uint64_t cdcRequestId; - EggsTime receivedAt; + TernTime receivedAt; IpPort clientAddr; CDCMessageKind kind; int sockIx; @@ -77,8 +77,8 @@ struct InFlightCDCRequest { // MISMATCHING_CREATION_TIME can happen if we generate a timeout // in CDC.cpp, but the edge was actually created, and when we // try to recreate it we get a bad creation time. -static bool innocuousShardError(EggsError err) { - return err == EggsError::NAME_NOT_FOUND || err == EggsError::EDGE_NOT_FOUND || err == EggsError::DIRECTORY_NOT_EMPTY || err == EggsError::MISMATCHING_CREATION_TIME; +static bool innocuousShardError(TernError err) { + return err == TernError::NAME_NOT_FOUND || err == TernError::EDGE_NOT_FOUND || err == TernError::DIRECTORY_NOT_EMPTY || err == TernError::MISMATCHING_CREATION_TIME; } // These can happen but should be rare. @@ -90,8 +90,8 @@ static bool innocuousShardError(EggsError err) { // after first transaction finished but before it got the response (or response got lost). // This will create a new transaction which can race with gc fully cleaning up the directory // (which can happen if it was empty). 
-static bool rareInnocuousShardError(EggsError err) { - return err == EggsError::DIRECTORY_HAS_OWNER || err == EggsError::DIRECTORY_NOT_FOUND; +static bool rareInnocuousShardError(TernError err) { + return err == TernError::DIRECTORY_HAS_OWNER || err == TernError::DIRECTORY_NOT_FOUND; } struct InFlightCDCRequestKey { @@ -123,7 +123,7 @@ private: using RequestsMap = std::unordered_map; RequestsMap _reqs; - std::map _pq; + std::map _pq; public: @@ -136,7 +136,7 @@ public: return *this; } - TimeIterator(const RequestsMap& reqs, const std::map& pq) : _reqs(reqs), _pq(pq), _it(_pq.begin()) {} + TimeIterator(const RequestsMap& reqs, const std::map& pq) : _reqs(reqs), _pq(pq), _it(_pq.begin()) {} bool operator==(const TimeIterator& other) const { return _it == other._it; } @@ -147,10 +147,10 @@ public: return TimeIterator{_reqs, _pq, _pq.end()}; } private: - TimeIterator(const RequestsMap& reqs, const std::map& pq, std::map::const_iterator it) : _reqs(reqs), _pq(pq), _it(it) {} + TimeIterator(const RequestsMap& reqs, const std::map& pq, std::map::const_iterator it) : _reqs(reqs), _pq(pq), _it(it) {} const RequestsMap& _reqs; - const std::map& _pq; - std::map::const_iterator _it; + const std::map& _pq; + std::map::const_iterator _it; }; void clear() { @@ -204,7 +204,7 @@ public: struct CDCReqInfo { uint64_t reqId; IpPort clientAddr; - EggsTime receivedAt; + TernTime receivedAt; int sockIx; }; @@ -305,7 +305,7 @@ public: // Timeout ShardRequests { - auto now = eggsNow(); + auto now = ternNow(); auto oldest = _inFlightShardReqs.oldest(); while (_updateSize() < MAX_UPDATE_SIZE && oldest != oldest.end()) { @@ -316,7 +316,7 @@ public: auto resp = _prepareCDCShardResp(requestId); ALWAYS_ASSERT(resp != nullptr); // must be there, we've just timed it out resp->checkPoint = 0; - resp->resp.setError() = EggsError::TIMEOUT; + resp->resp.setError() = TernError::TIMEOUT; _recordCDCShardResp(requestId, *resp); ++oldest; } @@ -375,7 +375,7 @@ public: if (_logsDB.isLeader()) { auto err = _logsDB.appendEntries(entries); - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); // we need to drop information about entries which might have been dropped due to append window being full bool foundLastInserted = false; for (auto it = entries.rbegin(); it != entries.rend(); ++it) { @@ -559,12 +559,12 @@ private: } void _recordCDCShardResp(uint64_t requestId, CDCShardResp& resp) { - auto err = resp.resp.kind() != ShardMessageKind::ERROR ? EggsError::NO_ERROR : resp.resp.getError(); + auto err = resp.resp.kind() != ShardMessageKind::ERROR ? 
TernError::NO_ERROR : resp.resp.getError(); _shared.shardErrors.add(err); - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { LOG_DEBUG(_env, "successfully parsed shard response %s with kind %s, process soon", requestId, resp.resp.kind()); return; - } else if (err == EggsError::TIMEOUT) { + } else if (err == TernError::TIMEOUT) { LOG_DEBUG(_env, "txn %s shard req %s, timed out", resp.txnId, requestId); } else if (innocuousShardError(err)) { LOG_DEBUG(_env, "txn %s shard req %s, finished with innocuous error %s", resp.txnId, requestId, err); @@ -665,7 +665,7 @@ private: } LOG_DEBUG(_env, "received request id %s, kind %s", cdcMsg.id, cdcMsg.body.kind()); - auto receivedAt = eggsNow(); + auto receivedAt = ternNow(); if (unlikely(cdcMsg.body.kind() == CDCMessageKind::CDC_SNAPSHOT)) { _processCDCSnapshotMessage(cdcMsg, msg); @@ -730,7 +730,7 @@ private: auto err = _shared.sharedDb.snapshot(_basePath +"/snapshot-" + std::to_string(msg.body.getCdcSnapshot().snapshotId)); CDCRespMsg respMsg; respMsg.id = msg.id; - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { respMsg.body.setCdcSnapshot(); } else { respMsg.body.setError() = err; @@ -754,8 +754,8 @@ private: // we need to send the response back to the client auto inFlight = _inFlightTxns.find(txnId); if (inFlight->second.hasClient) { - _shared.timingsTotal[(int)inFlight->second.kind].add(eggsNow() - inFlight->second.receivedAt); - _shared.errors[(int)inFlight->second.kind].add(resp.kind() != CDCMessageKind::ERROR ? EggsError::NO_ERROR : resp.getError()); + _shared.timingsTotal[(int)inFlight->second.kind].add(ternNow() - inFlight->second.receivedAt); + _shared.errors[(int)inFlight->second.kind].add(resp.kind() != CDCMessageKind::ERROR ? TernError::NO_ERROR : resp.getError()); CDCRespMsg respMsg; respMsg.id = inFlight->second.cdcRequestId; respMsg.body = std::move(resp); @@ -803,7 +803,7 @@ private: // Record the in-flight req _inFlightShardReqs.insert(shardReqMsg.id, InFlightShardRequest{ .txnId = txnId, - .sentAt = eggsNow(), + .sentAt = ternNow(), .shid = shardReq.shid, }); inFlightTxn->second.lastSentRequestId = shardReqMsg.id; @@ -814,7 +814,7 @@ private: if (unlikely(respMsg.body.kind() == CDCMessageKind::ERROR)) { auto err = respMsg.body.getError(); LOG_DEBUG(_env, "will send error %s to %s", err, clientAddr); - if (err != EggsError::DIRECTORY_NOT_EMPTY && err != EggsError::EDGE_NOT_FOUND && err != EggsError::MISMATCHING_CREATION_TIME) { + if (err != TernError::DIRECTORY_NOT_EMPTY && err != TernError::EDGE_NOT_FOUND && err != TernError::MISMATCHING_CREATION_TIME) { RAISE_ALERT(_env, "request %s of kind %s from client %s failed with err %s", respMsg.id, reqKind, clientAddr, err); } else { LOG_INFO(_env, "request %s of kind %s from client %s failed with err %s", respMsg.id, reqKind, clientAddr, err); @@ -869,7 +869,7 @@ public: } } if (badShard) { - EggsTime successfulIterationAt = 0; + TernTime successfulIterationAt = 0; _env.updateAlert(_alert, "Shard info is still not present in shuckle, will keep trying"); return false; } @@ -956,7 +956,7 @@ public: } }; -static void logsDBstatsToMetrics(struct MetricsBuilder& metricsBuilder, const LogsDBStats& stats, ReplicaId replicaId, EggsTime now) { +static void logsDBstatsToMetrics(struct MetricsBuilder& metricsBuilder, const LogsDBStats& stats, ReplicaId replicaId, TernTime now) { { metricsBuilder.measurement("eggsfs_cdc_logsdb"); metricsBuilder.tag("replica", replicaId); @@ -1089,7 +1089,7 @@ public: } else { _env.clearAlert(_updateSizeAlert); } - auto now 
= eggsNow(); + auto now = ternNow(); for (CDCMessageKind kind : allCDCMessageKind) { const ErrorCount& errs = _shared.errors[(int)kind]; for (int i = 0; i < errs.count.size(); i++) { @@ -1101,7 +1101,7 @@ public: if (i == 0) { _metricsBuilder.tag("error", "NO_ERROR"); } else { - _metricsBuilder.tag("error", (EggsError)i); + _metricsBuilder.tag("error", (TernError)i); } _metricsBuilder.fieldU64("count", count); _metricsBuilder.timestamp(now); @@ -1127,7 +1127,7 @@ public: if (i == 0) { _metricsBuilder.tag("error", "NO_ERROR"); } else { - _metricsBuilder.tag("error", (EggsError)i); + _metricsBuilder.tag("error", (TernError)i); } _metricsBuilder.fieldU64("count", count); _metricsBuilder.timestamp(now); diff --git a/cpp/cdc/CDCDB.cpp b/cpp/cdc/CDCDB.cpp index d1e684b3..4a93a40d 100644 --- a/cpp/cdc/CDCDB.cpp +++ b/cpp/cdc/CDCDB.cpp @@ -122,10 +122,10 @@ std::ostream& operator<<(std::ostream& out, const CDCLogEntry& x) { return out; } -inline bool createCurrentLockedEdgeRetry(EggsError err) { +inline bool createCurrentLockedEdgeRetry(TernError err) { return - err == EggsError::TIMEOUT || err == EggsError::MTIME_IS_TOO_RECENT || - err == EggsError::MORE_RECENT_SNAPSHOT_EDGE || err == EggsError::MORE_RECENT_CURRENT_EDGE; + err == TernError::TIMEOUT || err == TernError::MTIME_IS_TOO_RECENT || + err == TernError::MORE_RECENT_SNAPSHOT_EDGE || err == TernError::MORE_RECENT_CURRENT_EDGE; } static constexpr InodeId MOVE_DIRECTORY_LOCK = InodeId::FromU64Unchecked(1ull<<63); @@ -204,9 +204,9 @@ static DirectoriesNeedingLock directoriesNeedingLock(const CDCReqContainer& req) toLock.add(req.getCrossShardHardUnlinkFile().ownerId); break; case CDCMessageKind::ERROR: - throw EGGS_EXCEPTION("bad req type error"); + throw TERN_EXCEPTION("bad req type error"); default: - throw EGGS_EXCEPTION("bad req type %s", (uint8_t)req.kind()); + throw TERN_EXCEPTION("bad req type %s", (uint8_t)req.kind()); } return toLock; } @@ -253,9 +253,9 @@ struct StateMachineEnv { return finished.second; } - void finishWithError(EggsError err) { + void finishWithError(TernError err) { this->finished = true; - ALWAYS_ASSERT(err != EggsError::NO_ERROR); + ALWAYS_ASSERT(err != TernError::NO_ERROR); auto& errored = cdcStep.finishedTxns.emplace_back(); errored.first = txnId; errored.second.setError() = err; @@ -308,7 +308,7 @@ struct MakeDirectoryStateMachine { case MAKE_DIRECTORY_CREATE_LOCKED_EDGE: createLockedEdge(); break; case MAKE_DIRECTORY_UNLOCK_EDGE: unlockEdge(); break; case MAKE_DIRECTORY_ROLLBACK: rollback(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { @@ -318,7 +318,7 @@ struct MakeDirectoryStateMachine { case MAKE_DIRECTORY_CREATE_LOCKED_EDGE: afterCreateLockedEdge(*resp); break; case MAKE_DIRECTORY_UNLOCK_EDGE: afterUnlockEdge(*resp); break; case MAKE_DIRECTORY_ROLLBACK: afterRollback(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } @@ -335,16 +335,16 @@ struct MakeDirectoryStateMachine { } void afterLookup(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lookup(true); // retry - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { env.finishWithError(err); - } else if (err == EggsError::NAME_NOT_FOUND) { + } else if (err == TernError::NAME_NOT_FOUND) { // normal case, let's proceed createDirectoryInode(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); const auto& lookupResp = resp.getLookup(); if (lookupResp.targetId.type() == InodeType::DIRECTORY) { // we're good already @@ -352,7 +352,7 @@ struct MakeDirectoryStateMachine { cdcResp.creationTime = lookupResp.creationTime; cdcResp.id = lookupResp.targetId; } else { - env.finishWithError(EggsError::CANNOT_OVERRIDE_NAME); + env.finishWithError(TernError::CANNOT_OVERRIDE_NAME); } } } @@ -364,12 +364,12 @@ struct MakeDirectoryStateMachine { } void afterCreateDirectoryInode(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { // Try again -- note that the call to create directory inode is idempotent. createDirectoryInode(true); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); lookupOldCreationTime(); } } @@ -384,16 +384,16 @@ struct MakeDirectoryStateMachine { } void afterLookupOldCreationTime(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lookupOldCreationTime(true); // retry - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { // the directory we looked into doesn't even exist anymore -- // we've failed hard and we need to remove the inode. state.setExitError(err); rollback(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); // there might be no existing edge const auto& fullReadDir = resp.getFullReadDir(); ALWAYS_ASSERT(fullReadDir.results.els.size() < 2); // we have limit=1 @@ -416,15 +416,15 @@ struct MakeDirectoryStateMachine { } void afterCreateLockedEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; if (createCurrentLockedEdgeRetry(err)) { createLockedEdge(true); // try again - } else if (err == EggsError::CANNOT_OVERRIDE_NAME) { + } else if (err == TernError::CANNOT_OVERRIDE_NAME) { // this happens when a file gets created between when we looked // up whether there was something else and now. state.setExitError(err); rollback(); - } else if (err == EggsError::MISMATCHING_CREATION_TIME) { + } else if (err == TernError::MISMATCHING_CREATION_TIME) { // lookup the old creation time again lookupOldCreationTime(); } else { @@ -433,7 +433,7 @@ struct MakeDirectoryStateMachine { // // We also cannot get MISMATCHING_TARGET since we are the only one // creating locked edges, and transactions execute serially. 
- ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); state.setCreationTime(resp.getCreateLockedCurrentEdge().creationTime); unlockEdge(); } @@ -449,12 +449,12 @@ struct MakeDirectoryStateMachine { } void afterUnlockEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT || err == EggsError::MTIME_IS_TOO_RECENT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT || err == TernError::MTIME_IS_TOO_RECENT) { // retry unlockEdge(true); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); // We're done, record the parent relationship and finish { auto k = InodeIdKey::Static(state.dirId()); @@ -477,11 +477,11 @@ struct MakeDirectoryStateMachine { } void afterRollback(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { rollback(true); // retry } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); env.finishWithError(state.exitError()); } } @@ -512,12 +512,12 @@ struct HardUnlinkDirectoryStateMachine { if (unlikely(resp == nullptr)) { // we're resuming with no response switch (env.txnStep) { case HARD_UNLINK_DIRECTORY_REMOVE_INODE: removeInode(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { case HARD_UNLINK_DIRECTORY_REMOVE_INODE: afterRemoveInode(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } @@ -528,15 +528,15 @@ struct HardUnlinkDirectoryStateMachine { } void afterRemoveInode(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { removeInode(true); // try again } else if ( - err == EggsError::DIRECTORY_NOT_FOUND || err == EggsError::DIRECTORY_HAS_OWNER || err == EggsError::DIRECTORY_NOT_EMPTY + err == TernError::DIRECTORY_NOT_FOUND || err == TernError::DIRECTORY_HAS_OWNER || err == TernError::DIRECTORY_NOT_EMPTY ) { env.finishWithError(err); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); env.finish().setHardUnlinkDirectory(); } } @@ -582,7 +582,7 @@ struct RenameFileStateMachine { case RENAME_FILE_UNLOCK_NEW_EDGE: unlockNewEdge(); break; case RENAME_FILE_UNLOCK_OLD_EDGE: unlockOldEdge(); break; case RENAME_FILE_ROLLBACK: rollback(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { @@ -592,7 +592,7 @@ struct RenameFileStateMachine { case RENAME_FILE_UNLOCK_NEW_EDGE: afterUnlockNewEdge(*resp); break; case RENAME_FILE_UNLOCK_OLD_EDGE: afterUnlockOldEdge(*resp); break; case RENAME_FILE_ROLLBACK: afterRollback(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } @@ -601,9 +601,9 @@ struct RenameFileStateMachine { // We need this explicit check here because moving directories is more complicated, // and therefore we do it in another transaction type entirely. if (req.targetId.type() == InodeType::DIRECTORY) { - env.finishWithError(EggsError::TYPE_IS_NOT_DIRECTORY); + env.finishWithError(TernError::TYPE_IS_NOT_DIRECTORY); } else if (req.oldOwnerId == req.newOwnerId) { - env.finishWithError(EggsError::SAME_DIRECTORIES); + env.finishWithError(TernError::SAME_DIRECTORIES); } else { lockOldEdge(); } @@ -618,19 +618,19 @@ struct RenameFileStateMachine { } void afterLockOldEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lockOldEdge(true); // retry } else if ( - err == EggsError::EDGE_NOT_FOUND || err == EggsError::MISMATCHING_CREATION_TIME || err == EggsError::DIRECTORY_NOT_FOUND + err == TernError::EDGE_NOT_FOUND || err == TernError::MISMATCHING_CREATION_TIME || err == TernError::DIRECTORY_NOT_FOUND ) { // We failed hard and we have nothing to roll back - if (err == EggsError::DIRECTORY_NOT_FOUND) { - err = EggsError::OLD_DIRECTORY_NOT_FOUND; + if (err == TernError::DIRECTORY_NOT_FOUND) { + err = TernError::OLD_DIRECTORY_NOT_FOUND; } env.finishWithError(err); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); lookupOldCreationTime(); } } @@ -645,16 +645,16 @@ struct RenameFileStateMachine { } void afterLookupOldCreationTime(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lookupOldCreationTime(true); // retry - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { // we've failed hard and we need to unlock the old edge. 
- err = EggsError::NEW_DIRECTORY_NOT_FOUND; + err = TernError::NEW_DIRECTORY_NOT_FOUND; state.setExitError(err); rollback(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); // there might be no existing edge const auto& fullReadDir = resp.getFullReadDir(); ALWAYS_ASSERT(fullReadDir.results.els.size() < 2); // we have limit=1 @@ -677,13 +677,13 @@ struct RenameFileStateMachine { } void afterCreateNewLockedEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; if (createCurrentLockedEdgeRetry(err)) { createNewLockedEdge(true); // retry - } else if (err == EggsError::MISMATCHING_CREATION_TIME) { + } else if (err == TernError::MISMATCHING_CREATION_TIME) { // we need to lookup the creation time again. lookupOldCreationTime(); - } else if (err == EggsError::CANNOT_OVERRIDE_NAME) { + } else if (err == TernError::CANNOT_OVERRIDE_NAME) { // we failed hard and we need to rollback state.setExitError(err); rollback(); @@ -703,11 +703,11 @@ struct RenameFileStateMachine { } void afterUnlockNewEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { unlockNewEdge(true); // retry } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); unlockOldEdge(); } } @@ -723,14 +723,14 @@ struct RenameFileStateMachine { } void afterUnlockOldEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { unlockOldEdge(true); // retry } else { // This can only be because of repeated calls from here: we have the edge locked, // and only the CDC does changes. // TODO it would be cleaner to verify this with a lookup - ALWAYS_ASSERT(err == EggsError::NO_ERROR || err == EggsError::EDGE_NOT_FOUND); + ALWAYS_ASSERT(err == TernError::NO_ERROR || err == TernError::EDGE_NOT_FOUND); // we're finally done auto& resp = env.finish().setRenameFile(); resp.creationTime = state.newCreationTime(); @@ -747,11 +747,11 @@ struct RenameFileStateMachine { } void afterRollback(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { rollback(true); // retry } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); env.finishWithError(state.exitError()); } } @@ -798,7 +798,7 @@ struct SoftUnlinkDirectoryStateMachine { case SOFT_UNLINK_DIRECTORY_REMOVE_OWNER: stat(); break; case SOFT_UNLINK_DIRECTORY_UNLOCK_EDGE: unlockEdge(); break; case SOFT_UNLINK_DIRECTORY_ROLLBACK: rollback(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { @@ -807,14 +807,14 @@ struct SoftUnlinkDirectoryStateMachine { case SOFT_UNLINK_DIRECTORY_REMOVE_OWNER: afterRemoveOwner(*resp); break; case SOFT_UNLINK_DIRECTORY_UNLOCK_EDGE: afterUnlockEdge(*resp); break; case SOFT_UNLINK_DIRECTORY_ROLLBACK: afterRollback(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } void start() { if (req.targetId.type() != InodeType::DIRECTORY) { - env.finishWithError(EggsError::TYPE_IS_NOT_DIRECTORY); + env.finishWithError(TernError::TYPE_IS_NOT_DIRECTORY); } else { lockEdge(); } @@ -829,14 +829,14 @@ struct SoftUnlinkDirectoryStateMachine { } void afterLockEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lockEdge(true); - } else if (err == EggsError::MISMATCHING_CREATION_TIME || err == EggsError::EDGE_NOT_FOUND || err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::MISMATCHING_CREATION_TIME || err == TernError::EDGE_NOT_FOUND || err == TernError::DIRECTORY_NOT_FOUND) { LOG_INFO(env.env, "failed locking edge in soft unlink for req: %s with err: %s", req, err); env.finishWithError(err); // no rollback to be done } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); state.setStatDirId(req.targetId); stat(); } @@ -848,11 +848,11 @@ struct SoftUnlinkDirectoryStateMachine { } void afterStat(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { stat(true); // retry } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); const auto& statResp = resp.getStatDirectory(); // insert tags for (const auto& newEntry : statResp.info.entries.els) { @@ -881,15 +881,15 @@ struct SoftUnlinkDirectoryStateMachine { } void afterRemoveOwner(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { // we don't want to keep the dir info around start again from the last stat stat(); - } else if (err == EggsError::DIRECTORY_NOT_EMPTY) { + } else if (err == TernError::DIRECTORY_NOT_EMPTY) { state.setExitError(err); rollback(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR, "Unexpected error when removing owner, ownerId=%s name=%s creationTime=%s targetId=%s: %s", req.ownerId, GoLangQuotedStringFmt(req.name.ref().data(), req.name.ref().size()), req.creationTime, req.targetId, err); + ALWAYS_ASSERT(err == TernError::NO_ERROR, "Unexpected error when removing owner, ownerId=%s name=%s creationTime=%s targetId=%s: %s", req.ownerId, GoLangQuotedStringFmt(req.name.ref().data(), req.name.ref().size()), req.creationTime, req.targetId, err); unlockEdge(); } } @@ -907,14 +907,14 @@ struct SoftUnlinkDirectoryStateMachine { } void afterUnlockEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { unlockEdge(true); } else { // This can only be because of repeated calls from here: we have the edge locked, // and only the CDC does changes. // TODO it would be cleaner to verify this with a lookup - ALWAYS_ASSERT(err == EggsError::NO_ERROR || err == EggsError::EDGE_NOT_FOUND); + ALWAYS_ASSERT(err == TernError::NO_ERROR || err == TernError::EDGE_NOT_FOUND); auto& cdcResp = env.finish().setSoftUnlinkDirectory(); // Update parent map { @@ -934,14 +934,14 @@ struct SoftUnlinkDirectoryStateMachine { } void afterRollback(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { rollback(true); } else { // This can only be because of repeated calls from here: we have the edge locked, // and only the CDC does changes. 
// TODO it would be cleaner to verify this with a lookup - ALWAYS_ASSERT(err == EggsError::NO_ERROR || err == EggsError::EDGE_NOT_FOUND); + ALWAYS_ASSERT(err == TernError::NO_ERROR || err == TernError::EDGE_NOT_FOUND); env.finishWithError(state.exitError()); } } @@ -992,7 +992,7 @@ struct RenameDirectoryStateMachine { case RENAME_DIRECTORY_UNLOCK_OLD_EDGE: unlockOldEdge(); break; case RENAME_DIRECTORY_SET_OWNER: setOwner(); break; case RENAME_DIRECTORY_ROLLBACK: rollback(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { @@ -1003,7 +1003,7 @@ struct RenameDirectoryStateMachine { case RENAME_DIRECTORY_UNLOCK_OLD_EDGE: afterUnlockOldEdge(*resp); break; case RENAME_DIRECTORY_SET_OWNER: afterSetOwner(*resp); break; case RENAME_DIRECTORY_ROLLBACK: afterRollback(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } @@ -1037,12 +1037,12 @@ struct RenameDirectoryStateMachine { void start() { if (req.targetId.type() != InodeType::DIRECTORY) { - env.finishWithError(EggsError::TYPE_IS_NOT_DIRECTORY); + env.finishWithError(TernError::TYPE_IS_NOT_DIRECTORY); } else if (req.oldOwnerId == req.newOwnerId) { - env.finishWithError(EggsError::SAME_DIRECTORIES); + env.finishWithError(TernError::SAME_DIRECTORIES); } else if (!loopCheck()) { // First, check if we'd create a loop - env.finishWithError(EggsError::LOOP_IN_DIRECTORY_RENAME); + env.finishWithError(TernError::LOOP_IN_DIRECTORY_RENAME); } else { // Now, actually start by locking the old edge lockOldEdge(); @@ -1058,18 +1058,18 @@ struct RenameDirectoryStateMachine { } void afterLockOldEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lockOldEdge(true); // retry } else if ( - err == EggsError::DIRECTORY_NOT_FOUND || err == EggsError::EDGE_NOT_FOUND || err == EggsError::MISMATCHING_CREATION_TIME + err == TernError::DIRECTORY_NOT_FOUND || err == TernError::EDGE_NOT_FOUND || err == TernError::MISMATCHING_CREATION_TIME ) { - if (err == EggsError::DIRECTORY_NOT_FOUND) { - err = EggsError::OLD_DIRECTORY_NOT_FOUND; + if (err == TernError::DIRECTORY_NOT_FOUND) { + err = TernError::OLD_DIRECTORY_NOT_FOUND; } env.finishWithError(err); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); lookupOldCreationTime(); } } @@ -1084,15 +1084,15 @@ struct RenameDirectoryStateMachine { } void afterLookupOldCreationTime(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { lookupOldCreationTime(true); // retry - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { // we've failed hard and we need to unlock the old edge. 
state.setExitError(err); rollback(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); // there might be no existing edge const auto& fullReadDir = resp.getFullReadDir(); ALWAYS_ASSERT(fullReadDir.results.els.size() < 2); // we have limit=1 @@ -1115,17 +1115,17 @@ struct RenameDirectoryStateMachine { } void afterCreateLockedEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; if (createCurrentLockedEdgeRetry(err)) { createLockedNewEdge(true); - } else if (err == EggsError::MISMATCHING_CREATION_TIME) { + } else if (err == TernError::MISMATCHING_CREATION_TIME) { // we need to lookup the creation time again. lookupOldCreationTime(); - } else if (err == EggsError::CANNOT_OVERRIDE_NAME) { + } else if (err == TernError::CANNOT_OVERRIDE_NAME) { state.setExitError(err); rollback(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); state.setNewCreationTime(resp.getCreateLockedCurrentEdge().creationTime); unlockNewEdge(); } @@ -1141,16 +1141,16 @@ struct RenameDirectoryStateMachine { } void afterUnlockNewEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { unlockNewEdge(true); - } else if (err == EggsError::EDGE_NOT_FOUND) { + } else if (err == TernError::EDGE_NOT_FOUND) { // This can only be because of repeated calls from here: we have the edge locked, // and only the CDC does changes. // TODO it would be cleaner to verify this with a lookup unlockOldEdge(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); unlockOldEdge(); } } @@ -1165,16 +1165,16 @@ struct RenameDirectoryStateMachine { } void afterUnlockOldEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { unlockOldEdge(true); - } else if (err == EggsError::EDGE_NOT_FOUND) { + } else if (err == TernError::EDGE_NOT_FOUND) { // This can only be because of repeated calls from here: we have the edge locked, // and only the CDC does changes. // TODO it would be cleaner to verify this with a lookup setOwner(); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); setOwner(); } } @@ -1186,11 +1186,11 @@ struct RenameDirectoryStateMachine { } void afterSetOwner(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { setOwner(true); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); auto& resp = env.finish().setRenameDirectory(); resp.creationTime = state.newCreationTime(); // update cache @@ -1212,8 +1212,8 @@ struct RenameDirectoryStateMachine { } void afterRollback(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { rollback(true); } else { env.finishWithError(state.exitError()); @@ -1250,22 +1250,22 @@ struct CrossShardHardUnlinkFileStateMachine { switch (env.txnStep) { case CROSS_SHARD_HARD_UNLINK_FILE_REMOVE_EDGE: removeEdge(); break; case CROSS_SHARD_HARD_UNLINK_FILE_MAKE_TRANSIENT: makeTransient(); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } else { switch (env.txnStep) { case CROSS_SHARD_HARD_UNLINK_FILE_REMOVE_EDGE: afterRemoveEdge(*resp); break; case CROSS_SHARD_HARD_UNLINK_FILE_MAKE_TRANSIENT: afterMakeTransient(*resp); break; - default: throw EGGS_EXCEPTION("bad step %s", env.txnStep); + default: throw TERN_EXCEPTION("bad step %s", env.txnStep); } } } void start() { if (req.ownerId.shard() == req.targetId.shard()) { - env.finishWithError(EggsError::SAME_SHARD); + env.finishWithError(TernError::SAME_SHARD); } else if (req.targetId.type() == InodeType::DIRECTORY) { - env.finishWithError(EggsError::TYPE_IS_DIRECTORY); + env.finishWithError(TernError::TYPE_IS_DIRECTORY); } else { removeEdge(); } @@ -1280,13 +1280,13 @@ struct CrossShardHardUnlinkFileStateMachine { } void afterRemoveEdge(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT || err == EggsError::MTIME_IS_TOO_RECENT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT || err == TernError::MTIME_IS_TOO_RECENT) { removeEdge(true); - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { env.finishWithError(err); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); makeTransient(); } } @@ -1298,11 +1298,11 @@ struct CrossShardHardUnlinkFileStateMachine { } void afterMakeTransient(const ShardRespContainer& resp) { - auto err = resp.kind() == ShardMessageKind::ERROR ? resp.getError() : EggsError::NO_ERROR; - if (err == EggsError::TIMEOUT) { + auto err = resp.kind() == ShardMessageKind::ERROR ? 
resp.getError() : TernError::NO_ERROR; + if (err == TernError::TIMEOUT) { makeTransient(true); } else { - ALWAYS_ASSERT(err == EggsError::NO_ERROR || err == EggsError::FILE_NOT_FOUND); + ALWAYS_ASSERT(err == TernError::NO_ERROR || err == TernError::FILE_NOT_FOUND); env.finish().setCrossShardHardUnlinkFile(); } } @@ -1770,7 +1770,7 @@ struct CDCDBImpl { CrossShardHardUnlinkFileStateMachine(sm, req.getCrossShardHardUnlinkFile(), state().getCrossShardHardUnlinkFile()).resume(shardResp); break; default: - throw EGGS_EXCEPTION("bad cdc message kind %s", req.kind()); + throw TERN_EXCEPTION("bad cdc message kind %s", req.kind()); } state().setStep(sm.txnStep); diff --git a/cpp/cdc/CDCDBData.hpp b/cpp/cdc/CDCDBData.hpp index 605b5128..bbd642d9 100644 --- a/cpp/cdc/CDCDBData.hpp +++ b/cpp/cdc/CDCDBData.hpp @@ -109,9 +109,9 @@ struct DirsToTxnsKey { struct MakeDirectoryState { FIELDS( LE, InodeId, dirId, setDirId, - LE, EggsTime, oldCreationTime, setOldCreationTime, - LE, EggsTime, creationTime, setCreationTime, - LE, EggsError, exitError, setExitError, // error if we're rolling back + LE, TernTime, oldCreationTime, setOldCreationTime, + LE, TernTime, creationTime, setCreationTime, + LE, TernError, exitError, setExitError, // error if we're rolling back END_STATIC ) @@ -119,49 +119,49 @@ struct MakeDirectoryState { setDirId(NULL_INODE_ID); setOldCreationTime({}); setCreationTime({}); - setExitError(EggsError::NO_ERROR); + setExitError(TernError::NO_ERROR); } }; struct RenameFileState { FIELDS( - LE, EggsTime, newOldCreationTime, setNewOldCreationTime, - LE, EggsTime, newCreationTime, setNewCreationTime, - LE, EggsError, exitError, setExitError, + LE, TernTime, newOldCreationTime, setNewOldCreationTime, + LE, TernTime, newCreationTime, setNewCreationTime, + LE, TernError, exitError, setExitError, END_STATIC ) void start() { setNewOldCreationTime({}); setNewCreationTime({}); - setExitError(EggsError::NO_ERROR); + setExitError(TernError::NO_ERROR); } }; struct SoftUnlinkDirectoryState { FIELDS( LE, InodeId, statDirId, setStatDirId, - LE, EggsError, exitError, setExitError, + LE, TernError, exitError, setExitError, END_STATIC ) void start() { - setExitError(EggsError::NO_ERROR); + setExitError(TernError::NO_ERROR); } }; struct RenameDirectoryState { FIELDS( - LE, EggsTime, newOldCreationTime, setNewOldCreationTime, - LE, EggsTime, newCreationTime, setNewCreationTime, - LE, EggsError, exitError, setExitError, + LE, TernTime, newOldCreationTime, setNewOldCreationTime, + LE, TernTime, newCreationTime, setNewCreationTime, + LE, TernError, exitError, setExitError, END_STATIC ) void start() { setNewOldCreationTime({}); setNewCreationTime({}); - setExitError(EggsError::NO_ERROR); + setExitError(TernError::NO_ERROR); } }; @@ -211,7 +211,7 @@ struct TxnState { case CDCMessageKind::CROSS_SHARD_HARD_UNLINK_FILE: sz += CrossShardHardUnlinkFileState::MAX_SIZE; break; default: - throw EGGS_EXCEPTION("bad cdc message kind %s", reqKind()); + throw TERN_EXCEPTION("bad cdc message kind %s", reqKind()); } return sz; } @@ -256,7 +256,7 @@ struct TxnState { case CDCMessageKind::HARD_UNLINK_DIRECTORY: startHardUnlinkDirectory(); break; case CDCMessageKind::CROSS_SHARD_HARD_UNLINK_FILE: startCrossShardHardUnlinkFile(); break; default: - throw EGGS_EXCEPTION("bad cdc message kind %s", reqKind()); + throw TERN_EXCEPTION("bad cdc message kind %s", reqKind()); } memset(_data+MIN_SIZE, 0, size()-MIN_SIZE); } diff --git a/cpp/cdc/CMakeLists.txt b/cpp/cdc/CMakeLists.txt index 4392e8a0..3cab1393 100644 --- a/cpp/cdc/CMakeLists.txt 
+++ b/cpp/cdc/CMakeLists.txt @@ -1,8 +1,8 @@ -include_directories(${eggsfs_SOURCE_DIR}/core ${eggsfs_SOURCE_DIR}/shard ${eggsfs_SOURCE_DIR}/wyhash) +include_directories(${ternfs_SOURCE_DIR}/core ${ternfs_SOURCE_DIR}/shard ${ternfs_SOURCE_DIR}/wyhash) add_library(cdc CDC.cpp CDC.hpp CDCDB.cpp CDCDB.hpp CDCDBData.hpp) target_link_libraries(cdc PRIVATE core) -add_executable(eggscdc eggscdc.cpp) -target_link_libraries(eggscdc PRIVATE core shard cdc ${EGGSFS_JEMALLOC_LIBS}) -target_include_directories(eggscdc PRIVATE ${eggsfs_SOURCE_DIR}/wyhash) \ No newline at end of file +add_executable(terncdc terncdc.cpp) +target_link_libraries(terncdc PRIVATE core shard cdc ${TERNFS_JEMALLOC_LIBS}) +target_include_directories(terncdc PRIVATE ${ternfs_SOURCE_DIR}/wyhash) \ No newline at end of file diff --git a/cpp/cdc/eggscdc.cpp b/cpp/cdc/terncdc.cpp similarity index 97% rename from cpp/cdc/eggscdc.cpp rename to cpp/cdc/terncdc.cpp index ae2a1ada..4ae05e11 100644 --- a/cpp/cdc/eggscdc.cpp +++ b/cpp/cdc/terncdc.cpp @@ -16,7 +16,7 @@ void usage(const char* binary) { fprintf(stderr, " -verbose\n"); fprintf(stderr, " Same as '-log-level debug'.\n"); fprintf(stderr, " -shuckle host:port\n"); - fprintf(stderr, " How to reach shuckle, default '%s'\n", defaultShuckleAddress.c_str()); + fprintf(stderr, " How to reach shuckle\n"); fprintf(stderr, " -addr ipv4 ip:port\n"); fprintf(stderr, " Addresses we bind ourselves to and advertise to shuckle. At least one needs to be provided and at most 2\n"); fprintf(stderr, " -log-file string\n"); @@ -101,7 +101,7 @@ int main(int argc, char** argv) { CDCOptions options; std::vector args; - std::string shuckleAddress = defaultShuckleAddress; + std::string shuckleAddress; uint8_t numAddressesFound = 0; for (int i = 1; i < argc; i++) { const auto getNextArg = [argc, &argv, &dieWithUsage, &i]() { @@ -190,12 +190,17 @@ int main(int argc, char** argv) { dieWithUsage(); } -#ifndef EGGS_DEBUG +#ifndef TERN_DEBUG if (options.logLevel <= LogLevel::LOG_TRACE) { die("Cannot use log level trace for non-debug builds (it won't work)."); } #endif + if (shuckleAddress.empty()) { fprintf(stderr, "Must provide -shuckle.\n"); dieWithUsage(); } + if (!parseShuckleAddress(shuckleAddress, options.shuckleHost, options.shucklePort)) { fprintf(stderr, "Bad shuckle address '%s'.\n\n", shuckleAddress.c_str()); dieWithUsage(); diff --git a/cpp/core/Assert.hpp b/cpp/core/Assert.hpp index e967dd88..14e8e426 100644 --- a/cpp/core/Assert.hpp +++ b/cpp/core/Assert.hpp @@ -15,7 +15,7 @@ throw AssertionException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), #expr, ## __VA_ARGS__); \ } while (false) -#if defined(EGGS_DEBUG) +#if defined(TERN_DEBUG) #define ASSERT(expr, ...)
ALWAYS_ASSERT(expr, ## __VA_ARGS__) diff --git a/cpp/core/AssertiveLock.hpp b/cpp/core/AssertiveLock.hpp index 761102f1..c7ecf165 100644 --- a/cpp/core/AssertiveLock.hpp +++ b/cpp/core/AssertiveLock.hpp @@ -12,7 +12,7 @@ public: AssertiveLocked(std::atomic& held): _held(held) { bool expected = false; if (!_held.compare_exchange_strong(expected, true)) { - throw EGGS_EXCEPTION("could not aquire lock, are you using this function concurrently?"); + throw TERN_EXCEPTION("could not acquire lock, are you using this function concurrently?"); } } diff --git a/cpp/core/CMakeLists.txt b/cpp/core/CMakeLists.txt index 5f58e990..cc50a6a5 100644 --- a/cpp/core/CMakeLists.txt +++ b/cpp/core/CMakeLists.txt @@ -4,4 +4,4 @@ file(GLOB core_headers CONFIGURE_DEPENDS "*.hpp") add_library(core ${core_sources} ${core_headers}) add_dependencies(core thirdparty) target_link_libraries(core PRIVATE rocksdb lz4 zstd uring xxhash rs crc32c) -target_include_directories(core PRIVATE ${eggsfs_SOURCE_DIR}/wyhash) \ No newline at end of file +target_include_directories(core PRIVATE ${ternfs_SOURCE_DIR}/wyhash) \ No newline at end of file diff --git a/cpp/core/Connect.cpp b/cpp/core/Connect.cpp index 13879056..7759bcb6 100644 --- a/cpp/core/Connect.cpp +++ b/cpp/core/Connect.cpp @@ -41,7 +41,7 @@ std::pair connectToHost( if (res == EAI_ADDRFAMILY || res == EAI_AGAIN || res == EAI_NONAME) { // things that might be worth retrying return {Sock::SockError(EIO), explicitGenerateErrString(prefix, res, gai_strerror(res))}; } - throw EGGS_EXCEPTION("%s: %s/%s", prefix, res, gai_strerror(res)); // we're probably hosed + throw TERN_EXCEPTION("%s: %s/%s", prefix, res, gai_strerror(res)); // we're probably hosed } infos.reset(infosRaw); } diff --git a/cpp/core/Crypto.cpp b/cpp/core/Crypto.cpp index 4a125a10..522afd8b 100644 --- a/cpp/core/Crypto.cpp +++ b/cpp/core/Crypto.cpp @@ -18,7 +18,7 @@ void generateSecretKey(std::array& key) { } if (read != key.size()) { // getrandom(2) states that once initialized you can always get up to 256 bytes. 
- throw EGGS_EXCEPTION("could not read %s random bytes, read %s instead!", key.size(), read); + throw TERN_EXCEPTION("could not read %s random bytes, read %s instead!", key.size(), read); } } diff --git a/cpp/core/Env.cpp b/cpp/core/Env.cpp index f91c176d..ffd4e1de 100644 --- a/cpp/core/Env.cpp +++ b/cpp/core/Env.cpp @@ -20,7 +20,7 @@ std::ostream& operator<<(std::ostream& out, LogLevel ll) { out << "ERROR"; break; default: - throw EGGS_EXCEPTION("bad log level %s", (uint32_t)ll); + throw TERN_EXCEPTION("bad log level %s", (uint32_t)ll); } return out; } @@ -44,7 +44,7 @@ static void loggerSignalHandler(int signal_number) { void installLoggerSignalHandler(void* logger) { Logger* old = nullptr; if (!debuggableLogger.compare_exchange_strong(old, (Logger*)logger)) { - throw EGGS_EXCEPTION("Could not install logger signal handler, some other logger is already here."); + throw TERN_EXCEPTION("Could not install logger signal handler, some other logger is already here."); } struct sigaction act; @@ -63,7 +63,7 @@ void installLoggerSignalHandler(void* logger) { void tearDownLoggerSignalHandler(void* logger) { Logger* old = (Logger*)logger; if (!debuggableLogger.compare_exchange_strong(old, nullptr)) { - throw EGGS_EXCEPTION("Could not tear down logger signal handler, bad preexisting logger"); + throw TERN_EXCEPTION("Could not tear down logger signal handler, bad preexisting logger"); } struct sigaction act; diff --git a/cpp/core/Env.hpp b/cpp/core/Env.hpp index a25d1c8a..497758b5 100644 --- a/cpp/core/Env.hpp +++ b/cpp/core/Env.hpp @@ -70,7 +70,7 @@ public: outSs << "<" << syslogLevel << ">" << prefix << ": " << line << std::endl; } } else { - auto t = eggsNow(); + auto t = ternNow(); while (std::getline(formatSs, line)) { outSs << t << " " << prefix << " [" << level << "] " << line << std::endl; } @@ -149,7 +149,7 @@ public: } }; -#ifdef EGGS_DEBUG +#ifdef TERN_DEBUG #define LOG_TRACE(env, ...) \ do { \ if (unlikely((env)._shouldLog(LogLevel::LOG_TRACE))) { \ diff --git a/cpp/core/ErrorCount.hpp b/cpp/core/ErrorCount.hpp index 610402ea..b046c7e5 100644 --- a/cpp/core/ErrorCount.hpp +++ b/cpp/core/ErrorCount.hpp @@ -10,13 +10,13 @@ struct ErrorCount { std::vector> count; - ErrorCount() : count(maxEggsError) { + ErrorCount() : count(maxTernError) { for (int i = 0; i < count.size(); i++) { count[i].store(0); } } - void add(EggsError err) { + void add(TernError err) { count[(int)err]++; } diff --git a/cpp/core/Exception.cpp b/cpp/core/Exception.cpp index 341e2af9..91e95a21 100644 --- a/cpp/core/Exception.cpp +++ b/cpp/core/Exception.cpp @@ -178,7 +178,7 @@ std::string removeTemplates(const std::string & s) { } -const char *EggsException::what() const throw() { +const char *TernException::what() const throw() { return _msg.c_str(); } diff --git a/cpp/core/Exception.hpp b/cpp/core/Exception.hpp index 207e1851..69ac7bad 100644 --- a/cpp/core/Exception.hpp +++ b/cpp/core/Exception.hpp @@ -7,7 +7,7 @@ #include "FormatTuple.hpp" #include "strerror.h" -#define EGGS_EXCEPTION(...) EggsException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), VALIDATE_FORMAT(__VA_ARGS__)) +#define TERN_EXCEPTION(...) TernException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), VALIDATE_FORMAT(__VA_ARGS__)) #define SYSCALL_EXCEPTION(...) SyscallException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), errno, VALIDATE_FORMAT(__VA_ARGS__)) #define EXPLICIT_SYSCALL_EXCEPTION(rc, ...) 
SyscallException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), rc, VALIDATE_FORMAT(__VA_ARGS__)) #define FATAL_EXCEPTION(...) FatalException(__LINE__, SHORT_FILE, removeTemplates(__PRETTY_FUNCTION__).c_str(), VALIDATE_FORMAT(__VA_ARGS__)) @@ -21,10 +21,10 @@ public: }; -class EggsException : public AbstractException { +class TernException : public AbstractException { public: template - EggsException(int line, const char *file, const char *function, TFmt fmt, Args ... args); + TernException(int line, const char *file, const char *function, TFmt fmt, Args ... args); virtual const char *what() const throw() override; private: @@ -67,10 +67,10 @@ private: }; template -EggsException::EggsException(int line, const char *file, const char *function, TFmt fmt, Args ... args) { +TernException::TernException(int line, const char *file, const char *function, TFmt fmt, Args ... args) { std::stringstream ss; - ss << "EggsException(" << file << "@" << line << " in " << function << "):\n"; + ss << "TernException(" << file << "@" << line << " in " << function << "):\n"; format_pack(ss, fmt, args...); _msg = ss.str(); diff --git a/cpp/core/LogsDB.cpp b/cpp/core/LogsDB.cpp index b4a3bab4..65d16410 100644 --- a/cpp/core/LogsDB.cpp +++ b/cpp/core/LogsDB.cpp @@ -71,11 +71,11 @@ struct LogPartition { std::string name; LogsDBMetadataKey firstWriteKey; rocksdb::ColumnFamilyHandle* cf; - EggsTime firstWriteTime{0}; + TernTime firstWriteTime{0}; LogIdx minKey{0}; LogIdx maxKey{0}; - void reset(rocksdb::ColumnFamilyHandle* cf_, LogIdx minMaxKey, EggsTime firstWriteTime_) { + void reset(rocksdb::ColumnFamilyHandle* cf_, LogIdx minMaxKey, TernTime firstWriteTime_) { cf = cf_; minKey = maxKey = minMaxKey; firstWriteTime = firstWriteTime_; @@ -236,22 +236,22 @@ public: return Iterator(*this); } - EggsError readLogEntry(LogIdx logIdx, LogsDBLogEntry& entry) const { + TernError readLogEntry(LogIdx logIdx, LogsDBLogEntry& entry) const { auto& partition = _getPartitionForIdx(logIdx); if (unlikely(logIdx < partition.minKey)) { - return EggsError::LOG_ENTRY_TRIMMED; + return TernError::LOG_ENTRY_TRIMMED; } auto key = U64Key::Static(logIdx.u64); rocksdb::PinnableSlice value; auto status = _sharedDb.db()->Get({}, partition.cf, key.toSlice(), &value); if (status.IsNotFound()) { - return EggsError::LOG_ENTRY_MISSING; + return TernError::LOG_ENTRY_MISSING; } ROCKS_DB_CHECKED(status); entry.idx = logIdx; entry.value.assign((const uint8_t*)value.data(), (const uint8_t*)value.data() + value.size()); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } void readIndexedEntries(const std::vector& indices, std::vector& entries) const { @@ -263,7 +263,7 @@ public: entries.reserve(indices.size()); for (auto idx : indices) { LogsDBLogEntry& entry = entries.emplace_back(); - if (readLogEntry(idx, entry) != EggsError::NO_ERROR) { + if (readLogEntry(idx, entry) != TernError::NO_ERROR) { entry.idx = 0; } } @@ -308,7 +308,7 @@ public: } private: - void _updatePartitionFirstWriteTime(LogPartition& partition, EggsTime time) { + void _updatePartitionFirstWriteTime(LogPartition& partition, TernTime time) { ROCKS_DB_CHECKED(_sharedDb.db()->Put({}, _sharedDb.getCF(METADATA_CF_NAME), logsDBMetadataKey(partition.firstWriteKey), U64Value::Static(time.ns).toSlice())); partition.firstWriteTime = time; } @@ -327,7 +327,7 @@ private: void _maybeRotate() { auto& partition = _getPartitionForIdx(MAX_LOG_IDX); - if (likely(partition.firstWriteTime == 0 || (partition.firstWriteTime + LogsDB::PARTITION_TIME_SPAN > eggsNow()))) { + if 
(likely(partition.firstWriteTime == 0 || (partition.firstWriteTime + LogsDB::PARTITION_TIME_SPAN > ternNow()))) { return; } // we only need to drop older partition and reset it's info. @@ -380,7 +380,7 @@ private: void _partitionKeyInserted(LogPartition& partition, LogIdx idx) { if (unlikely(partition.minKey == 0)) { partition.minKey = idx; - _updatePartitionFirstWriteTime(partition, eggsNow()); + _updatePartitionFirstWriteTime(partition, ternNow()); } partition.minKey = std::min(partition.minKey, idx); partition.maxKey = std::max(partition.maxKey, idx); @@ -460,12 +460,12 @@ public: return _leaderToken; } - EggsError updateLeaderToken(LeaderToken token) { + TernError updateLeaderToken(LeaderToken token) { if (unlikely(token < _leaderToken || token < _nomineeToken)) { - return EggsError::LEADER_PREEMPTED; + return TernError::LEADER_PREEMPTED; } if (likely(token == _leaderToken)) { - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } _data.dropEntriesAfterIdx(_lastReleased); ROCKS_DB_CHECKED(_sharedDb.db()->Put({}, _cf, logsDBMetadataKey(LEADER_TOKEN_KEY), U64Value::Static(token.u64).toSlice())); @@ -476,7 +476,7 @@ public: _leaderToken = token; _stats.currentEpoch.store(_leaderToken.idx().u64, std::memory_order_relaxed); _nomineeToken = LeaderToken(0,0); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } LeaderToken getNomineeToken() const { @@ -500,13 +500,13 @@ public: return _lastReleased; } - EggsTime getLastReleasedTime() const { + TernTime getLastReleasedTime() const { return _lastReleasedTime; } void setLastReleased(LogIdx lastReleased) { ALWAYS_ASSERT(_lastReleased <= lastReleased, "Moving release point backwards is not possible. It would cause data inconsistency"); - auto now = eggsNow(); + auto now = ternNow(); rocksdb::WriteBatch batch; batch.Put(_cf, logsDBMetadataKey(LAST_RELEASED_IDX_KEY), U64Value::Static(lastReleased.u64).toSlice()); batch.Put(_cf, logsDBMetadataKey(LAST_RELEASED_TIME_KEY),U64Value::Static(now.ns).toSlice()); @@ -530,7 +530,7 @@ private: LogIdx _lastAssigned; LogIdx _lastReleased; - EggsTime _lastReleasedTime; + TernTime _lastReleasedTime; LeaderToken _leaderToken; LeaderToken _nomineeToken; }; @@ -574,7 +574,7 @@ class ReqResp { } void resendTimedOutRequests() { - auto now = eggsNow(); + auto now = ternNow(); auto defaultCutoffTime = now - LogsDB::RESPONSE_TIMEOUT; auto releaseCutoffTime = now - LogsDB::SEND_RELEASE_INTERVAL; auto readCutoffTime = now - LogsDB::READ_TIMEOUT; @@ -700,7 +700,7 @@ public: _data(data), _reqResp(reqResp), _state(LeadershipState::FOLLOWER), - _leaderLastActive(_noReplication ? 0 :eggsNow()) {} + _leaderLastActive(_noReplication ? 
0 :ternNow()) {} bool isLeader() const { return _state == LeadershipState::LEADER; @@ -710,7 +710,7 @@ public: if (unlikely(_avoidBeingLeader)) { return; } - auto now = eggsNow(); + auto now = ternNow(); if (_state != LeadershipState::FOLLOWER || (_leaderLastActive + LogsDB::LEADER_INACTIVE_TIMEOUT > now)) { update_atomic_stat_ema(_stats.leaderLastActive, now - _leaderLastActive); @@ -751,15 +751,15 @@ public: ALWAYS_ASSERT(_state == LeadershipState::BECOMING_NOMINEE, "In state %s Received NEW_LEADER response %s", _state, response); auto& state = *_electionState; ALWAYS_ASSERT(_electionState->requestIds[fromReplicaId.u8] == request.msg.id); - auto result = EggsError(response.result); + auto result = TernError(response.result); switch (result) { - case EggsError::NO_ERROR: + case TernError::NO_ERROR: _electionState->requestIds[request.replicaId.u8] = ReqResp::CONFIRMED_REQ_ID; _electionState->lastReleased = std::max(_electionState->lastReleased, response.lastReleased); _reqResp.eraseRequest(request.msg.id); _tryProgressToDigest(); break; - case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: resetLeaderElection(); break; default: @@ -772,15 +772,15 @@ public: ALWAYS_ASSERT(_state == LeadershipState::CONFIRMING_LEADERSHIP, "In state %s Received NEW_LEADER_CONFIRM response %s", _state, response); ALWAYS_ASSERT(_electionState->requestIds[fromReplicaId.u8] == request.msg.id); - auto result = EggsError(response.result); + auto result = TernError(response.result); switch (result) { - case EggsError::NO_ERROR: + case TernError::NO_ERROR: _electionState->requestIds[request.replicaId.u8] = 0; _reqResp.eraseRequest(request.msg.id); LOG_DEBUG(_env,"trying to become leader"); _tryBecomeLeader(); break; - case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: resetLeaderElection(); break; default: @@ -792,10 +792,10 @@ public: void proccessRecoveryReadResponse(ReplicaId fromReplicaId, LogsDBRequest& request, const LogRecoveryReadResp& response) { ALWAYS_ASSERT(_state == LeadershipState::DIGESTING_ENTRIES, "In state %s Received LOG_RECOVERY_READ response %s", _state, response); auto& state = *_electionState; - auto result = EggsError(response.result); + auto result = TernError(response.result); switch (result) { - case EggsError::NO_ERROR: - case EggsError::LOG_ENTRY_MISSING: + case TernError::NO_ERROR: + case TernError::LOG_ENTRY_MISSING: { ALWAYS_ASSERT(state.lastReleased < request.msg.body.getLogRecoveryRead().idx); auto entryOffset = request.msg.body.getLogRecoveryRead().idx.u64 - state.lastReleased.u64 - 1; @@ -813,7 +813,7 @@ public: _tryProgressToReplication(); break; } - case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: LOG_DEBUG(_env, "Got preempted during recovery by replica %s",fromReplicaId); resetLeaderElection(); break; @@ -826,9 +826,9 @@ public: void proccessRecoveryWriteResponse(ReplicaId fromReplicaId, LogsDBRequest& request, const LogRecoveryWriteResp& response) { ALWAYS_ASSERT(_state == LeadershipState::CONFIRMING_REPLICATION, "In state %s Received LOG_RECOVERY_WRITE response %s", _state, response); auto& state = *_electionState; - auto result = EggsError(response.result); + auto result = TernError(response.result); switch (result) { - case EggsError::NO_ERROR: + case TernError::NO_ERROR: { ALWAYS_ASSERT(state.lastReleased < request.msg.body.getLogRecoveryWrite().idx); auto entryOffset = request.msg.body.getLogRecoveryWrite().idx.u64 - state.lastReleased.u64 - 1; @@ -839,7 +839,7 @@ public: _tryProgressToLeaderConfirm(); break; } - 
case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: resetLeaderElection(); break; default: @@ -857,13 +857,13 @@ public: auto& newLeaderResponse = response.msg.body.setNewLeader(); if (request.nomineeToken.idx() <= _metadata.getLeaderToken().idx() || request.nomineeToken < _metadata.getNomineeToken()) { - newLeaderResponse.result = EggsError::LEADER_PREEMPTED; + newLeaderResponse.result = TernError::LEADER_PREEMPTED; return; } - newLeaderResponse.result = EggsError::NO_ERROR; + newLeaderResponse.result = TernError::NO_ERROR; newLeaderResponse.lastReleased = _metadata.getLastReleased(); - _leaderLastActive = eggsNow(); + _leaderLastActive = ternNow(); if (_metadata.getNomineeToken() == request.nomineeToken) { return; @@ -886,8 +886,8 @@ public: auto err = _metadata.updateLeaderToken(request.nomineeToken); newLeaderConfirmResponse.result = err; - if (err == EggsError::NO_ERROR) { - _leaderLastActive = eggsNow(); + if (err == TernError::NO_ERROR) { + _leaderLastActive = ternNow(); resetLeaderElection(); } } @@ -900,14 +900,14 @@ public: auto& response = _reqResp.newResponse(fromReplicaId, requestId); auto& recoveryReadResponse = response.msg.body.setLogRecoveryRead(); if (request.nomineeToken != _metadata.getNomineeToken()) { - recoveryReadResponse.result = EggsError::LEADER_PREEMPTED; + recoveryReadResponse.result = TernError::LEADER_PREEMPTED; return; } - _leaderLastActive = eggsNow(); + _leaderLastActive = ternNow(); LogsDBLogEntry entry; auto err = _data.readLogEntry(request.idx, entry); recoveryReadResponse.result = err; - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { recoveryReadResponse.value.els = entry.value; } } @@ -920,20 +920,20 @@ public: auto& response = _reqResp.newResponse(fromReplicaId, requestId); auto& recoveryWriteResponse = response.msg.body.setLogRecoveryWrite(); if (request.nomineeToken != _metadata.getNomineeToken()) { - recoveryWriteResponse.result = EggsError::LEADER_PREEMPTED; + recoveryWriteResponse.result = TernError::LEADER_PREEMPTED; return; } - _leaderLastActive = eggsNow(); + _leaderLastActive = ternNow(); LogsDBLogEntry entry; entry.idx = request.idx; entry.value = request.value.els; _data.writeLogEntry(entry); - recoveryWriteResponse.result = EggsError::NO_ERROR; + recoveryWriteResponse.result = TernError::NO_ERROR; } - EggsError writeLogEntries(LeaderToken token, LogIdx newlastReleased, std::vector& entries) { + TernError writeLogEntries(LeaderToken token, LogIdx newlastReleased, std::vector& entries) { auto err = _metadata.updateLeaderToken(token); - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } _clearElectionState(); @@ -941,7 +941,7 @@ public: if (_metadata.getLastReleased() < newlastReleased) { _metadata.setLastReleased(newlastReleased); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } void resetLeaderElection() { @@ -951,7 +951,7 @@ public: LOG_INFO(_env,"Reseting leader election. 
Becoming follower of leader with token %s", _metadata.getLeaderToken()); } _state = LeadershipState::FOLLOWER; - _leaderLastActive = eggsNow(); + _leaderLastActive = ternNow(); _metadata.setNomineeToken(LeaderToken(0,0)); _clearElectionState(); } @@ -1117,12 +1117,12 @@ private: ALWAYS_ASSERT(nomineeToken.replica() == _replicaId); LOG_INFO(_env,"Became leader with token %s", nomineeToken); _state = LeadershipState::LEADER; - ALWAYS_ASSERT(_metadata.updateLeaderToken(nomineeToken) == EggsError::NO_ERROR); + ALWAYS_ASSERT(_metadata.updateLeaderToken(nomineeToken) == TernError::NO_ERROR); _clearElectionState(); } void _clearElectionState() { - _leaderLastActive = eggsNow(); + _leaderLastActive = ternNow(); if (!_electionState) { return; } @@ -1148,7 +1148,7 @@ private: LeadershipState _state; std::unique_ptr _electionState; - EggsTime _leaderLastActive; + TernTime _leaderLastActive; }; class BatchWriter { @@ -1170,7 +1170,7 @@ public: if (unlikely(writeRequest.token < _token)) { auto& resp = _reqResp.newResponse(request.replicaId, request.msg.id); auto& writeResponse = resp.msg.body.setLogWrite(); - writeResponse.result = EggsError::LEADER_PREEMPTED; + writeResponse.result = TernError::LEADER_PREEMPTED; return; } if (unlikely(_token < writeRequest.token )) { @@ -1296,19 +1296,19 @@ public: auto& response = _reqResp.newResponse(fromReplicaId, requestId); auto& readResponse = response.msg.body.setLogRead(); if (_metadata.getLastReleased() < request.idx) { - readResponse.result = EggsError::LOG_ENTRY_UNRELEASED; + readResponse.result = TernError::LOG_ENTRY_UNRELEASED; return; } LogsDBLogEntry entry; auto err =_data.readLogEntry(request.idx, entry); readResponse.result = err; - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { readResponse.value.els = entry.value; } } void proccessLogReadResponse(ReplicaId fromReplicaId, LogsDBRequest& request, const LogReadResp& response) { - if (response.result != EggsError::NO_ERROR) { + if (response.result != TernError::NO_ERROR) { return; } @@ -1447,7 +1447,7 @@ public: } auto err = _leaderElection.writeLogEntries(_metadata.getLeaderToken(), newRelease, entriesToWrite); - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); for (auto reqId : _releaseRequests) { if (reqId == 0) { continue; @@ -1460,9 +1460,9 @@ public: } } - EggsError appendEntries(std::vector& entries) { + TernError appendEntries(std::vector& entries) { if (!_leaderElection.isLeader()) { - return EggsError::LEADER_PREEMPTED; + return TernError::LEADER_PREEMPTED; } auto availableSpace = LogsDB::IN_FLIGHT_APPEND_WINDOW - entriesInFlight(); auto countToAppend = std::min(entries.size(), availableSpace); @@ -1493,17 +1493,17 @@ public: entries[i].idx = 0; } _entriesEnd += countToAppend; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } void proccessLogWriteResponse(ReplicaId fromReplicaId, LogsDBRequest& request, const LogWriteResp& response) { if (!_leaderElection.isLeader()) { return; } - switch ((EggsError)response.result) { - case EggsError::NO_ERROR: + switch ((TernError)response.result) { + case TernError::NO_ERROR: break; - case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: _leaderElection.resetLeaderElection(); return; default: @@ -1605,7 +1605,7 @@ public: auto initSuccess = _metadata.init(initialStart); initSuccess = _partitions.init(initialStart) && initSuccess; - auto now = eggsNow(); + auto now = ternNow(); if ((now - _metadata.getLastReleasedTime()) * 2 > (LogsDB::PARTITION_TIME_SPAN * 3)) { 
initSuccess = false; LOG_ERROR(_env,"Time when we last released record (%s) is far in the past. There is a high risk we will not be able to catch up!", _metadata.getLastReleasedTime()); @@ -1616,8 +1616,8 @@ public: _catchupReader.init(); LOG_INFO(_env,"LogsDB opened, leaderToken(%s), lastReleased(%s), lastRead(%s)",_metadata.getLeaderToken(), _metadata.getLastReleased(), _catchupReader.lastRead()); - _infoLoggedTime = eggsNow(); - _lastLoopFinished = eggsNow(); + _infoLoggedTime = ternNow(); + _lastLoopFinished = ternNow(); } ~LogsDBImpl() { @@ -1648,7 +1648,7 @@ public: } void processIncomingMessages(std::vector& requests, std::vector& responses) { - auto processingStarted = eggsNow(); + auto processingStarted = ternNow(); _maybeLogStatus(processingStarted); for(auto& resp : responses) { auto request = _reqResp.getRequest(resp.msg.id); @@ -1742,7 +1742,7 @@ public: responses.clear(); requests.clear(); update_atomic_stat_ema(_stats.idleTime, processingStarted - _lastLoopFinished); - _lastLoopFinished = eggsNow(); + _lastLoopFinished = ternNow(); update_atomic_stat_ema(_stats.processingTime, _lastLoopFinished - processingStarted); } @@ -1755,7 +1755,7 @@ public: return _leaderElection.isLeader(); } - EggsError appendEntries(std::vector& entries) { + TernError appendEntries(std::vector& entries) { return _appender.appendEntries(entries); } @@ -1785,7 +1785,7 @@ public: private: - void _maybeLogStatus(EggsTime now) { + void _maybeLogStatus(TernTime now) { if (now - _infoLoggedTime > 1_mins) { LOG_INFO(_env,"LogsDB status: leaderToken(%s), lastReleased(%s), lastRead(%s)",_metadata.getLeaderToken(), _metadata.getLastReleased(), _catchupReader.lastRead()); _infoLoggedTime = now; @@ -1803,8 +1803,8 @@ private: BatchWriter _batchWriter; CatchupReader _catchupReader; Appender _appender; - EggsTime _infoLoggedTime; - EggsTime _lastLoopFinished; + TernTime _infoLoggedTime; + TernTime _lastLoopFinished; }; LogsDB::LogsDB( @@ -1844,7 +1844,7 @@ bool LogsDB::isLeader() const { return _impl->isLeader(); } -EggsError LogsDB::appendEntries(std::vector& entries) { +TernError LogsDB::appendEntries(std::vector& entries) { return _impl->appendEntries(entries); } diff --git a/cpp/core/LogsDB.hpp b/cpp/core/LogsDB.hpp index 228b9dbf..a15118f7 100644 --- a/cpp/core/LogsDB.hpp +++ b/cpp/core/LogsDB.hpp @@ -52,7 +52,7 @@ std::ostream& operator<<(std::ostream& out, const LogsDBLogEntry& entry); struct LogsDBRequest { ReplicaId replicaId; - EggsTime sentTime; + TernTime sentTime; LogReqMsg msg; }; @@ -122,7 +122,7 @@ public: bool isLeader() const; - EggsError appendEntries(std::vector& entries); + TernError appendEntries(std::vector& entries); // returns index of last entry available for read LogIdx getLastContinuous() const; diff --git a/cpp/core/Loop.cpp b/cpp/core/Loop.cpp index d394a16f..b55aef46 100644 --- a/cpp/core/Loop.cpp +++ b/cpp/core/Loop.cpp @@ -152,7 +152,7 @@ void LoopThread::waitUntilStopped(std::vector>& loop throw EXPLICIT_SYSCALL_EXCEPTION(ret, "pthread_getname_np"); } } - throw EGGS_EXCEPTION("loop %s has not terminated in time, aborting", name); + throw TERN_EXCEPTION("loop %s has not terminated in time, aborting", name); } throw EXPLICIT_SYSCALL_EXCEPTION(ret, "pthread_timedjoin_np"); } diff --git a/cpp/core/Metrics.cpp b/cpp/core/Metrics.cpp index d8f703ad..a55643d1 100644 --- a/cpp/core/Metrics.cpp +++ b/cpp/core/Metrics.cpp @@ -79,7 +79,7 @@ void MetricsBuilder::fieldRaw(const std::string& name, const std::string& value) _state = State::FIELDS; } -void MetricsBuilder::timestamp(EggsTime t) { 
+void MetricsBuilder::timestamp(TernTime t) { ALWAYS_ASSERT(_state == State::FIELDS); _payload += ' '; static char buf[21]; diff --git a/cpp/core/Metrics.hpp b/cpp/core/Metrics.hpp index 6df1fa6b..eb267b75 100644 --- a/cpp/core/Metrics.hpp +++ b/cpp/core/Metrics.hpp @@ -51,7 +51,7 @@ public: fieldRaw(name, ss.str()); } - void timestamp(EggsTime t); + void timestamp(TernTime t); }; // error string on error diff --git a/cpp/core/MsgsGen.cpp b/cpp/core/MsgsGen.cpp index 3a7a4fda..651b11f2 100644 --- a/cpp/core/MsgsGen.cpp +++ b/cpp/core/MsgsGen.cpp @@ -2,289 +2,289 @@ // Run `go generate ./...` from the go/ directory to regenerate it. #include "MsgsGen.hpp" -std::ostream& operator<<(std::ostream& out, EggsError err) { +std::ostream& operator<<(std::ostream& out, TernError err) { switch (err) { - case EggsError::NO_ERROR: + case TernError::NO_ERROR: out << "NO_ERROR"; break; - case EggsError::INTERNAL_ERROR: + case TernError::INTERNAL_ERROR: out << "INTERNAL_ERROR"; break; - case EggsError::FATAL_ERROR: + case TernError::FATAL_ERROR: out << "FATAL_ERROR"; break; - case EggsError::TIMEOUT: + case TernError::TIMEOUT: out << "TIMEOUT"; break; - case EggsError::MALFORMED_REQUEST: + case TernError::MALFORMED_REQUEST: out << "MALFORMED_REQUEST"; break; - case EggsError::MALFORMED_RESPONSE: + case TernError::MALFORMED_RESPONSE: out << "MALFORMED_RESPONSE"; break; - case EggsError::NOT_AUTHORISED: + case TernError::NOT_AUTHORISED: out << "NOT_AUTHORISED"; break; - case EggsError::UNRECOGNIZED_REQUEST: + case TernError::UNRECOGNIZED_REQUEST: out << "UNRECOGNIZED_REQUEST"; break; - case EggsError::FILE_NOT_FOUND: + case TernError::FILE_NOT_FOUND: out << "FILE_NOT_FOUND"; break; - case EggsError::DIRECTORY_NOT_FOUND: + case TernError::DIRECTORY_NOT_FOUND: out << "DIRECTORY_NOT_FOUND"; break; - case EggsError::NAME_NOT_FOUND: + case TernError::NAME_NOT_FOUND: out << "NAME_NOT_FOUND"; break; - case EggsError::EDGE_NOT_FOUND: + case TernError::EDGE_NOT_FOUND: out << "EDGE_NOT_FOUND"; break; - case EggsError::EDGE_IS_LOCKED: + case TernError::EDGE_IS_LOCKED: out << "EDGE_IS_LOCKED"; break; - case EggsError::TYPE_IS_DIRECTORY: + case TernError::TYPE_IS_DIRECTORY: out << "TYPE_IS_DIRECTORY"; break; - case EggsError::TYPE_IS_NOT_DIRECTORY: + case TernError::TYPE_IS_NOT_DIRECTORY: out << "TYPE_IS_NOT_DIRECTORY"; break; - case EggsError::BAD_COOKIE: + case TernError::BAD_COOKIE: out << "BAD_COOKIE"; break; - case EggsError::INCONSISTENT_STORAGE_CLASS_PARITY: + case TernError::INCONSISTENT_STORAGE_CLASS_PARITY: out << "INCONSISTENT_STORAGE_CLASS_PARITY"; break; - case EggsError::LAST_SPAN_STATE_NOT_CLEAN: + case TernError::LAST_SPAN_STATE_NOT_CLEAN: out << "LAST_SPAN_STATE_NOT_CLEAN"; break; - case EggsError::COULD_NOT_PICK_BLOCK_SERVICES: + case TernError::COULD_NOT_PICK_BLOCK_SERVICES: out << "COULD_NOT_PICK_BLOCK_SERVICES"; break; - case EggsError::BAD_SPAN_BODY: + case TernError::BAD_SPAN_BODY: out << "BAD_SPAN_BODY"; break; - case EggsError::SPAN_NOT_FOUND: + case TernError::SPAN_NOT_FOUND: out << "SPAN_NOT_FOUND"; break; - case EggsError::BLOCK_SERVICE_NOT_FOUND: + case TernError::BLOCK_SERVICE_NOT_FOUND: out << "BLOCK_SERVICE_NOT_FOUND"; break; - case EggsError::CANNOT_CERTIFY_BLOCKLESS_SPAN: + case TernError::CANNOT_CERTIFY_BLOCKLESS_SPAN: out << "CANNOT_CERTIFY_BLOCKLESS_SPAN"; break; - case EggsError::BAD_NUMBER_OF_BLOCKS_PROOFS: + case TernError::BAD_NUMBER_OF_BLOCKS_PROOFS: out << "BAD_NUMBER_OF_BLOCKS_PROOFS"; break; - case EggsError::BAD_BLOCK_PROOF: + case TernError::BAD_BLOCK_PROOF: out << 
"BAD_BLOCK_PROOF"; break; - case EggsError::CANNOT_OVERRIDE_NAME: + case TernError::CANNOT_OVERRIDE_NAME: out << "CANNOT_OVERRIDE_NAME"; break; - case EggsError::NAME_IS_LOCKED: + case TernError::NAME_IS_LOCKED: out << "NAME_IS_LOCKED"; break; - case EggsError::MTIME_IS_TOO_RECENT: + case TernError::MTIME_IS_TOO_RECENT: out << "MTIME_IS_TOO_RECENT"; break; - case EggsError::MISMATCHING_TARGET: + case TernError::MISMATCHING_TARGET: out << "MISMATCHING_TARGET"; break; - case EggsError::MISMATCHING_OWNER: + case TernError::MISMATCHING_OWNER: out << "MISMATCHING_OWNER"; break; - case EggsError::MISMATCHING_CREATION_TIME: + case TernError::MISMATCHING_CREATION_TIME: out << "MISMATCHING_CREATION_TIME"; break; - case EggsError::DIRECTORY_NOT_EMPTY: + case TernError::DIRECTORY_NOT_EMPTY: out << "DIRECTORY_NOT_EMPTY"; break; - case EggsError::FILE_IS_TRANSIENT: + case TernError::FILE_IS_TRANSIENT: out << "FILE_IS_TRANSIENT"; break; - case EggsError::OLD_DIRECTORY_NOT_FOUND: + case TernError::OLD_DIRECTORY_NOT_FOUND: out << "OLD_DIRECTORY_NOT_FOUND"; break; - case EggsError::NEW_DIRECTORY_NOT_FOUND: + case TernError::NEW_DIRECTORY_NOT_FOUND: out << "NEW_DIRECTORY_NOT_FOUND"; break; - case EggsError::LOOP_IN_DIRECTORY_RENAME: + case TernError::LOOP_IN_DIRECTORY_RENAME: out << "LOOP_IN_DIRECTORY_RENAME"; break; - case EggsError::DIRECTORY_HAS_OWNER: + case TernError::DIRECTORY_HAS_OWNER: out << "DIRECTORY_HAS_OWNER"; break; - case EggsError::FILE_IS_NOT_TRANSIENT: + case TernError::FILE_IS_NOT_TRANSIENT: out << "FILE_IS_NOT_TRANSIENT"; break; - case EggsError::FILE_NOT_EMPTY: + case TernError::FILE_NOT_EMPTY: out << "FILE_NOT_EMPTY"; break; - case EggsError::CANNOT_REMOVE_ROOT_DIRECTORY: + case TernError::CANNOT_REMOVE_ROOT_DIRECTORY: out << "CANNOT_REMOVE_ROOT_DIRECTORY"; break; - case EggsError::FILE_EMPTY: + case TernError::FILE_EMPTY: out << "FILE_EMPTY"; break; - case EggsError::CANNOT_REMOVE_DIRTY_SPAN: + case TernError::CANNOT_REMOVE_DIRTY_SPAN: out << "CANNOT_REMOVE_DIRTY_SPAN"; break; - case EggsError::BAD_SHARD: + case TernError::BAD_SHARD: out << "BAD_SHARD"; break; - case EggsError::BAD_NAME: + case TernError::BAD_NAME: out << "BAD_NAME"; break; - case EggsError::MORE_RECENT_SNAPSHOT_EDGE: + case TernError::MORE_RECENT_SNAPSHOT_EDGE: out << "MORE_RECENT_SNAPSHOT_EDGE"; break; - case EggsError::MORE_RECENT_CURRENT_EDGE: + case TernError::MORE_RECENT_CURRENT_EDGE: out << "MORE_RECENT_CURRENT_EDGE"; break; - case EggsError::BAD_DIRECTORY_INFO: + case TernError::BAD_DIRECTORY_INFO: out << "BAD_DIRECTORY_INFO"; break; - case EggsError::DEADLINE_NOT_PASSED: + case TernError::DEADLINE_NOT_PASSED: out << "DEADLINE_NOT_PASSED"; break; - case EggsError::SAME_SOURCE_AND_DESTINATION: + case TernError::SAME_SOURCE_AND_DESTINATION: out << "SAME_SOURCE_AND_DESTINATION"; break; - case EggsError::SAME_DIRECTORIES: + case TernError::SAME_DIRECTORIES: out << "SAME_DIRECTORIES"; break; - case EggsError::SAME_SHARD: + case TernError::SAME_SHARD: out << "SAME_SHARD"; break; - case EggsError::BAD_PROTOCOL_VERSION: + case TernError::BAD_PROTOCOL_VERSION: out << "BAD_PROTOCOL_VERSION"; break; - case EggsError::BAD_CERTIFICATE: + case TernError::BAD_CERTIFICATE: out << "BAD_CERTIFICATE"; break; - case EggsError::BLOCK_TOO_RECENT_FOR_DELETION: + case TernError::BLOCK_TOO_RECENT_FOR_DELETION: out << "BLOCK_TOO_RECENT_FOR_DELETION"; break; - case EggsError::BLOCK_FETCH_OUT_OF_BOUNDS: + case TernError::BLOCK_FETCH_OUT_OF_BOUNDS: out << "BLOCK_FETCH_OUT_OF_BOUNDS"; break; - case EggsError::BAD_BLOCK_CRC: + case 
TernError::BAD_BLOCK_CRC: out << "BAD_BLOCK_CRC"; break; - case EggsError::BLOCK_TOO_BIG: + case TernError::BLOCK_TOO_BIG: out << "BLOCK_TOO_BIG"; break; - case EggsError::BLOCK_NOT_FOUND: + case TernError::BLOCK_NOT_FOUND: out << "BLOCK_NOT_FOUND"; break; - case EggsError::CANNOT_UNSET_DECOMMISSIONED: + case TernError::CANNOT_UNSET_DECOMMISSIONED: out << "CANNOT_UNSET_DECOMMISSIONED"; break; - case EggsError::CANNOT_REGISTER_DECOMMISSIONED_OR_STALE: + case TernError::CANNOT_REGISTER_DECOMMISSIONED_OR_STALE: out << "CANNOT_REGISTER_DECOMMISSIONED_OR_STALE"; break; - case EggsError::BLOCK_TOO_OLD_FOR_WRITE: + case TernError::BLOCK_TOO_OLD_FOR_WRITE: out << "BLOCK_TOO_OLD_FOR_WRITE"; break; - case EggsError::BLOCK_IO_ERROR_DEVICE: + case TernError::BLOCK_IO_ERROR_DEVICE: out << "BLOCK_IO_ERROR_DEVICE"; break; - case EggsError::BLOCK_IO_ERROR_FILE: + case TernError::BLOCK_IO_ERROR_FILE: out << "BLOCK_IO_ERROR_FILE"; break; - case EggsError::INVALID_REPLICA: + case TernError::INVALID_REPLICA: out << "INVALID_REPLICA"; break; - case EggsError::DIFFERENT_ADDRS_INFO: + case TernError::DIFFERENT_ADDRS_INFO: out << "DIFFERENT_ADDRS_INFO"; break; - case EggsError::LEADER_PREEMPTED: + case TernError::LEADER_PREEMPTED: out << "LEADER_PREEMPTED"; break; - case EggsError::LOG_ENTRY_MISSING: + case TernError::LOG_ENTRY_MISSING: out << "LOG_ENTRY_MISSING"; break; - case EggsError::LOG_ENTRY_TRIMMED: + case TernError::LOG_ENTRY_TRIMMED: out << "LOG_ENTRY_TRIMMED"; break; - case EggsError::LOG_ENTRY_UNRELEASED: + case TernError::LOG_ENTRY_UNRELEASED: out << "LOG_ENTRY_UNRELEASED"; break; - case EggsError::LOG_ENTRY_RELEASED: + case TernError::LOG_ENTRY_RELEASED: out << "LOG_ENTRY_RELEASED"; break; - case EggsError::AUTO_DECOMMISSION_FORBIDDEN: + case TernError::AUTO_DECOMMISSION_FORBIDDEN: out << "AUTO_DECOMMISSION_FORBIDDEN"; break; - case EggsError::INCONSISTENT_BLOCK_SERVICE_REGISTRATION: + case TernError::INCONSISTENT_BLOCK_SERVICE_REGISTRATION: out << "INCONSISTENT_BLOCK_SERVICE_REGISTRATION"; break; - case EggsError::SWAP_BLOCKS_INLINE_STORAGE: + case TernError::SWAP_BLOCKS_INLINE_STORAGE: out << "SWAP_BLOCKS_INLINE_STORAGE"; break; - case EggsError::SWAP_BLOCKS_MISMATCHING_SIZE: + case TernError::SWAP_BLOCKS_MISMATCHING_SIZE: out << "SWAP_BLOCKS_MISMATCHING_SIZE"; break; - case EggsError::SWAP_BLOCKS_MISMATCHING_STATE: + case TernError::SWAP_BLOCKS_MISMATCHING_STATE: out << "SWAP_BLOCKS_MISMATCHING_STATE"; break; - case EggsError::SWAP_BLOCKS_MISMATCHING_CRC: + case TernError::SWAP_BLOCKS_MISMATCHING_CRC: out << "SWAP_BLOCKS_MISMATCHING_CRC"; break; - case EggsError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE: + case TernError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE: out << "SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE"; break; - case EggsError::SWAP_SPANS_INLINE_STORAGE: + case TernError::SWAP_SPANS_INLINE_STORAGE: out << "SWAP_SPANS_INLINE_STORAGE"; break; - case EggsError::SWAP_SPANS_MISMATCHING_SIZE: + case TernError::SWAP_SPANS_MISMATCHING_SIZE: out << "SWAP_SPANS_MISMATCHING_SIZE"; break; - case EggsError::SWAP_SPANS_NOT_CLEAN: + case TernError::SWAP_SPANS_NOT_CLEAN: out << "SWAP_SPANS_NOT_CLEAN"; break; - case EggsError::SWAP_SPANS_MISMATCHING_CRC: + case TernError::SWAP_SPANS_MISMATCHING_CRC: out << "SWAP_SPANS_MISMATCHING_CRC"; break; - case EggsError::SWAP_SPANS_MISMATCHING_BLOCKS: + case TernError::SWAP_SPANS_MISMATCHING_BLOCKS: out << "SWAP_SPANS_MISMATCHING_BLOCKS"; break; - case EggsError::EDGE_NOT_OWNED: + case TernError::EDGE_NOT_OWNED: out << "EDGE_NOT_OWNED"; break; - case 
EggsError::CANNOT_CREATE_DB_SNAPSHOT: + case TernError::CANNOT_CREATE_DB_SNAPSHOT: out << "CANNOT_CREATE_DB_SNAPSHOT"; break; - case EggsError::BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE: + case TernError::BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE: out << "BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE"; break; - case EggsError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN: + case TernError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN: out << "SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN"; break; - case EggsError::TRANSIENT_LOCATION_COUNT: + case TernError::TRANSIENT_LOCATION_COUNT: out << "TRANSIENT_LOCATION_COUNT"; break; - case EggsError::ADD_SPAN_LOCATION_INLINE_STORAGE: + case TernError::ADD_SPAN_LOCATION_INLINE_STORAGE: out << "ADD_SPAN_LOCATION_INLINE_STORAGE"; break; - case EggsError::ADD_SPAN_LOCATION_MISMATCHING_SIZE: + case TernError::ADD_SPAN_LOCATION_MISMATCHING_SIZE: out << "ADD_SPAN_LOCATION_MISMATCHING_SIZE"; break; - case EggsError::ADD_SPAN_LOCATION_NOT_CLEAN: + case TernError::ADD_SPAN_LOCATION_NOT_CLEAN: out << "ADD_SPAN_LOCATION_NOT_CLEAN"; break; - case EggsError::ADD_SPAN_LOCATION_MISMATCHING_CRC: + case TernError::ADD_SPAN_LOCATION_MISMATCHING_CRC: out << "ADD_SPAN_LOCATION_MISMATCHING_CRC"; break; - case EggsError::ADD_SPAN_LOCATION_EXISTS: + case TernError::ADD_SPAN_LOCATION_EXISTS: out << "ADD_SPAN_LOCATION_EXISTS"; break; - case EggsError::SWAP_BLOCKS_MISMATCHING_LOCATION: + case TernError::SWAP_BLOCKS_MISMATCHING_LOCATION: out << "SWAP_BLOCKS_MISMATCHING_LOCATION"; break; default: - out << "EggsError(" << ((int)err) << ")"; + out << "TernError(" << ((int)err) << ")"; break; } return out; @@ -715,13 +715,13 @@ void CurrentEdge::clear() { targetId = InodeId(); nameHash = uint64_t(0); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool CurrentEdge::operator==(const CurrentEdge& rhs) const { if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((uint64_t)this->nameHash != (uint64_t)rhs.nameHash) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CurrentEdge& x) { @@ -859,11 +859,11 @@ void ShardInfo::unpack(BincodeBuf& buf) { } void ShardInfo::clear() { addrs.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); } bool ShardInfo::operator==(const ShardInfo& rhs) const { if (addrs != rhs.addrs) { return false; }; - if ((EggsTime)this->lastSeen != (EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const ShardInfo& x) { @@ -1172,14 +1172,14 @@ void Edge::clear() { targetId = InodeIdExtra(); nameHash = uint64_t(0); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool Edge::operator==(const Edge& rhs) const { if ((bool)this->current != (bool)rhs.current) { return false; }; if ((InodeIdExtra)this->targetId != (InodeIdExtra)rhs.targetId) { return false; }; if ((uint64_t)this->nameHash != (uint64_t)rhs.nameHash) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const Edge& x) { @@ -1200,12 +1200,12 @@ void 
FullReadDirCursor::unpack(BincodeBuf& buf) { void FullReadDirCursor::clear() { current = bool(0); startName.clear(); - startTime = EggsTime(); + startTime = TernTime(); } bool FullReadDirCursor::operator==(const FullReadDirCursor& rhs) const { if ((bool)this->current != (bool)rhs.current) { return false; }; if (startName != rhs.startName) { return false; }; - if ((EggsTime)this->startTime != (EggsTime)rhs.startTime) { return false; }; + if ((TernTime)this->startTime != (TernTime)rhs.startTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const FullReadDirCursor& x) { @@ -1226,12 +1226,12 @@ void TransientFile::unpack(BincodeBuf& buf) { void TransientFile::clear() { id = InodeId(); cookie.clear(); - deadlineTime = EggsTime(); + deadlineTime = TernTime(); } bool TransientFile::operator==(const TransientFile& rhs) const { if ((InodeId)this->id != (InodeId)rhs.id) { return false; }; if (cookie != rhs.cookie) { return false; }; - if ((EggsTime)this->deadlineTime != (EggsTime)rhs.deadlineTime) { return false; }; + if ((TernTime)this->deadlineTime != (TernTime)rhs.deadlineTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const TransientFile& x) { @@ -1302,9 +1302,9 @@ void BlockServiceDeprecatedInfo::clear() { availableBytes = uint64_t(0); blocks = uint64_t(0); path.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); hasFiles = bool(0); - flagsLastChanged = EggsTime(); + flagsLastChanged = TernTime(); } bool BlockServiceDeprecatedInfo::operator==(const BlockServiceDeprecatedInfo& rhs) const { if ((BlockServiceId)this->id != (BlockServiceId)rhs.id) { return false; }; @@ -1317,9 +1317,9 @@ bool BlockServiceDeprecatedInfo::operator==(const BlockServiceDeprecatedInfo& rh if ((uint64_t)this->availableBytes != (uint64_t)rhs.availableBytes) { return false; }; if ((uint64_t)this->blocks != (uint64_t)rhs.blocks) { return false; }; if (path != rhs.path) { return false; }; - if ((EggsTime)this->lastSeen != (EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; if ((bool)this->hasFiles != (bool)rhs.hasFiles) { return false; }; - if ((EggsTime)this->flagsLastChanged != (EggsTime)rhs.flagsLastChanged) { return false; }; + if ((TernTime)this->flagsLastChanged != (TernTime)rhs.flagsLastChanged) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const BlockServiceDeprecatedInfo& x) { @@ -1433,14 +1433,14 @@ void FullShardInfo::clear() { id = ShardReplicaId(); isLeader = bool(0); addrs.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); locationId = uint8_t(0); } bool FullShardInfo::operator==(const FullShardInfo& rhs) const { if ((ShardReplicaId)this->id != (ShardReplicaId)rhs.id) { return false; }; if ((bool)this->isLeader != (bool)rhs.isLeader) { return false; }; if (addrs != rhs.addrs) { return false; }; - if ((EggsTime)this->lastSeen != (EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; if ((uint8_t)this->locationId != (uint8_t)rhs.locationId) { return false; }; return true; } @@ -1530,14 +1530,14 @@ void CdcInfo::clear() { locationId = uint8_t(0); isLeader = bool(0); addrs.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); } bool CdcInfo::operator==(const CdcInfo& rhs) const { if ((ReplicaId)this->replicaId != (ReplicaId)rhs.replicaId) { return false; }; if ((uint8_t)this->locationId != (uint8_t)rhs.locationId) { return false; }; if ((bool)this->isLeader != 
(bool)rhs.isLeader) { return false; }; if (addrs != rhs.addrs) { return false; }; - if ((EggsTime)this->lastSeen != (EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CdcInfo& x) { @@ -1599,11 +1599,11 @@ void LookupResp::unpack(BincodeBuf& buf) { } void LookupResp::clear() { targetId = InodeId(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool LookupResp::operator==(const LookupResp& rhs) const { if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LookupResp& x) { @@ -1640,13 +1640,13 @@ void StatFileResp::unpack(BincodeBuf& buf) { size = buf.unpackScalar(); } void StatFileResp::clear() { - mtime = EggsTime(); - atime = EggsTime(); + mtime = TernTime(); + atime = TernTime(); size = uint64_t(0); } bool StatFileResp::operator==(const StatFileResp& rhs) const { - if ((EggsTime)this->mtime != (EggsTime)rhs.mtime) { return false; }; - if ((EggsTime)this->atime != (EggsTime)rhs.atime) { return false; }; + if ((TernTime)this->mtime != (TernTime)rhs.mtime) { return false; }; + if ((TernTime)this->atime != (TernTime)rhs.atime) { return false; }; if ((uint64_t)this->size != (uint64_t)rhs.size) { return false; }; return true; } @@ -1684,12 +1684,12 @@ void StatDirectoryResp::unpack(BincodeBuf& buf) { info.unpack(buf); } void StatDirectoryResp::clear() { - mtime = EggsTime(); + mtime = TernTime(); owner = InodeId(); info.clear(); } bool StatDirectoryResp::operator==(const StatDirectoryResp& rhs) const { - if ((EggsTime)this->mtime != (EggsTime)rhs.mtime) { return false; }; + if ((TernTime)this->mtime != (TernTime)rhs.mtime) { return false; }; if ((InodeId)this->owner != (InodeId)rhs.owner) { return false; }; if (info != rhs.info) { return false; }; return true; @@ -1948,10 +1948,10 @@ void LinkFileResp::unpack(BincodeBuf& buf) { creationTime.unpack(buf); } void LinkFileResp::clear() { - creationTime = EggsTime(); + creationTime = TernTime(); } bool LinkFileResp::operator==(const LinkFileResp& rhs) const { - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LinkFileResp& x) { @@ -1975,13 +1975,13 @@ void SoftUnlinkFileReq::clear() { ownerId = InodeId(); fileId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool SoftUnlinkFileReq::operator==(const SoftUnlinkFileReq& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->fileId != (InodeId)rhs.fileId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SoftUnlinkFileReq& x) { @@ -1996,10 +1996,10 @@ void SoftUnlinkFileResp::unpack(BincodeBuf& buf) { deleteCreationTime.unpack(buf); } void SoftUnlinkFileResp::clear() { - deleteCreationTime = EggsTime(); + deleteCreationTime = TernTime(); } bool SoftUnlinkFileResp::operator==(const SoftUnlinkFileResp& 
rhs) const { - if ((EggsTime)this->deleteCreationTime != (EggsTime)rhs.deleteCreationTime) { return false; }; + if ((TernTime)this->deleteCreationTime != (TernTime)rhs.deleteCreationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SoftUnlinkFileResp& x) { @@ -2081,14 +2081,14 @@ void SameDirectoryRenameReq::clear() { targetId = InodeId(); dirId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newName.clear(); } bool SameDirectoryRenameReq::operator==(const SameDirectoryRenameReq& rhs) const { if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if (newName != rhs.newName) { return false; }; return true; } @@ -2104,10 +2104,10 @@ void SameDirectoryRenameResp::unpack(BincodeBuf& buf) { newCreationTime.unpack(buf); } void SameDirectoryRenameResp::clear() { - newCreationTime = EggsTime(); + newCreationTime = TernTime(); } bool SameDirectoryRenameResp::operator==(const SameDirectoryRenameResp& rhs) const { - if ((EggsTime)this->newCreationTime != (EggsTime)rhs.newCreationTime) { return false; }; + if ((TernTime)this->newCreationTime != (TernTime)rhs.newCreationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SameDirectoryRenameResp& x) { @@ -2231,7 +2231,7 @@ void FullReadDirReq::clear() { dirId = InodeId(); flags = uint8_t(0); startName.clear(); - startTime = EggsTime(); + startTime = TernTime(); limit = uint16_t(0); mtu = uint16_t(0); } @@ -2239,7 +2239,7 @@ bool FullReadDirReq::operator==(const FullReadDirReq& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((uint8_t)this->flags != (uint8_t)rhs.flags) { return false; }; if (startName != rhs.startName) { return false; }; - if ((EggsTime)this->startTime != (EggsTime)rhs.startTime) { return false; }; + if ((TernTime)this->startTime != (TernTime)rhs.startTime) { return false; }; if ((uint16_t)this->limit != (uint16_t)rhs.limit) { return false; }; if ((uint16_t)this->mtu != (uint16_t)rhs.mtu) { return false; }; return true; @@ -2343,13 +2343,13 @@ void RemoveNonOwnedEdgeReq::clear() { dirId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool RemoveNonOwnedEdgeReq::operator==(const RemoveNonOwnedEdgeReq& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RemoveNonOwnedEdgeReq& x) { @@ -2387,13 +2387,13 @@ void SameShardHardFileUnlinkReq::clear() { ownerId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool SameShardHardFileUnlinkReq::operator==(const SameShardHardFileUnlinkReq& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if 
((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SameShardHardFileUnlinkReq& x) { @@ -2444,12 +2444,12 @@ void StatTransientFileResp::unpack(BincodeBuf& buf) { buf.unpackBytes(note); } void StatTransientFileResp::clear() { - mtime = EggsTime(); + mtime = TernTime(); size = uint64_t(0); note.clear(); } bool StatTransientFileResp::operator==(const StatTransientFileResp& rhs) const { - if ((EggsTime)this->mtime != (EggsTime)rhs.mtime) { return false; }; + if ((TernTime)this->mtime != (TernTime)rhs.mtime) { return false; }; if ((uint64_t)this->size != (uint64_t)rhs.size) { return false; }; if (note != rhs.note) { return false; }; return true; @@ -3169,14 +3169,14 @@ void SameDirectoryRenameSnapshotReq::clear() { targetId = InodeId(); dirId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newName.clear(); } bool SameDirectoryRenameSnapshotReq::operator==(const SameDirectoryRenameSnapshotReq& rhs) const { if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if (newName != rhs.newName) { return false; }; return true; } @@ -3192,10 +3192,10 @@ void SameDirectoryRenameSnapshotResp::unpack(BincodeBuf& buf) { newCreationTime.unpack(buf); } void SameDirectoryRenameSnapshotResp::clear() { - newCreationTime = EggsTime(); + newCreationTime = TernTime(); } bool SameDirectoryRenameSnapshotResp::operator==(const SameDirectoryRenameSnapshotResp& rhs) const { - if ((EggsTime)this->newCreationTime != (EggsTime)rhs.newCreationTime) { return false; }; + if ((TernTime)this->newCreationTime != (TernTime)rhs.newCreationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SameDirectoryRenameSnapshotResp& x) { @@ -3276,10 +3276,10 @@ void CreateDirectoryInodeResp::unpack(BincodeBuf& buf) { mtime.unpack(buf); } void CreateDirectoryInodeResp::clear() { - mtime = EggsTime(); + mtime = TernTime(); } bool CreateDirectoryInodeResp::operator==(const CreateDirectoryInodeResp& rhs) const { - if ((EggsTime)this->mtime != (EggsTime)rhs.mtime) { return false; }; + if ((TernTime)this->mtime != (TernTime)rhs.mtime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CreateDirectoryInodeResp& x) { @@ -3375,13 +3375,13 @@ void CreateLockedCurrentEdgeReq::clear() { dirId = InodeId(); name.clear(); targetId = InodeId(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); } bool CreateLockedCurrentEdgeReq::operator==(const CreateLockedCurrentEdgeReq& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (name != rhs.name) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeReq& x) { @@ -3396,10 +3396,10 @@ void CreateLockedCurrentEdgeResp::unpack(BincodeBuf& buf) { creationTime.unpack(buf); } 
void CreateLockedCurrentEdgeResp::clear() { - creationTime = EggsTime(); + creationTime = TernTime(); } bool CreateLockedCurrentEdgeResp::operator==(const CreateLockedCurrentEdgeResp& rhs) const { - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeResp& x) { @@ -3422,13 +3422,13 @@ void LockCurrentEdgeReq::unpack(BincodeBuf& buf) { void LockCurrentEdgeReq::clear() { dirId = InodeId(); targetId = InodeId(); - creationTime = EggsTime(); + creationTime = TernTime(); name.clear(); } bool LockCurrentEdgeReq::operator==(const LockCurrentEdgeReq& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; if (name != rhs.name) { return false; }; return true; } @@ -3468,14 +3468,14 @@ void UnlockCurrentEdgeReq::unpack(BincodeBuf& buf) { void UnlockCurrentEdgeReq::clear() { dirId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); targetId = InodeId(); wasMoved = bool(0); } bool UnlockCurrentEdgeReq::operator==(const UnlockCurrentEdgeReq& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((bool)this->wasMoved != (bool)rhs.wasMoved) { return false; }; return true; @@ -3515,13 +3515,13 @@ void RemoveOwnedSnapshotFileEdgeReq::clear() { ownerId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool RemoveOwnedSnapshotFileEdgeReq::operator==(const RemoveOwnedSnapshotFileEdgeReq& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RemoveOwnedSnapshotFileEdgeReq& x) { @@ -3611,11 +3611,11 @@ void MakeDirectoryResp::unpack(BincodeBuf& buf) { } void MakeDirectoryResp::clear() { id = InodeId(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool MakeDirectoryResp::operator==(const MakeDirectoryResp& rhs) const { if ((InodeId)this->id != (InodeId)rhs.id) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const MakeDirectoryResp& x) { @@ -3643,7 +3643,7 @@ void RenameFileReq::clear() { targetId = InodeId(); oldOwnerId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newOwnerId = InodeId(); newName.clear(); } @@ -3651,7 +3651,7 @@ bool RenameFileReq::operator==(const RenameFileReq& rhs) const { if ((InodeId)this->targetId != 
(InodeId)rhs.targetId) { return false; }; if ((InodeId)this->oldOwnerId != (InodeId)rhs.oldOwnerId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if ((InodeId)this->newOwnerId != (InodeId)rhs.newOwnerId) { return false; }; if (newName != rhs.newName) { return false; }; return true; @@ -3668,10 +3668,10 @@ void RenameFileResp::unpack(BincodeBuf& buf) { creationTime.unpack(buf); } void RenameFileResp::clear() { - creationTime = EggsTime(); + creationTime = TernTime(); } bool RenameFileResp::operator==(const RenameFileResp& rhs) const { - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RenameFileResp& x) { @@ -3694,13 +3694,13 @@ void SoftUnlinkDirectoryReq::unpack(BincodeBuf& buf) { void SoftUnlinkDirectoryReq::clear() { ownerId = InodeId(); targetId = InodeId(); - creationTime = EggsTime(); + creationTime = TernTime(); name.clear(); } bool SoftUnlinkDirectoryReq::operator==(const SoftUnlinkDirectoryReq& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; if (name != rhs.name) { return false; }; return true; } @@ -3743,7 +3743,7 @@ void RenameDirectoryReq::clear() { targetId = InodeId(); oldOwnerId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newOwnerId = InodeId(); newName.clear(); } @@ -3751,7 +3751,7 @@ bool RenameDirectoryReq::operator==(const RenameDirectoryReq& rhs) const { if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((InodeId)this->oldOwnerId != (InodeId)rhs.oldOwnerId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if ((InodeId)this->newOwnerId != (InodeId)rhs.newOwnerId) { return false; }; if (newName != rhs.newName) { return false; }; return true; @@ -3768,10 +3768,10 @@ void RenameDirectoryResp::unpack(BincodeBuf& buf) { creationTime.unpack(buf); } void RenameDirectoryResp::clear() { - creationTime = EggsTime(); + creationTime = TernTime(); } bool RenameDirectoryResp::operator==(const RenameDirectoryResp& rhs) const { - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RenameDirectoryResp& x) { @@ -3827,13 +3827,13 @@ void CrossShardHardUnlinkFileReq::clear() { ownerId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool CrossShardHardUnlinkFileReq::operator==(const CrossShardHardUnlinkFileReq& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if 
((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CrossShardHardUnlinkFileReq& x) { @@ -3943,11 +3943,11 @@ void LocalCdcResp::unpack(BincodeBuf& buf) { } void LocalCdcResp::clear() { addrs.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); } bool LocalCdcResp::operator==(const LocalCdcResp& rhs) const { if (addrs != rhs.addrs) { return false; }; - if ((EggsTime)this->lastSeen != (EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LocalCdcResp& x) { @@ -4042,10 +4042,10 @@ void LocalChangedBlockServicesReq::unpack(BincodeBuf& buf) { changedSince.unpack(buf); } void LocalChangedBlockServicesReq::clear() { - changedSince = EggsTime(); + changedSince = TernTime(); } bool LocalChangedBlockServicesReq::operator==(const LocalChangedBlockServicesReq& rhs) const { - if ((EggsTime)this->changedSince != (EggsTime)rhs.changedSince) { return false; }; + if ((TernTime)this->changedSince != (TernTime)rhs.changedSince) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LocalChangedBlockServicesReq& x) { @@ -4062,11 +4062,11 @@ void LocalChangedBlockServicesResp::unpack(BincodeBuf& buf) { buf.unpackList(blockServices); } void LocalChangedBlockServicesResp::clear() { - lastChange = EggsTime(); + lastChange = TernTime(); blockServices.clear(); } bool LocalChangedBlockServicesResp::operator==(const LocalChangedBlockServicesResp& rhs) const { - if ((EggsTime)this->lastChange != (EggsTime)rhs.lastChange) { return false; }; + if ((TernTime)this->lastChange != (TernTime)rhs.lastChange) { return false; }; if (blockServices != rhs.blockServices) { return false; }; return true; } @@ -4349,11 +4349,11 @@ void ChangedBlockServicesAtLocationReq::unpack(BincodeBuf& buf) { } void ChangedBlockServicesAtLocationReq::clear() { locationId = uint8_t(0); - changedSince = EggsTime(); + changedSince = TernTime(); } bool ChangedBlockServicesAtLocationReq::operator==(const ChangedBlockServicesAtLocationReq& rhs) const { if ((uint8_t)this->locationId != (uint8_t)rhs.locationId) { return false; }; - if ((EggsTime)this->changedSince != (EggsTime)rhs.changedSince) { return false; }; + if ((TernTime)this->changedSince != (TernTime)rhs.changedSince) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const ChangedBlockServicesAtLocationReq& x) { @@ -4370,11 +4370,11 @@ void ChangedBlockServicesAtLocationResp::unpack(BincodeBuf& buf) { buf.unpackList(blockServices); } void ChangedBlockServicesAtLocationResp::clear() { - lastChange = EggsTime(); + lastChange = TernTime(); blockServices.clear(); } bool ChangedBlockServicesAtLocationResp::operator==(const ChangedBlockServicesAtLocationResp& rhs) const { - if ((EggsTime)this->lastChange != (EggsTime)rhs.lastChange) { return false; }; + if ((TernTime)this->lastChange != (TernTime)rhs.lastChange) { return false; }; if (blockServices != rhs.blockServices) { return false; }; return true; } @@ -4447,11 +4447,11 @@ void CdcAtLocationResp::unpack(BincodeBuf& buf) { } void CdcAtLocationResp::clear() { addrs.clear(); - lastSeen = EggsTime(); + lastSeen = TernTime(); } bool CdcAtLocationResp::operator==(const CdcAtLocationResp& rhs) const { if (addrs != rhs.addrs) { return false; }; - if ((EggsTime)this->lastSeen != 
(EggsTime)rhs.lastSeen) { return false; }; + if ((TernTime)this->lastSeen != (TernTime)rhs.lastSeen) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CdcAtLocationResp& x) { @@ -5194,16 +5194,16 @@ std::ostream& operator<<(std::ostream& out, const LogWriteReq& x) { } void LogWriteResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); } void LogWriteResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); } void LogWriteResp::clear() { - result = EggsError(0); + result = TernError(0); } bool LogWriteResp::operator==(const LogWriteResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LogWriteResp& x) { @@ -5234,16 +5234,16 @@ std::ostream& operator<<(std::ostream& out, const ReleaseReq& x) { } void ReleaseResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); } void ReleaseResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); } void ReleaseResp::clear() { - result = EggsError(0); + result = TernError(0); } bool ReleaseResp::operator==(const ReleaseResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const ReleaseResp& x) { @@ -5270,19 +5270,19 @@ std::ostream& operator<<(std::ostream& out, const LogReadReq& x) { } void LogReadResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); buf.packList(value); } void LogReadResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); buf.unpackList(value); } void LogReadResp::clear() { - result = EggsError(0); + result = TernError(0); value.clear(); } bool LogReadResp::operator==(const LogReadResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; if (value != rhs.value) { return false; }; return true; } @@ -5310,19 +5310,19 @@ std::ostream& operator<<(std::ostream& out, const NewLeaderReq& x) { } void NewLeaderResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); lastReleased.pack(buf); } void NewLeaderResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); lastReleased.unpack(buf); } void NewLeaderResp::clear() { - result = EggsError(0); + result = TernError(0); lastReleased = LogIdx(); } bool NewLeaderResp::operator==(const NewLeaderResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; if ((LogIdx)this->lastReleased != (LogIdx)rhs.lastReleased) { return false; }; return true; } @@ -5354,16 +5354,16 @@ std::ostream& operator<<(std::ostream& out, const NewLeaderConfirmReq& x) { } void NewLeaderConfirmResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); } void NewLeaderConfirmResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); } void NewLeaderConfirmResp::clear() { - result = EggsError(0); + result = TernError(0); } bool NewLeaderConfirmResp::operator==(const NewLeaderConfirmResp& rhs) const { 
- if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const NewLeaderConfirmResp& x) { @@ -5394,19 +5394,19 @@ std::ostream& operator<<(std::ostream& out, const LogRecoveryReadReq& x) { } void LogRecoveryReadResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); buf.packList(value); } void LogRecoveryReadResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); buf.unpackList(value); } void LogRecoveryReadResp::clear() { - result = EggsError(0); + result = TernError(0); value.clear(); } bool LogRecoveryReadResp::operator==(const LogRecoveryReadResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; if (value != rhs.value) { return false; }; return true; } @@ -5442,16 +5442,16 @@ std::ostream& operator<<(std::ostream& out, const LogRecoveryWriteReq& x) { } void LogRecoveryWriteResp::pack(BincodeBuf& buf) const { - buf.packScalar(result); + buf.packScalar(result); } void LogRecoveryWriteResp::unpack(BincodeBuf& buf) { - result = buf.unpackScalar(); + result = buf.unpackScalar(); } void LogRecoveryWriteResp::clear() { - result = EggsError(0); + result = TernError(0); } bool LogRecoveryWriteResp::operator==(const LogRecoveryWriteResp& rhs) const { - if ((EggsError)this->result != (EggsError)rhs.result) { return false; }; + if ((TernError)this->result != (TernError)rhs.result) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const LogRecoveryWriteResp& x) { @@ -6005,7 +6005,7 @@ void ShardReqContainer::operator=(const ShardReqContainer& other) { setMakeFileTransient() = other.getMakeFileTransient(); break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", other.kind()); } } @@ -6106,7 +6106,7 @@ size_t ShardReqContainer::packedSize() const { case ShardMessageKind::MAKE_FILE_TRANSIENT: return sizeof(ShardMessageKind) + std::get<43>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", _kind); } } @@ -6246,7 +6246,7 @@ void ShardReqContainer::pack(BincodeBuf& buf) const { std::get<43>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", _kind); } } @@ -6625,16 +6625,16 @@ std::ostream& operator<<(std::ostream& out, const ShardReqContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", x.kind()); } return out; } -const EggsError& ShardRespContainer::getError() const { +const TernError& ShardRespContainer::getError() const { ALWAYS_ASSERT(_kind == ShardMessageKind::ERROR, "%s != %s", _kind, ShardMessageKind::ERROR); return std::get<0>(_data); } -EggsError& ShardRespContainer::setError() { +TernError& ShardRespContainer::setError() { _kind = ShardMessageKind::ERROR; auto& x = _data.emplace<0>(); return x; @@ -7188,7 +7188,7 @@ void ShardRespContainer::operator=(const ShardRespContainer& other) { setMakeFileTransient() = other.getMakeFileTransient(); break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", other.kind()); + throw 
TERN_EXCEPTION("bad ShardMessageKind kind %s", other.kind()); } } @@ -7201,7 +7201,7 @@ void ShardRespContainer::operator=(ShardRespContainer&& other) { size_t ShardRespContainer::packedSize() const { switch (_kind) { case ShardMessageKind::ERROR: - return sizeof(ShardMessageKind) + sizeof(EggsError); + return sizeof(ShardMessageKind) + sizeof(TernError); case ShardMessageKind::LOOKUP: return sizeof(ShardMessageKind) + std::get<1>(_data).packedSize(); case ShardMessageKind::STAT_FILE: @@ -7291,7 +7291,7 @@ size_t ShardRespContainer::packedSize() const { case ShardMessageKind::MAKE_FILE_TRANSIENT: return sizeof(ShardMessageKind) + std::get<44>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", _kind); } } @@ -7299,7 +7299,7 @@ void ShardRespContainer::pack(BincodeBuf& buf) const { buf.packScalar(_kind); switch (_kind) { case ShardMessageKind::ERROR: - buf.packScalar(std::get<0>(_data)); + buf.packScalar(std::get<0>(_data)); break; case ShardMessageKind::LOOKUP: std::get<1>(_data).pack(buf); @@ -7434,7 +7434,7 @@ void ShardRespContainer::pack(BincodeBuf& buf) const { std::get<44>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", _kind); } } @@ -7442,7 +7442,7 @@ void ShardRespContainer::unpack(BincodeBuf& buf) { _kind = buf.unpackScalar(); switch (_kind) { case ShardMessageKind::ERROR: - _data.emplace<0>(buf.unpackScalar()); + _data.emplace<0>(buf.unpackScalar()); break; case ShardMessageKind::LOOKUP: _data.emplace<1>().unpack(buf); @@ -7821,7 +7821,7 @@ std::ostream& operator<<(std::ostream& out, const ShardRespContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad ShardMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad ShardMessageKind kind %s", x.kind()); } return out; } @@ -7928,7 +7928,7 @@ void CDCReqContainer::operator=(const CDCReqContainer& other) { setCdcSnapshot() = other.getCdcSnapshot(); break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", other.kind()); } } @@ -7955,7 +7955,7 @@ size_t CDCReqContainer::packedSize() const { case CDCMessageKind::CDC_SNAPSHOT: return sizeof(CDCMessageKind) + std::get<6>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", _kind); } } @@ -7984,7 +7984,7 @@ void CDCReqContainer::pack(BincodeBuf& buf) const { std::get<6>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", _kind); } } @@ -8067,16 +8067,16 @@ std::ostream& operator<<(std::ostream& out, const CDCReqContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", x.kind()); } return out; } -const EggsError& CDCRespContainer::getError() const { +const TernError& CDCRespContainer::getError() const { ALWAYS_ASSERT(_kind == CDCMessageKind::ERROR, "%s != %s", _kind, CDCMessageKind::ERROR); return std::get<0>(_data); } -EggsError& CDCRespContainer::setError() { +TernError& CDCRespContainer::setError() { _kind = CDCMessageKind::ERROR; auto& x = _data.emplace<0>(); return x; @@ -8186,7 +8186,7 @@ void CDCRespContainer::operator=(const CDCRespContainer& other) { setCdcSnapshot() = other.getCdcSnapshot(); 
break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", other.kind()); } } @@ -8199,7 +8199,7 @@ void CDCRespContainer::operator=(CDCRespContainer&& other) { size_t CDCRespContainer::packedSize() const { switch (_kind) { case CDCMessageKind::ERROR: - return sizeof(CDCMessageKind) + sizeof(EggsError); + return sizeof(CDCMessageKind) + sizeof(TernError); case CDCMessageKind::MAKE_DIRECTORY: return sizeof(CDCMessageKind) + std::get<1>(_data).packedSize(); case CDCMessageKind::RENAME_FILE: @@ -8215,7 +8215,7 @@ size_t CDCRespContainer::packedSize() const { case CDCMessageKind::CDC_SNAPSHOT: return sizeof(CDCMessageKind) + std::get<7>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", _kind); } } @@ -8223,7 +8223,7 @@ void CDCRespContainer::pack(BincodeBuf& buf) const { buf.packScalar(_kind); switch (_kind) { case CDCMessageKind::ERROR: - buf.packScalar(std::get<0>(_data)); + buf.packScalar(std::get<0>(_data)); break; case CDCMessageKind::MAKE_DIRECTORY: std::get<1>(_data).pack(buf); @@ -8247,7 +8247,7 @@ void CDCRespContainer::pack(BincodeBuf& buf) const { std::get<7>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", _kind); } } @@ -8255,7 +8255,7 @@ void CDCRespContainer::unpack(BincodeBuf& buf) { _kind = buf.unpackScalar(); switch (_kind) { case CDCMessageKind::ERROR: - _data.emplace<0>(buf.unpackScalar()); + _data.emplace<0>(buf.unpackScalar()); break; case CDCMessageKind::MAKE_DIRECTORY: _data.emplace<1>().unpack(buf); @@ -8338,7 +8338,7 @@ std::ostream& operator<<(std::ostream& out, const CDCRespContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad CDCMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad CDCMessageKind kind %s", x.kind()); } return out; } @@ -8697,7 +8697,7 @@ void ShuckleReqContainer::operator=(const ShuckleReqContainer& other) { setUpdateBlockServicePath() = other.getUpdateBlockServicePath(); break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", other.kind()); } } @@ -8766,7 +8766,7 @@ size_t ShuckleReqContainer::packedSize() const { case ShuckleMessageKind::UPDATE_BLOCK_SERVICE_PATH: return sizeof(ShuckleMessageKind) + std::get<27>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); } } @@ -8858,7 +8858,7 @@ void ShuckleReqContainer::pack(BincodeBuf& buf) const { std::get<27>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); } } @@ -9109,16 +9109,16 @@ std::ostream& operator<<(std::ostream& out, const ShuckleReqContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", x.kind()); } return out; } -const EggsError& ShuckleRespContainer::getError() const { +const TernError& ShuckleRespContainer::getError() const { ALWAYS_ASSERT(_kind == ShuckleMessageKind::ERROR, "%s != %s", _kind, ShuckleMessageKind::ERROR); return std::get<0>(_data); } -EggsError& ShuckleRespContainer::setError() { +TernError& ShuckleRespContainer::setError() { _kind = 
ShuckleMessageKind::ERROR; auto& x = _data.emplace<0>(); return x; @@ -9480,7 +9480,7 @@ void ShuckleRespContainer::operator=(const ShuckleRespContainer& other) { setUpdateBlockServicePath() = other.getUpdateBlockServicePath(); break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", other.kind()); } } @@ -9493,7 +9493,7 @@ void ShuckleRespContainer::operator=(ShuckleRespContainer&& other) { size_t ShuckleRespContainer::packedSize() const { switch (_kind) { case ShuckleMessageKind::ERROR: - return sizeof(ShuckleMessageKind) + sizeof(EggsError); + return sizeof(ShuckleMessageKind) + sizeof(TernError); case ShuckleMessageKind::LOCAL_SHARDS: return sizeof(ShuckleMessageKind) + std::get<1>(_data).packedSize(); case ShuckleMessageKind::LOCAL_CDC: @@ -9551,7 +9551,7 @@ size_t ShuckleRespContainer::packedSize() const { case ShuckleMessageKind::UPDATE_BLOCK_SERVICE_PATH: return sizeof(ShuckleMessageKind) + std::get<28>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); } } @@ -9559,7 +9559,7 @@ void ShuckleRespContainer::pack(BincodeBuf& buf) const { buf.packScalar(_kind); switch (_kind) { case ShuckleMessageKind::ERROR: - buf.packScalar(std::get<0>(_data)); + buf.packScalar(std::get<0>(_data)); break; case ShuckleMessageKind::LOCAL_SHARDS: std::get<1>(_data).pack(buf); @@ -9646,7 +9646,7 @@ void ShuckleRespContainer::pack(BincodeBuf& buf) const { std::get<28>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", _kind); } } @@ -9654,7 +9654,7 @@ void ShuckleRespContainer::unpack(BincodeBuf& buf) { _kind = buf.unpackScalar(); switch (_kind) { case ShuckleMessageKind::ERROR: - _data.emplace<0>(buf.unpackScalar()); + _data.emplace<0>(buf.unpackScalar()); break; case ShuckleMessageKind::LOCAL_SHARDS: _data.emplace<1>().unpack(buf); @@ -9905,7 +9905,7 @@ std::ostream& operator<<(std::ostream& out, const ShuckleRespContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad ShuckleMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad ShuckleMessageKind kind %s", x.kind()); } return out; } @@ -10012,7 +10012,7 @@ void LogReqContainer::operator=(const LogReqContainer& other) { setLogRecoveryWrite() = other.getLogRecoveryWrite(); break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", other.kind()); } } @@ -10039,7 +10039,7 @@ size_t LogReqContainer::packedSize() const { case LogMessageKind::LOG_RECOVERY_WRITE: return sizeof(LogMessageKind) + std::get<6>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", _kind); } } @@ -10068,7 +10068,7 @@ void LogReqContainer::pack(BincodeBuf& buf) const { std::get<6>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", _kind); } } @@ -10151,16 +10151,16 @@ std::ostream& operator<<(std::ostream& out, const LogReqContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", x.kind()); } return out; } -const EggsError& LogRespContainer::getError() const { +const TernError& 
LogRespContainer::getError() const { ALWAYS_ASSERT(_kind == LogMessageKind::ERROR, "%s != %s", _kind, LogMessageKind::ERROR); return std::get<0>(_data); } -EggsError& LogRespContainer::setError() { +TernError& LogRespContainer::setError() { _kind = LogMessageKind::ERROR; auto& x = _data.emplace<0>(); return x; @@ -10270,7 +10270,7 @@ void LogRespContainer::operator=(const LogRespContainer& other) { setLogRecoveryWrite() = other.getLogRecoveryWrite(); break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", other.kind()); } } @@ -10283,7 +10283,7 @@ void LogRespContainer::operator=(LogRespContainer&& other) { size_t LogRespContainer::packedSize() const { switch (_kind) { case LogMessageKind::ERROR: - return sizeof(LogMessageKind) + sizeof(EggsError); + return sizeof(LogMessageKind) + sizeof(TernError); case LogMessageKind::LOG_WRITE: return sizeof(LogMessageKind) + std::get<1>(_data).packedSize(); case LogMessageKind::RELEASE: @@ -10299,7 +10299,7 @@ size_t LogRespContainer::packedSize() const { case LogMessageKind::LOG_RECOVERY_WRITE: return sizeof(LogMessageKind) + std::get<7>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", _kind); } } @@ -10307,7 +10307,7 @@ void LogRespContainer::pack(BincodeBuf& buf) const { buf.packScalar(_kind); switch (_kind) { case LogMessageKind::ERROR: - buf.packScalar(std::get<0>(_data)); + buf.packScalar(std::get<0>(_data)); break; case LogMessageKind::LOG_WRITE: std::get<1>(_data).pack(buf); @@ -10331,7 +10331,7 @@ void LogRespContainer::pack(BincodeBuf& buf) const { std::get<7>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", _kind); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", _kind); } } @@ -10339,7 +10339,7 @@ void LogRespContainer::unpack(BincodeBuf& buf) { _kind = buf.unpackScalar(); switch (_kind) { case LogMessageKind::ERROR: - _data.emplace<0>(buf.unpackScalar()); + _data.emplace<0>(buf.unpackScalar()); break; case LogMessageKind::LOG_WRITE: _data.emplace<1>().unpack(buf); @@ -10422,7 +10422,7 @@ std::ostream& operator<<(std::ostream& out, const LogRespContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad LogMessageKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad LogMessageKind kind %s", x.kind()); } return out; } @@ -10544,12 +10544,12 @@ void ConstructFileEntry::unpack(BincodeBuf& buf) { } void ConstructFileEntry::clear() { type = uint8_t(0); - deadlineTime = EggsTime(); + deadlineTime = TernTime(); note.clear(); } bool ConstructFileEntry::operator==(const ConstructFileEntry& rhs) const { if ((uint8_t)this->type != (uint8_t)rhs.type) { return false; }; - if ((EggsTime)this->deadlineTime != (EggsTime)rhs.deadlineTime) { return false; }; + if ((TernTime)this->deadlineTime != (TernTime)rhs.deadlineTime) { return false; }; if (note != rhs.note) { return false; }; return true; } @@ -10602,14 +10602,14 @@ void SameDirectoryRenameEntry::clear() { dirId = InodeId(); targetId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newName.clear(); } bool SameDirectoryRenameEntry::operator==(const SameDirectoryRenameEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != 
(EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if (newName != rhs.newName) { return false; }; return true; } @@ -10634,13 +10634,13 @@ void SoftUnlinkFileEntry::clear() { ownerId = InodeId(); fileId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool SoftUnlinkFileEntry::operator==(const SoftUnlinkFileEntry& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->fileId != (InodeId)rhs.fileId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SoftUnlinkFileEntry& x) { @@ -10690,13 +10690,13 @@ void CreateLockedCurrentEdgeEntry::clear() { dirId = InodeId(); name.clear(); targetId = InodeId(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); } bool CreateLockedCurrentEdgeEntry::operator==(const CreateLockedCurrentEdgeEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (name != rhs.name) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeEntry& x) { @@ -10721,14 +10721,14 @@ void UnlockCurrentEdgeEntry::unpack(BincodeBuf& buf) { void UnlockCurrentEdgeEntry::clear() { dirId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); targetId = InodeId(); wasMoved = bool(0); } bool UnlockCurrentEdgeEntry::operator==(const UnlockCurrentEdgeEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if ((bool)this->wasMoved != (bool)rhs.wasMoved) { return false; }; return true; @@ -10753,13 +10753,13 @@ void LockCurrentEdgeEntry::unpack(BincodeBuf& buf) { void LockCurrentEdgeEntry::clear() { dirId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); targetId = InodeId(); } bool LockCurrentEdgeEntry::operator==(const LockCurrentEdgeEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; return true; } @@ -10868,13 +10868,13 @@ void RemoveNonOwnedEdgeEntry::clear() { dirId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool RemoveNonOwnedEdgeEntry::operator==(const RemoveNonOwnedEdgeEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if 
((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RemoveNonOwnedEdgeEntry& x) { @@ -10892,11 +10892,11 @@ void ScrapTransientFileEntry::unpack(BincodeBuf& buf) { } void ScrapTransientFileEntry::clear() { id = InodeId(); - deadlineTime = EggsTime(); + deadlineTime = TernTime(); } bool ScrapTransientFileEntry::operator==(const ScrapTransientFileEntry& rhs) const { if ((InodeId)this->id != (InodeId)rhs.id) { return false; }; - if ((EggsTime)this->deadlineTime != (EggsTime)rhs.deadlineTime) { return false; }; + if ((TernTime)this->deadlineTime != (TernTime)rhs.deadlineTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const ScrapTransientFileEntry& x) { @@ -11108,13 +11108,13 @@ void RemoveOwnedSnapshotFileEdgeEntry::clear() { ownerId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); + creationTime = TernTime(); } bool RemoveOwnedSnapshotFileEdgeEntry::operator==(const RemoveOwnedSnapshotFileEdgeEntry& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const RemoveOwnedSnapshotFileEdgeEntry& x) { @@ -11306,14 +11306,14 @@ void SameDirectoryRenameSnapshotEntry::clear() { dirId = InodeId(); targetId = InodeId(); oldName.clear(); - oldCreationTime = EggsTime(); + oldCreationTime = TernTime(); newName.clear(); } bool SameDirectoryRenameSnapshotEntry::operator==(const SameDirectoryRenameSnapshotEntry& rhs) const { if ((InodeId)this->dirId != (InodeId)rhs.dirId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (oldName != rhs.oldName) { return false; }; - if ((EggsTime)this->oldCreationTime != (EggsTime)rhs.oldCreationTime) { return false; }; + if ((TernTime)this->oldCreationTime != (TernTime)rhs.oldCreationTime) { return false; }; if (newName != rhs.newName) { return false; }; return true; } @@ -11436,15 +11436,15 @@ void SameShardHardFileUnlinkEntry::clear() { ownerId = InodeId(); targetId = InodeId(); name.clear(); - creationTime = EggsTime(); - deadlineTime = EggsTime(); + creationTime = TernTime(); + deadlineTime = TernTime(); } bool SameShardHardFileUnlinkEntry::operator==(const SameShardHardFileUnlinkEntry& rhs) const { if ((InodeId)this->ownerId != (InodeId)rhs.ownerId) { return false; }; if ((InodeId)this->targetId != (InodeId)rhs.targetId) { return false; }; if (name != rhs.name) { return false; }; - if ((EggsTime)this->creationTime != (EggsTime)rhs.creationTime) { return false; }; - if ((EggsTime)this->deadlineTime != (EggsTime)rhs.deadlineTime) { return false; }; + if ((TernTime)this->creationTime != (TernTime)rhs.creationTime) { return false; }; + if ((TernTime)this->deadlineTime != (TernTime)rhs.deadlineTime) { return false; }; return true; } std::ostream& operator<<(std::ostream& out, const SameShardHardFileUnlinkEntry& x) { @@ -11464,12 +11464,12 @@ void MakeFileTransientEntry::unpack(BincodeBuf& buf) { } void MakeFileTransientEntry::clear() { id = InodeId(); - deadlineTime = EggsTime(); + deadlineTime = TernTime(); note.clear(); } 
bool MakeFileTransientEntry::operator==(const MakeFileTransientEntry& rhs) const { if ((InodeId)this->id != (InodeId)rhs.id) { return false; }; - if ((EggsTime)this->deadlineTime != (EggsTime)rhs.deadlineTime) { return false; }; + if ((TernTime)this->deadlineTime != (TernTime)rhs.deadlineTime) { return false; }; if (note != rhs.note) { return false; }; return true; } @@ -11868,7 +11868,7 @@ void ShardLogEntryContainer::operator=(const ShardLogEntryContainer& other) { setMakeFileTransient() = other.getMakeFileTransient(); break; default: - throw EGGS_EXCEPTION("bad ShardLogEntryKind kind %s", other.kind()); + throw TERN_EXCEPTION("bad ShardLogEntryKind kind %s", other.kind()); } } @@ -11943,7 +11943,7 @@ size_t ShardLogEntryContainer::packedSize() const { case ShardLogEntryKind::MAKE_FILE_TRANSIENT: return sizeof(ShardLogEntryKind) + std::get<30>(_data).packedSize(); default: - throw EGGS_EXCEPTION("bad ShardLogEntryKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardLogEntryKind kind %s", _kind); } } @@ -12044,7 +12044,7 @@ void ShardLogEntryContainer::pack(BincodeBuf& buf) const { std::get<30>(_data).pack(buf); break; default: - throw EGGS_EXCEPTION("bad ShardLogEntryKind kind %s", _kind); + throw TERN_EXCEPTION("bad ShardLogEntryKind kind %s", _kind); } } @@ -12319,7 +12319,7 @@ std::ostream& operator<<(std::ostream& out, const ShardLogEntryContainer& x) { out << "EMPTY"; break; default: - throw EGGS_EXCEPTION("bad ShardLogEntryKind kind %s", x.kind()); + throw TERN_EXCEPTION("bad ShardLogEntryKind kind %s", x.kind()); } return out; } diff --git a/cpp/core/MsgsGen.hpp b/cpp/core/MsgsGen.hpp index 88212efc..65589578 100644 --- a/cpp/core/MsgsGen.hpp +++ b/cpp/core/MsgsGen.hpp @@ -3,7 +3,7 @@ #pragma once #include "Msgs.hpp" -enum class EggsError : uint16_t { +enum class TernError : uint16_t { NO_ERROR = 0, INTERNAL_ERROR = 10, FATAL_ERROR = 11, @@ -99,104 +99,104 @@ enum class EggsError : uint16_t { SWAP_BLOCKS_MISMATCHING_LOCATION = 101, }; -std::ostream& operator<<(std::ostream& out, EggsError err); +std::ostream& operator<<(std::ostream& out, TernError err); -const std::vector allEggsErrors { - EggsError::INTERNAL_ERROR, - EggsError::FATAL_ERROR, - EggsError::TIMEOUT, - EggsError::MALFORMED_REQUEST, - EggsError::MALFORMED_RESPONSE, - EggsError::NOT_AUTHORISED, - EggsError::UNRECOGNIZED_REQUEST, - EggsError::FILE_NOT_FOUND, - EggsError::DIRECTORY_NOT_FOUND, - EggsError::NAME_NOT_FOUND, - EggsError::EDGE_NOT_FOUND, - EggsError::EDGE_IS_LOCKED, - EggsError::TYPE_IS_DIRECTORY, - EggsError::TYPE_IS_NOT_DIRECTORY, - EggsError::BAD_COOKIE, - EggsError::INCONSISTENT_STORAGE_CLASS_PARITY, - EggsError::LAST_SPAN_STATE_NOT_CLEAN, - EggsError::COULD_NOT_PICK_BLOCK_SERVICES, - EggsError::BAD_SPAN_BODY, - EggsError::SPAN_NOT_FOUND, - EggsError::BLOCK_SERVICE_NOT_FOUND, - EggsError::CANNOT_CERTIFY_BLOCKLESS_SPAN, - EggsError::BAD_NUMBER_OF_BLOCKS_PROOFS, - EggsError::BAD_BLOCK_PROOF, - EggsError::CANNOT_OVERRIDE_NAME, - EggsError::NAME_IS_LOCKED, - EggsError::MTIME_IS_TOO_RECENT, - EggsError::MISMATCHING_TARGET, - EggsError::MISMATCHING_OWNER, - EggsError::MISMATCHING_CREATION_TIME, - EggsError::DIRECTORY_NOT_EMPTY, - EggsError::FILE_IS_TRANSIENT, - EggsError::OLD_DIRECTORY_NOT_FOUND, - EggsError::NEW_DIRECTORY_NOT_FOUND, - EggsError::LOOP_IN_DIRECTORY_RENAME, - EggsError::DIRECTORY_HAS_OWNER, - EggsError::FILE_IS_NOT_TRANSIENT, - EggsError::FILE_NOT_EMPTY, - EggsError::CANNOT_REMOVE_ROOT_DIRECTORY, - EggsError::FILE_EMPTY, - EggsError::CANNOT_REMOVE_DIRTY_SPAN, - EggsError::BAD_SHARD, - 
EggsError::BAD_NAME, - EggsError::MORE_RECENT_SNAPSHOT_EDGE, - EggsError::MORE_RECENT_CURRENT_EDGE, - EggsError::BAD_DIRECTORY_INFO, - EggsError::DEADLINE_NOT_PASSED, - EggsError::SAME_SOURCE_AND_DESTINATION, - EggsError::SAME_DIRECTORIES, - EggsError::SAME_SHARD, - EggsError::BAD_PROTOCOL_VERSION, - EggsError::BAD_CERTIFICATE, - EggsError::BLOCK_TOO_RECENT_FOR_DELETION, - EggsError::BLOCK_FETCH_OUT_OF_BOUNDS, - EggsError::BAD_BLOCK_CRC, - EggsError::BLOCK_TOO_BIG, - EggsError::BLOCK_NOT_FOUND, - EggsError::CANNOT_UNSET_DECOMMISSIONED, - EggsError::CANNOT_REGISTER_DECOMMISSIONED_OR_STALE, - EggsError::BLOCK_TOO_OLD_FOR_WRITE, - EggsError::BLOCK_IO_ERROR_DEVICE, - EggsError::BLOCK_IO_ERROR_FILE, - EggsError::INVALID_REPLICA, - EggsError::DIFFERENT_ADDRS_INFO, - EggsError::LEADER_PREEMPTED, - EggsError::LOG_ENTRY_MISSING, - EggsError::LOG_ENTRY_TRIMMED, - EggsError::LOG_ENTRY_UNRELEASED, - EggsError::LOG_ENTRY_RELEASED, - EggsError::AUTO_DECOMMISSION_FORBIDDEN, - EggsError::INCONSISTENT_BLOCK_SERVICE_REGISTRATION, - EggsError::SWAP_BLOCKS_INLINE_STORAGE, - EggsError::SWAP_BLOCKS_MISMATCHING_SIZE, - EggsError::SWAP_BLOCKS_MISMATCHING_STATE, - EggsError::SWAP_BLOCKS_MISMATCHING_CRC, - EggsError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE, - EggsError::SWAP_SPANS_INLINE_STORAGE, - EggsError::SWAP_SPANS_MISMATCHING_SIZE, - EggsError::SWAP_SPANS_NOT_CLEAN, - EggsError::SWAP_SPANS_MISMATCHING_CRC, - EggsError::SWAP_SPANS_MISMATCHING_BLOCKS, - EggsError::EDGE_NOT_OWNED, - EggsError::CANNOT_CREATE_DB_SNAPSHOT, - EggsError::BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE, - EggsError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN, - EggsError::TRANSIENT_LOCATION_COUNT, - EggsError::ADD_SPAN_LOCATION_INLINE_STORAGE, - EggsError::ADD_SPAN_LOCATION_MISMATCHING_SIZE, - EggsError::ADD_SPAN_LOCATION_NOT_CLEAN, - EggsError::ADD_SPAN_LOCATION_MISMATCHING_CRC, - EggsError::ADD_SPAN_LOCATION_EXISTS, - EggsError::SWAP_BLOCKS_MISMATCHING_LOCATION, +const std::vector allTernErrors { + TernError::INTERNAL_ERROR, + TernError::FATAL_ERROR, + TernError::TIMEOUT, + TernError::MALFORMED_REQUEST, + TernError::MALFORMED_RESPONSE, + TernError::NOT_AUTHORISED, + TernError::UNRECOGNIZED_REQUEST, + TernError::FILE_NOT_FOUND, + TernError::DIRECTORY_NOT_FOUND, + TernError::NAME_NOT_FOUND, + TernError::EDGE_NOT_FOUND, + TernError::EDGE_IS_LOCKED, + TernError::TYPE_IS_DIRECTORY, + TernError::TYPE_IS_NOT_DIRECTORY, + TernError::BAD_COOKIE, + TernError::INCONSISTENT_STORAGE_CLASS_PARITY, + TernError::LAST_SPAN_STATE_NOT_CLEAN, + TernError::COULD_NOT_PICK_BLOCK_SERVICES, + TernError::BAD_SPAN_BODY, + TernError::SPAN_NOT_FOUND, + TernError::BLOCK_SERVICE_NOT_FOUND, + TernError::CANNOT_CERTIFY_BLOCKLESS_SPAN, + TernError::BAD_NUMBER_OF_BLOCKS_PROOFS, + TernError::BAD_BLOCK_PROOF, + TernError::CANNOT_OVERRIDE_NAME, + TernError::NAME_IS_LOCKED, + TernError::MTIME_IS_TOO_RECENT, + TernError::MISMATCHING_TARGET, + TernError::MISMATCHING_OWNER, + TernError::MISMATCHING_CREATION_TIME, + TernError::DIRECTORY_NOT_EMPTY, + TernError::FILE_IS_TRANSIENT, + TernError::OLD_DIRECTORY_NOT_FOUND, + TernError::NEW_DIRECTORY_NOT_FOUND, + TernError::LOOP_IN_DIRECTORY_RENAME, + TernError::DIRECTORY_HAS_OWNER, + TernError::FILE_IS_NOT_TRANSIENT, + TernError::FILE_NOT_EMPTY, + TernError::CANNOT_REMOVE_ROOT_DIRECTORY, + TernError::FILE_EMPTY, + TernError::CANNOT_REMOVE_DIRTY_SPAN, + TernError::BAD_SHARD, + TernError::BAD_NAME, + TernError::MORE_RECENT_SNAPSHOT_EDGE, + TernError::MORE_RECENT_CURRENT_EDGE, + TernError::BAD_DIRECTORY_INFO, + TernError::DEADLINE_NOT_PASSED, + 
TernError::SAME_SOURCE_AND_DESTINATION, + TernError::SAME_DIRECTORIES, + TernError::SAME_SHARD, + TernError::BAD_PROTOCOL_VERSION, + TernError::BAD_CERTIFICATE, + TernError::BLOCK_TOO_RECENT_FOR_DELETION, + TernError::BLOCK_FETCH_OUT_OF_BOUNDS, + TernError::BAD_BLOCK_CRC, + TernError::BLOCK_TOO_BIG, + TernError::BLOCK_NOT_FOUND, + TernError::CANNOT_UNSET_DECOMMISSIONED, + TernError::CANNOT_REGISTER_DECOMMISSIONED_OR_STALE, + TernError::BLOCK_TOO_OLD_FOR_WRITE, + TernError::BLOCK_IO_ERROR_DEVICE, + TernError::BLOCK_IO_ERROR_FILE, + TernError::INVALID_REPLICA, + TernError::DIFFERENT_ADDRS_INFO, + TernError::LEADER_PREEMPTED, + TernError::LOG_ENTRY_MISSING, + TernError::LOG_ENTRY_TRIMMED, + TernError::LOG_ENTRY_UNRELEASED, + TernError::LOG_ENTRY_RELEASED, + TernError::AUTO_DECOMMISSION_FORBIDDEN, + TernError::INCONSISTENT_BLOCK_SERVICE_REGISTRATION, + TernError::SWAP_BLOCKS_INLINE_STORAGE, + TernError::SWAP_BLOCKS_MISMATCHING_SIZE, + TernError::SWAP_BLOCKS_MISMATCHING_STATE, + TernError::SWAP_BLOCKS_MISMATCHING_CRC, + TernError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE, + TernError::SWAP_SPANS_INLINE_STORAGE, + TernError::SWAP_SPANS_MISMATCHING_SIZE, + TernError::SWAP_SPANS_NOT_CLEAN, + TernError::SWAP_SPANS_MISMATCHING_CRC, + TernError::SWAP_SPANS_MISMATCHING_BLOCKS, + TernError::EDGE_NOT_OWNED, + TernError::CANNOT_CREATE_DB_SNAPSHOT, + TernError::BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE, + TernError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN, + TernError::TRANSIENT_LOCATION_COUNT, + TernError::ADD_SPAN_LOCATION_INLINE_STORAGE, + TernError::ADD_SPAN_LOCATION_MISMATCHING_SIZE, + TernError::ADD_SPAN_LOCATION_NOT_CLEAN, + TernError::ADD_SPAN_LOCATION_MISMATCHING_CRC, + TernError::ADD_SPAN_LOCATION_EXISTS, + TernError::SWAP_BLOCKS_MISMATCHING_LOCATION, }; -constexpr int maxEggsError = 102; +constexpr int maxTernError = 102; enum class ShardMessageKind : uint8_t { ERROR = 0, @@ -505,7 +505,7 @@ struct CurrentEdge { InodeId targetId; uint64_t nameHash; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // targetId + nameHash + name + creationTime @@ -628,7 +628,7 @@ std::ostream& operator<<(std::ostream& out, const BlockService& x); struct ShardInfo { AddrsInfo addrs; - EggsTime lastSeen; + TernTime lastSeen; static constexpr uint16_t STATIC_SIZE = AddrsInfo::STATIC_SIZE + 8; // addrs + lastSeen @@ -1087,7 +1087,7 @@ struct Edge { InodeIdExtra targetId; uint64_t nameHash; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 1 + 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // current + targetId + nameHash + name + creationTime @@ -1112,7 +1112,7 @@ std::ostream& operator<<(std::ostream& out, const Edge& x); struct FullReadDirCursor { bool current; BincodeBytes startName; - EggsTime startTime; + TernTime startTime; static constexpr uint16_t STATIC_SIZE = 1 + BincodeBytes::STATIC_SIZE + 8; // current + startName + startTime @@ -1135,7 +1135,7 @@ std::ostream& operator<<(std::ostream& out, const FullReadDirCursor& x); struct TransientFile { InodeId id; BincodeFixedBytes<8> cookie; - EggsTime deadlineTime; + TernTime deadlineTime; static constexpr uint16_t STATIC_SIZE = 8 + BincodeFixedBytes<8>::STATIC_SIZE + 8; // id + cookie + deadlineTime @@ -1187,9 +1187,9 @@ struct BlockServiceDeprecatedInfo { uint64_t availableBytes; uint64_t blocks; BincodeBytes path; - EggsTime lastSeen; + TernTime lastSeen; bool hasFiles; - EggsTime flagsLastChanged; + TernTime flagsLastChanged; 
static constexpr uint16_t STATIC_SIZE = 8 + AddrsInfo::STATIC_SIZE + 1 + FailureDomain::STATIC_SIZE + BincodeFixedBytes<16>::STATIC_SIZE + 1 + 8 + 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + 1 + 8; // id + addrs + storageClass + failureDomain + secretKey + flags + capacityBytes + availableBytes + blocks + path + lastSeen + hasFiles + flagsLastChanged @@ -1307,7 +1307,7 @@ struct FullShardInfo { ShardReplicaId id; bool isLeader; AddrsInfo addrs; - EggsTime lastSeen; + TernTime lastSeen; uint8_t locationId; static constexpr uint16_t STATIC_SIZE = 2 + 1 + AddrsInfo::STATIC_SIZE + 8 + 1; // id + isLeader + addrs + lastSeen + locationId @@ -1376,7 +1376,7 @@ struct CdcInfo { uint8_t locationId; bool isLeader; AddrsInfo addrs; - EggsTime lastSeen; + TernTime lastSeen; static constexpr uint16_t STATIC_SIZE = 1 + 1 + 1 + AddrsInfo::STATIC_SIZE + 8; // replicaId + locationId + isLeader + addrs + lastSeen @@ -1442,7 +1442,7 @@ std::ostream& operator<<(std::ostream& out, const LookupReq& x); struct LookupResp { InodeId targetId; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8; // targetId + creationTime @@ -1481,8 +1481,8 @@ struct StatFileReq { std::ostream& operator<<(std::ostream& out, const StatFileReq& x); struct StatFileResp { - EggsTime mtime; - EggsTime atime; + TernTime mtime; + TernTime atime; uint64_t size; static constexpr uint16_t STATIC_SIZE = 8 + 8 + 8; // mtime + atime + size @@ -1523,7 +1523,7 @@ struct StatDirectoryReq { std::ostream& operator<<(std::ostream& out, const StatDirectoryReq& x); struct StatDirectoryResp { - EggsTime mtime; + TernTime mtime; InodeId owner; DirectoryInfo info; @@ -1757,7 +1757,7 @@ struct LinkFileReq { std::ostream& operator<<(std::ostream& out, const LinkFileReq& x); struct LinkFileResp { - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8; // creationTime @@ -1779,7 +1779,7 @@ struct SoftUnlinkFileReq { InodeId ownerId; InodeId fileId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + fileId + name + creationTime @@ -1801,7 +1801,7 @@ struct SoftUnlinkFileReq { std::ostream& operator<<(std::ostream& out, const SoftUnlinkFileReq& x); struct SoftUnlinkFileResp { - EggsTime deleteCreationTime; + TernTime deleteCreationTime; static constexpr uint16_t STATIC_SIZE = 8; // deleteCreationTime @@ -1871,7 +1871,7 @@ struct SameDirectoryRenameReq { InodeId targetId; InodeId dirId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; BincodeBytes newName; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + BincodeBytes::STATIC_SIZE; // targetId + dirId + oldName + oldCreationTime + newName @@ -1895,7 +1895,7 @@ struct SameDirectoryRenameReq { std::ostream& operator<<(std::ostream& out, const SameDirectoryRenameReq& x); struct SameDirectoryRenameResp { - EggsTime newCreationTime; + TernTime newCreationTime; static constexpr uint16_t STATIC_SIZE = 8; // newCreationTime @@ -2005,7 +2005,7 @@ struct FullReadDirReq { InodeId dirId; uint8_t flags; BincodeBytes startName; - EggsTime startTime; + TernTime startTime; uint16_t limit; uint16_t mtu; @@ -2103,7 +2103,7 @@ struct RemoveNonOwnedEdgeReq { InodeId dirId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // dirId + targetId + name + creationTime @@ -2145,7 
+2145,7 @@ struct SameShardHardFileUnlinkReq { InodeId ownerId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + targetId + name + creationTime @@ -2203,7 +2203,7 @@ struct StatTransientFileReq { std::ostream& operator<<(std::ostream& out, const StatTransientFileReq& x); struct StatTransientFileResp { - EggsTime mtime; + TernTime mtime; uint64_t size; BincodeBytes note; @@ -2895,7 +2895,7 @@ struct SameDirectoryRenameSnapshotReq { InodeId targetId; InodeId dirId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; BincodeBytes newName; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + BincodeBytes::STATIC_SIZE; // targetId + dirId + oldName + oldCreationTime + newName @@ -2919,7 +2919,7 @@ struct SameDirectoryRenameSnapshotReq { std::ostream& operator<<(std::ostream& out, const SameDirectoryRenameSnapshotReq& x); struct SameDirectoryRenameSnapshotResp { - EggsTime newCreationTime; + TernTime newCreationTime; static constexpr uint16_t STATIC_SIZE = 8; // newCreationTime @@ -3001,7 +3001,7 @@ struct CreateDirectoryInodeReq { std::ostream& operator<<(std::ostream& out, const CreateDirectoryInodeReq& x); struct CreateDirectoryInodeResp { - EggsTime mtime; + TernTime mtime; static constexpr uint16_t STATIC_SIZE = 8; // mtime @@ -3099,7 +3099,7 @@ struct CreateLockedCurrentEdgeReq { InodeId dirId; BincodeBytes name; InodeId targetId; - EggsTime oldCreationTime; + TernTime oldCreationTime; static constexpr uint16_t STATIC_SIZE = 8 + BincodeBytes::STATIC_SIZE + 8 + 8; // dirId + name + targetId + oldCreationTime @@ -3121,7 +3121,7 @@ struct CreateLockedCurrentEdgeReq { std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeReq& x); struct CreateLockedCurrentEdgeResp { - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8; // creationTime @@ -3142,7 +3142,7 @@ std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeResp& x struct LockCurrentEdgeReq { InodeId dirId; InodeId targetId; - EggsTime creationTime; + TernTime creationTime; BincodeBytes name; static constexpr uint16_t STATIC_SIZE = 8 + 8 + 8 + BincodeBytes::STATIC_SIZE; // dirId + targetId + creationTime + name @@ -3184,7 +3184,7 @@ std::ostream& operator<<(std::ostream& out, const LockCurrentEdgeResp& x); struct UnlockCurrentEdgeReq { InodeId dirId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; InodeId targetId; bool wasMoved; @@ -3229,7 +3229,7 @@ struct RemoveOwnedSnapshotFileEdgeReq { InodeId ownerId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + targetId + name + creationTime @@ -3328,7 +3328,7 @@ std::ostream& operator<<(std::ostream& out, const MakeDirectoryReq& x); struct MakeDirectoryResp { InodeId id; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8; // id + creationTime @@ -3351,7 +3351,7 @@ struct RenameFileReq { InodeId targetId; InodeId oldOwnerId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; InodeId newOwnerId; BincodeBytes newName; @@ -3377,7 +3377,7 @@ struct RenameFileReq { std::ostream& operator<<(std::ostream& out, const RenameFileReq& x); struct RenameFileResp { - EggsTime creationTime; + TernTime creationTime; static constexpr 
uint16_t STATIC_SIZE = 8; // creationTime @@ -3398,7 +3398,7 @@ std::ostream& operator<<(std::ostream& out, const RenameFileResp& x); struct SoftUnlinkDirectoryReq { InodeId ownerId; InodeId targetId; - EggsTime creationTime; + TernTime creationTime; BincodeBytes name; static constexpr uint16_t STATIC_SIZE = 8 + 8 + 8 + BincodeBytes::STATIC_SIZE; // ownerId + targetId + creationTime + name @@ -3441,7 +3441,7 @@ struct RenameDirectoryReq { InodeId targetId; InodeId oldOwnerId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; InodeId newOwnerId; BincodeBytes newName; @@ -3467,7 +3467,7 @@ struct RenameDirectoryReq { std::ostream& operator<<(std::ostream& out, const RenameDirectoryReq& x); struct RenameDirectoryResp { - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8; // creationTime @@ -3525,7 +3525,7 @@ struct CrossShardHardUnlinkFileReq { InodeId ownerId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + targetId + name + creationTime @@ -3654,7 +3654,7 @@ std::ostream& operator<<(std::ostream& out, const LocalCdcReq& x); struct LocalCdcResp { AddrsInfo addrs; - EggsTime lastSeen; + TernTime lastSeen; static constexpr uint16_t STATIC_SIZE = AddrsInfo::STATIC_SIZE + 8; // addrs + lastSeen @@ -3754,7 +3754,7 @@ struct ShuckleResp { std::ostream& operator<<(std::ostream& out, const ShuckleResp& x); struct LocalChangedBlockServicesReq { - EggsTime changedSince; + TernTime changedSince; static constexpr uint16_t STATIC_SIZE = 8; // changedSince @@ -3773,7 +3773,7 @@ struct LocalChangedBlockServicesReq { std::ostream& operator<<(std::ostream& out, const LocalChangedBlockServicesReq& x); struct LocalChangedBlockServicesResp { - EggsTime lastChange; + TernTime lastChange; BincodeList blockServices; static constexpr uint16_t STATIC_SIZE = 8 + BincodeList::STATIC_SIZE; // lastChange + blockServices @@ -4067,7 +4067,7 @@ std::ostream& operator<<(std::ostream& out, const RegisterBlockServicesResp& x); struct ChangedBlockServicesAtLocationReq { uint8_t locationId; - EggsTime changedSince; + TernTime changedSince; static constexpr uint16_t STATIC_SIZE = 1 + 8; // locationId + changedSince @@ -4087,7 +4087,7 @@ struct ChangedBlockServicesAtLocationReq { std::ostream& operator<<(std::ostream& out, const ChangedBlockServicesAtLocationReq& x); struct ChangedBlockServicesAtLocationResp { - EggsTime lastChange; + TernTime lastChange; BincodeList blockServices; static constexpr uint16_t STATIC_SIZE = 8 + BincodeList::STATIC_SIZE; // lastChange + blockServices @@ -4166,7 +4166,7 @@ std::ostream& operator<<(std::ostream& out, const CdcAtLocationReq& x); struct CdcAtLocationResp { AddrsInfo addrs; - EggsTime lastSeen; + TernTime lastSeen; static constexpr uint16_t STATIC_SIZE = AddrsInfo::STATIC_SIZE + 8; // addrs + lastSeen @@ -4943,7 +4943,7 @@ struct LogWriteReq { std::ostream& operator<<(std::ostream& out, const LogWriteReq& x); struct LogWriteResp { - EggsError result; + TernError result; static constexpr uint16_t STATIC_SIZE = 2; // result @@ -4983,7 +4983,7 @@ struct ReleaseReq { std::ostream& operator<<(std::ostream& out, const ReleaseReq& x); struct ReleaseResp { - EggsError result; + TernError result; static constexpr uint16_t STATIC_SIZE = 2; // result @@ -5021,7 +5021,7 @@ struct LogReadReq { std::ostream& operator<<(std::ostream& out, const LogReadReq& x); struct LogReadResp { - EggsError result; + 
TernError result; BincodeList value; static constexpr uint16_t STATIC_SIZE = 2 + BincodeList::STATIC_SIZE; // result + value @@ -5061,7 +5061,7 @@ struct NewLeaderReq { std::ostream& operator<<(std::ostream& out, const NewLeaderReq& x); struct NewLeaderResp { - EggsError result; + TernError result; LogIdx lastReleased; static constexpr uint16_t STATIC_SIZE = 2 + 8; // result + lastReleased @@ -5103,7 +5103,7 @@ struct NewLeaderConfirmReq { std::ostream& operator<<(std::ostream& out, const NewLeaderConfirmReq& x); struct NewLeaderConfirmResp { - EggsError result; + TernError result; static constexpr uint16_t STATIC_SIZE = 2; // result @@ -5143,7 +5143,7 @@ struct LogRecoveryReadReq { std::ostream& operator<<(std::ostream& out, const LogRecoveryReadReq& x); struct LogRecoveryReadResp { - EggsError result; + TernError result; BincodeList value; static constexpr uint16_t STATIC_SIZE = 2 + BincodeList::STATIC_SIZE; // result + value @@ -5187,7 +5187,7 @@ struct LogRecoveryWriteReq { std::ostream& operator<<(std::ostream& out, const LogRecoveryWriteReq& x); struct LogRecoveryWriteResp { - EggsError result; + TernError result; static constexpr uint16_t STATIC_SIZE = 2; // result @@ -5321,9 +5321,9 @@ std::ostream& operator<<(std::ostream& out, const ShardReqContainer& x); struct ShardRespContainer { private: - static constexpr std::array _staticSizes = {sizeof(EggsError), LookupResp::STATIC_SIZE, StatFileResp::STATIC_SIZE, StatDirectoryResp::STATIC_SIZE, ReadDirResp::STATIC_SIZE, ConstructFileResp::STATIC_SIZE, AddSpanInitiateResp::STATIC_SIZE, AddSpanCertifyResp::STATIC_SIZE, LinkFileResp::STATIC_SIZE, SoftUnlinkFileResp::STATIC_SIZE, LocalFileSpansResp::STATIC_SIZE, SameDirectoryRenameResp::STATIC_SIZE, AddInlineSpanResp::STATIC_SIZE, SetTimeResp::STATIC_SIZE, FullReadDirResp::STATIC_SIZE, MoveSpanResp::STATIC_SIZE, RemoveNonOwnedEdgeResp::STATIC_SIZE, SameShardHardFileUnlinkResp::STATIC_SIZE, StatTransientFileResp::STATIC_SIZE, ShardSnapshotResp::STATIC_SIZE, FileSpansResp::STATIC_SIZE, AddSpanLocationResp::STATIC_SIZE, ScrapTransientFileResp::STATIC_SIZE, SetDirectoryInfoResp::STATIC_SIZE, VisitDirectoriesResp::STATIC_SIZE, VisitFilesResp::STATIC_SIZE, VisitTransientFilesResp::STATIC_SIZE, RemoveSpanInitiateResp::STATIC_SIZE, RemoveSpanCertifyResp::STATIC_SIZE, SwapBlocksResp::STATIC_SIZE, BlockServiceFilesResp::STATIC_SIZE, RemoveInodeResp::STATIC_SIZE, AddSpanInitiateWithReferenceResp::STATIC_SIZE, RemoveZeroBlockServiceFilesResp::STATIC_SIZE, SwapSpansResp::STATIC_SIZE, SameDirectoryRenameSnapshotResp::STATIC_SIZE, AddSpanAtLocationInitiateResp::STATIC_SIZE, CreateDirectoryInodeResp::STATIC_SIZE, SetDirectoryOwnerResp::STATIC_SIZE, RemoveDirectoryOwnerResp::STATIC_SIZE, CreateLockedCurrentEdgeResp::STATIC_SIZE, LockCurrentEdgeResp::STATIC_SIZE, UnlockCurrentEdgeResp::STATIC_SIZE, RemoveOwnedSnapshotFileEdgeResp::STATIC_SIZE, MakeFileTransientResp::STATIC_SIZE}; + static constexpr std::array _staticSizes = {sizeof(TernError), LookupResp::STATIC_SIZE, StatFileResp::STATIC_SIZE, StatDirectoryResp::STATIC_SIZE, ReadDirResp::STATIC_SIZE, ConstructFileResp::STATIC_SIZE, AddSpanInitiateResp::STATIC_SIZE, AddSpanCertifyResp::STATIC_SIZE, LinkFileResp::STATIC_SIZE, SoftUnlinkFileResp::STATIC_SIZE, LocalFileSpansResp::STATIC_SIZE, SameDirectoryRenameResp::STATIC_SIZE, AddInlineSpanResp::STATIC_SIZE, SetTimeResp::STATIC_SIZE, FullReadDirResp::STATIC_SIZE, MoveSpanResp::STATIC_SIZE, RemoveNonOwnedEdgeResp::STATIC_SIZE, SameShardHardFileUnlinkResp::STATIC_SIZE, StatTransientFileResp::STATIC_SIZE, 
ShardSnapshotResp::STATIC_SIZE, FileSpansResp::STATIC_SIZE, AddSpanLocationResp::STATIC_SIZE, ScrapTransientFileResp::STATIC_SIZE, SetDirectoryInfoResp::STATIC_SIZE, VisitDirectoriesResp::STATIC_SIZE, VisitFilesResp::STATIC_SIZE, VisitTransientFilesResp::STATIC_SIZE, RemoveSpanInitiateResp::STATIC_SIZE, RemoveSpanCertifyResp::STATIC_SIZE, SwapBlocksResp::STATIC_SIZE, BlockServiceFilesResp::STATIC_SIZE, RemoveInodeResp::STATIC_SIZE, AddSpanInitiateWithReferenceResp::STATIC_SIZE, RemoveZeroBlockServiceFilesResp::STATIC_SIZE, SwapSpansResp::STATIC_SIZE, SameDirectoryRenameSnapshotResp::STATIC_SIZE, AddSpanAtLocationInitiateResp::STATIC_SIZE, CreateDirectoryInodeResp::STATIC_SIZE, SetDirectoryOwnerResp::STATIC_SIZE, RemoveDirectoryOwnerResp::STATIC_SIZE, CreateLockedCurrentEdgeResp::STATIC_SIZE, LockCurrentEdgeResp::STATIC_SIZE, UnlockCurrentEdgeResp::STATIC_SIZE, RemoveOwnedSnapshotFileEdgeResp::STATIC_SIZE, MakeFileTransientResp::STATIC_SIZE}; ShardMessageKind _kind = ShardMessageKind::EMPTY; - std::variant _data; + std::variant _data; public: ShardRespContainer(); ShardRespContainer(const ShardRespContainer& other); @@ -5333,8 +5333,8 @@ public: ShardMessageKind kind() const { return _kind; } - const EggsError& getError() const; - EggsError& setError(); + const TernError& getError() const; + TernError& setError(); const LookupResp& getLookup() const; LookupResp& setLookup(); const StatFileResp& getStatFile() const; @@ -5477,9 +5477,9 @@ std::ostream& operator<<(std::ostream& out, const CDCReqContainer& x); struct CDCRespContainer { private: - static constexpr std::array _staticSizes = {sizeof(EggsError), MakeDirectoryResp::STATIC_SIZE, RenameFileResp::STATIC_SIZE, SoftUnlinkDirectoryResp::STATIC_SIZE, RenameDirectoryResp::STATIC_SIZE, HardUnlinkDirectoryResp::STATIC_SIZE, CrossShardHardUnlinkFileResp::STATIC_SIZE, CdcSnapshotResp::STATIC_SIZE}; + static constexpr std::array _staticSizes = {sizeof(TernError), MakeDirectoryResp::STATIC_SIZE, RenameFileResp::STATIC_SIZE, SoftUnlinkDirectoryResp::STATIC_SIZE, RenameDirectoryResp::STATIC_SIZE, HardUnlinkDirectoryResp::STATIC_SIZE, CrossShardHardUnlinkFileResp::STATIC_SIZE, CdcSnapshotResp::STATIC_SIZE}; CDCMessageKind _kind = CDCMessageKind::EMPTY; - std::variant _data; + std::variant _data; public: CDCRespContainer(); CDCRespContainer(const CDCRespContainer& other); @@ -5489,8 +5489,8 @@ public: CDCMessageKind kind() const { return _kind; } - const EggsError& getError() const; - EggsError& setError(); + const TernError& getError() const; + TernError& setError(); const MakeDirectoryResp& getMakeDirectory() const; MakeDirectoryResp& setMakeDirectory(); const RenameFileResp& getRenameFile() const; @@ -5601,9 +5601,9 @@ std::ostream& operator<<(std::ostream& out, const ShuckleReqContainer& x); struct ShuckleRespContainer { private: - static constexpr std::array _staticSizes = {sizeof(EggsError), LocalShardsResp::STATIC_SIZE, LocalCdcResp::STATIC_SIZE, InfoResp::STATIC_SIZE, ShuckleResp::STATIC_SIZE, LocalChangedBlockServicesResp::STATIC_SIZE, CreateLocationResp::STATIC_SIZE, RenameLocationResp::STATIC_SIZE, LocationsResp::STATIC_SIZE, RegisterShardResp::STATIC_SIZE, RegisterCdcResp::STATIC_SIZE, SetBlockServiceFlagsResp::STATIC_SIZE, RegisterBlockServicesResp::STATIC_SIZE, ChangedBlockServicesAtLocationResp::STATIC_SIZE, ShardsAtLocationResp::STATIC_SIZE, CdcAtLocationResp::STATIC_SIZE, ShardBlockServicesDEPRECATEDResp::STATIC_SIZE, CdcReplicasDEPRECATEDResp::STATIC_SIZE, AllShardsResp::STATIC_SIZE, DecommissionBlockServiceResp::STATIC_SIZE, 
MoveShardLeaderResp::STATIC_SIZE, ClearShardInfoResp::STATIC_SIZE, ShardBlockServicesResp::STATIC_SIZE, AllCdcResp::STATIC_SIZE, EraseDecommissionedBlockResp::STATIC_SIZE, AllBlockServicesDeprecatedResp::STATIC_SIZE, MoveCdcLeaderResp::STATIC_SIZE, ClearCdcInfoResp::STATIC_SIZE, UpdateBlockServicePathResp::STATIC_SIZE}; + static constexpr std::array _staticSizes = {sizeof(TernError), LocalShardsResp::STATIC_SIZE, LocalCdcResp::STATIC_SIZE, InfoResp::STATIC_SIZE, ShuckleResp::STATIC_SIZE, LocalChangedBlockServicesResp::STATIC_SIZE, CreateLocationResp::STATIC_SIZE, RenameLocationResp::STATIC_SIZE, LocationsResp::STATIC_SIZE, RegisterShardResp::STATIC_SIZE, RegisterCdcResp::STATIC_SIZE, SetBlockServiceFlagsResp::STATIC_SIZE, RegisterBlockServicesResp::STATIC_SIZE, ChangedBlockServicesAtLocationResp::STATIC_SIZE, ShardsAtLocationResp::STATIC_SIZE, CdcAtLocationResp::STATIC_SIZE, ShardBlockServicesDEPRECATEDResp::STATIC_SIZE, CdcReplicasDEPRECATEDResp::STATIC_SIZE, AllShardsResp::STATIC_SIZE, DecommissionBlockServiceResp::STATIC_SIZE, MoveShardLeaderResp::STATIC_SIZE, ClearShardInfoResp::STATIC_SIZE, ShardBlockServicesResp::STATIC_SIZE, AllCdcResp::STATIC_SIZE, EraseDecommissionedBlockResp::STATIC_SIZE, AllBlockServicesDeprecatedResp::STATIC_SIZE, MoveCdcLeaderResp::STATIC_SIZE, ClearCdcInfoResp::STATIC_SIZE, UpdateBlockServicePathResp::STATIC_SIZE}; ShuckleMessageKind _kind = ShuckleMessageKind::EMPTY; - std::variant _data; + std::variant _data; public: ShuckleRespContainer(); ShuckleRespContainer(const ShuckleRespContainer& other); @@ -5613,8 +5613,8 @@ public: ShuckleMessageKind kind() const { return _kind; } - const EggsError& getError() const; - EggsError& setError(); + const TernError& getError() const; + TernError& setError(); const LocalShardsResp& getLocalShards() const; LocalShardsResp& setLocalShards(); const LocalCdcResp& getLocalCdc() const; @@ -5725,9 +5725,9 @@ std::ostream& operator<<(std::ostream& out, const LogReqContainer& x); struct LogRespContainer { private: - static constexpr std::array _staticSizes = {sizeof(EggsError), LogWriteResp::STATIC_SIZE, ReleaseResp::STATIC_SIZE, LogReadResp::STATIC_SIZE, NewLeaderResp::STATIC_SIZE, NewLeaderConfirmResp::STATIC_SIZE, LogRecoveryReadResp::STATIC_SIZE, LogRecoveryWriteResp::STATIC_SIZE}; + static constexpr std::array _staticSizes = {sizeof(TernError), LogWriteResp::STATIC_SIZE, ReleaseResp::STATIC_SIZE, LogReadResp::STATIC_SIZE, NewLeaderResp::STATIC_SIZE, NewLeaderConfirmResp::STATIC_SIZE, LogRecoveryReadResp::STATIC_SIZE, LogRecoveryWriteResp::STATIC_SIZE}; LogMessageKind _kind = LogMessageKind::EMPTY; - std::variant _data; + std::variant _data; public: LogRespContainer(); LogRespContainer(const LogRespContainer& other); @@ -5737,8 +5737,8 @@ public: LogMessageKind kind() const { return _kind; } - const EggsError& getError() const; - EggsError& setError(); + const TernError& getError() const; + TernError& setError(); const LogWriteResp& getLogWrite() const; LogWriteResp& setLogWrite(); const ReleaseResp& getRelease() const; @@ -5804,7 +5804,7 @@ std::ostream& operator<<(std::ostream& out, ShardLogEntryKind err); struct ConstructFileEntry { uint8_t type; - EggsTime deadlineTime; + TernTime deadlineTime; BincodeBytes note; static constexpr uint16_t STATIC_SIZE = 1 + 8 + BincodeBytes::STATIC_SIZE; // type + deadlineTime + note @@ -5852,7 +5852,7 @@ struct SameDirectoryRenameEntry { InodeId dirId; InodeId targetId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; BincodeBytes newName; static constexpr 
uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + BincodeBytes::STATIC_SIZE; // dirId + targetId + oldName + oldCreationTime + newName @@ -5879,7 +5879,7 @@ struct SoftUnlinkFileEntry { InodeId ownerId; InodeId fileId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + fileId + name + creationTime @@ -5927,7 +5927,7 @@ struct CreateLockedCurrentEdgeEntry { InodeId dirId; BincodeBytes name; InodeId targetId; - EggsTime oldCreationTime; + TernTime oldCreationTime; static constexpr uint16_t STATIC_SIZE = 8 + BincodeBytes::STATIC_SIZE + 8 + 8; // dirId + name + targetId + oldCreationTime @@ -5951,7 +5951,7 @@ std::ostream& operator<<(std::ostream& out, const CreateLockedCurrentEdgeEntry& struct UnlockCurrentEdgeEntry { InodeId dirId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; InodeId targetId; bool wasMoved; @@ -5978,7 +5978,7 @@ std::ostream& operator<<(std::ostream& out, const UnlockCurrentEdgeEntry& x); struct LockCurrentEdgeEntry { InodeId dirId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; InodeId targetId; static constexpr uint16_t STATIC_SIZE = 8 + BincodeBytes::STATIC_SIZE + 8 + 8; // dirId + name + creationTime + targetId @@ -6086,7 +6086,7 @@ struct RemoveNonOwnedEdgeEntry { InodeId dirId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // dirId + targetId + name + creationTime @@ -6109,7 +6109,7 @@ std::ostream& operator<<(std::ostream& out, const RemoveNonOwnedEdgeEntry& x); struct ScrapTransientFileEntry { InodeId id; - EggsTime deadlineTime; + TernTime deadlineTime; static constexpr uint16_t STATIC_SIZE = 8 + 8; // id + deadlineTime @@ -6286,7 +6286,7 @@ struct RemoveOwnedSnapshotFileEdgeEntry { InodeId ownerId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; + TernTime creationTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8; // ownerId + targetId + name + creationTime @@ -6444,7 +6444,7 @@ struct SameDirectoryRenameSnapshotEntry { InodeId dirId; InodeId targetId; BincodeBytes oldName; - EggsTime oldCreationTime; + TernTime oldCreationTime; BincodeBytes newName; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + BincodeBytes::STATIC_SIZE; // dirId + targetId + oldName + oldCreationTime + newName @@ -6539,8 +6539,8 @@ struct SameShardHardFileUnlinkEntry { InodeId ownerId; InodeId targetId; BincodeBytes name; - EggsTime creationTime; - EggsTime deadlineTime; + TernTime creationTime; + TernTime deadlineTime; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE + 8 + 8; // ownerId + targetId + name + creationTime + deadlineTime @@ -6564,7 +6564,7 @@ std::ostream& operator<<(std::ostream& out, const SameShardHardFileUnlinkEntry& struct MakeFileTransientEntry { InodeId id; - EggsTime deadlineTime; + TernTime deadlineTime; BincodeBytes note; static constexpr uint16_t STATIC_SIZE = 8 + 8 + BincodeBytes::STATIC_SIZE; // id + deadlineTime + note diff --git a/cpp/core/PeriodicLoop.hpp b/cpp/core/PeriodicLoop.hpp index 0c4c45a1..f14fa8cf 100644 --- a/cpp/core/PeriodicLoop.hpp +++ b/cpp/core/PeriodicLoop.hpp @@ -28,7 +28,7 @@ public: PeriodicLoop(Logger& logger, std::shared_ptr& xmon, const std::string& name, const PeriodicLoopConfig& config) : Loop(logger, xmon, name), _config(config), - _rand(eggsNow().ns), 
+ _rand(ternNow().ns), _lastSucceded(false) {} @@ -37,7 +37,7 @@ public: // We sleep first to immediately introduce a jitter. virtual void step() override { - auto t = eggsNow(); + auto t = ternNow(); Duration pause; if (_lastSucceded) { pause = _config.successInterval + Duration((double)_config.successInterval.ns * (_config.successIntervalJitter * wyhash64_double(&_rand))); diff --git a/cpp/core/SharedRocksDB.cpp b/cpp/core/SharedRocksDB.cpp index 5d89ca3a..656c959f 100644 --- a/cpp/core/SharedRocksDB.cpp +++ b/cpp/core/SharedRocksDB.cpp @@ -196,35 +196,35 @@ void SharedRocksDB::dumpRocksDBStatistics() { namespace fs = std::filesystem; -EggsError SharedRocksDB::snapshot(const std::string& path) { +TernError SharedRocksDB::snapshot(const std::string& path) { std::shared_lock _(_stateMutex); ALWAYS_ASSERT(_db.get() != nullptr); LOG_INFO(_env, "Creating snapshot in %s", path); std::error_code ec; if (fs::is_directory(path, ec)) { LOG_INFO(_env, "Snapshot exists in %s", path); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } if (fs::exists(path, ec)) { LOG_ERROR(_env, "Provided path exists and is not an existing snapshot %s", path); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } std::string tmpPath; { fs::path p{path}; if (!p.has_parent_path()) { LOG_ERROR(_env, "Path %s does not have parent", path); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } p = p.parent_path(); if (!fs::is_directory(p, ec)) { LOG_ERROR(_env, "Parent path of %s is not a directory", path); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } - p /= "tmp-snapshot-" + std::to_string(eggsNow().ns); + p /= "tmp-snapshot-" + std::to_string(ternNow().ns); if (fs::exists(p, ec)) { LOG_ERROR(_env, "Tmp path exists %s", p.generic_string()); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } tmpPath = p.generic_string(); } @@ -233,7 +233,7 @@ EggsError SharedRocksDB::snapshot(const std::string& path) { auto status = rocksdb::Checkpoint::Create(_db.get(), (rocksdb::Checkpoint**)(&checkpoint)); if (!status.ok()) { LOG_ERROR(_env, "Failed creating checkpint (%s)", status.ToString()); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } status = checkpoint->CreateCheckpoint(tmpPath); @@ -241,14 +241,14 @@ EggsError SharedRocksDB::snapshot(const std::string& path) { LOG_ERROR(_env, "Failed storing checkpint (%s)", status.ToString()); // try to cleanup tmpPath fs::remove_all(tmpPath, ec); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } fs::rename(tmpPath, path, ec); if (ec) { LOG_ERROR(_env, "Failed moving temp dir to requested path error (%s)", ec.message()); // try to cleanup tmpPath fs::remove_all(tmpPath, ec); - return EggsError::CANNOT_CREATE_DB_SNAPSHOT; + return TernError::CANNOT_CREATE_DB_SNAPSHOT; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } diff --git a/cpp/core/SharedRocksDB.hpp b/cpp/core/SharedRocksDB.hpp index 72e6d25e..82d67960 100644 --- a/cpp/core/SharedRocksDB.hpp +++ b/cpp/core/SharedRocksDB.hpp @@ -35,7 +35,7 @@ public: void rocksDBMetrics(std::unordered_map& stats); void dumpRocksDBStatistics(); - EggsError snapshot(const std::string& path); + TernError snapshot(const std::string& path); private: Env _env; diff --git a/cpp/core/Shuckle.cpp b/cpp/core/Shuckle.cpp index 8e966bbe..1b30f02b 100644 --- 
a/cpp/core/Shuckle.cpp +++ b/cpp/core/Shuckle.cpp @@ -321,7 +321,7 @@ std::pair fetchCDCReplicas( } if (respContainer.getCdcReplicasDEPRECATED().replicas.els.size() != replicas.size()) { - throw EGGS_EXCEPTION("expecting %s replicas, got %s", replicas.size(), respContainer.getCdcReplicasDEPRECATED().replicas.els.size()); + throw TERN_EXCEPTION("expecting %s replicas, got %s", replicas.size(), respContainer.getCdcReplicasDEPRECATED().replicas.els.size()); } for (int i = 0; i < replicas.size(); i++) { replicas[i] = respContainer.getCdcReplicasDEPRECATED().replicas.els[i]; @@ -349,7 +349,7 @@ std::pair fetchLocalShards(const std::string& host, uint16_t p if (err) { return {err, errStr}; } } if (respContainer.getLocalShards().shards.els.size() != shards.size()) { - throw EGGS_EXCEPTION("expecting %s shards, got %s", shards.size(), respContainer.getLocalShards().shards.els.size()); + throw TERN_EXCEPTION("expecting %s shards, got %s", shards.size(), respContainer.getLocalShards().shards.els.size()); } for (int i = 0; i < shards.size(); i++) { shards[i] = respContainer.getLocalShards().shards.els[i]; diff --git a/cpp/core/Shuckle.hpp b/cpp/core/Shuckle.hpp index 5e3258f3..1229a98f 100644 --- a/cpp/core/Shuckle.hpp +++ b/cpp/core/Shuckle.hpp @@ -65,5 +65,4 @@ std::pair fetchLocalShards( std::array& shards ); -const std::string defaultShuckleAddress = "REDACTED"; bool parseShuckleAddress(const std::string& fullShuckleAddress, std::string& shuckleHost, uint16_t& shucklePort); diff --git a/cpp/core/Time.cpp b/cpp/core/Time.cpp index 2f94fd46..d4463071 100644 --- a/cpp/core/Time.cpp +++ b/cpp/core/Time.cpp @@ -58,17 +58,17 @@ static void checkClockRes() { throw SYSCALL_EXCEPTION("clock_getres"); } if (ts.tv_sec != 0 || ts.tv_nsec != 1) { - throw EGGS_EXCEPTION("expected nanosecond precisions, got %s,%s", ts.tv_sec, ts.tv_nsec); + throw TERN_EXCEPTION("expected nanosecond precisions, got %s,%s", ts.tv_sec, ts.tv_nsec); } } -static std::atomic _currentTimeInTest = EggsTime(0); +static std::atomic _currentTimeInTest = TernTime(0); -void _setCurrentTime(EggsTime time) { +void _setCurrentTime(TernTime time) { _currentTimeInTest.store(time, std::memory_order_relaxed); } -EggsTime eggsNow() { +TernTime ternNow() { auto timeInTest = _currentTimeInTest.load(std::memory_order_relaxed); if (unlikely( timeInTest != 0)) { return timeInTest; @@ -79,12 +79,12 @@ EggsTime eggsNow() { throw SYSCALL_EXCEPTION("clock_gettime"); } - return EggsTime(((uint64_t)now.tv_nsec + ((uint64_t)now.tv_sec * 1'000'000'000ull))); + return TernTime(((uint64_t)now.tv_nsec + ((uint64_t)now.tv_sec * 1'000'000'000ull))); } -std::ostream& operator<<(std::ostream& out, EggsTime eggst) { - time_t secs = eggst.ns / 1'000'000'000ull; - uint64_t nsecs = eggst.ns % 1'000'000'000ull; +std::ostream& operator<<(std::ostream& out, TernTime ternt) { + time_t secs = ternt.ns / 1'000'000'000ull; + uint64_t nsecs = ternt.ns % 1'000'000'000ull; struct tm tm; if (gmtime_r(&secs, &tm) == nullptr) { throw SYSCALL_EXCEPTION("gmtime_r"); diff --git a/cpp/core/Time.hpp b/cpp/core/Time.hpp index abbcdc91..73d902e1 100644 --- a/cpp/core/Time.hpp +++ b/cpp/core/Time.hpp @@ -67,29 +67,29 @@ constexpr Duration operator "" _hours(unsigned long long t) { return Duration(t* std::ostream& operator<<(std::ostream& out, Duration d); -struct EggsTime { +struct TernTime { uint64_t ns; - EggsTime(): ns(0) {} - EggsTime(uint64_t ns_): ns(ns_) {} + TernTime(): ns(0) {} + TernTime(uint64_t ns_): ns(ns_) {} - bool operator==(EggsTime rhs) const { + bool operator==(TernTime 
rhs) const { return ns == rhs.ns; } - bool operator>(EggsTime rhs) const { + bool operator>(TernTime rhs) const { return ns > rhs.ns; } - bool operator>=(EggsTime rhs) const { + bool operator>=(TernTime rhs) const { return ns >= rhs.ns; } - bool operator<=(EggsTime rhs) const { + bool operator<=(TernTime rhs) const { return ns <= rhs.ns; } - bool operator<(EggsTime rhs) const { + bool operator<(TernTime rhs) const { return ns < rhs.ns; } @@ -101,18 +101,18 @@ struct EggsTime { ns = buf.unpackScalar(); } - EggsTime operator+(Duration d) const { - return EggsTime(ns + d.ns); + TernTime operator+(Duration d) const { + return TernTime(ns + d.ns); } - EggsTime operator-(Duration d) const { + TernTime operator-(Duration d) const { if (unlikely(d.ns > ns)) { return 0; } - return EggsTime(ns - d.ns); + return TernTime(ns - d.ns); } - Duration operator-(EggsTime t) const { + Duration operator-(TernTime t) const { if (unlikely(t.ns > ns)) { return 0; } @@ -124,14 +124,14 @@ struct EggsTime { #ifdef __clang__ __attribute__((no_sanitize("integer"))) #endif - Duration operator-(EggsTime d) { + Duration operator-(TernTime d) { return Duration(ns - d.ns); } }; -std::ostream& operator<<(std::ostream& out, EggsTime t); +std::ostream& operator<<(std::ostream& out, TernTime t); // DO NOT USE UNLESS TESTING TIME SENSITIVE BEHAVIOR -void _setCurrentTime(EggsTime time); +void _setCurrentTime(TernTime time); -EggsTime eggsNow(); +TernTime ternNow(); diff --git a/cpp/core/Timings.cpp b/cpp/core/Timings.cpp index 6888ed73..e0cf6467 100644 --- a/cpp/core/Timings.cpp +++ b/cpp/core/Timings.cpp @@ -5,14 +5,14 @@ Timings::Timings(Duration firstUpperBound, double growth, int bins) : _invLogGrowth(1.0/log(growth)), _firstUpperBound(firstUpperBound.ns), _growthDivUpperBound(growth / (double)firstUpperBound.ns), - _startedAt(eggsNow()), + _startedAt(ternNow()), _bins(bins) { if (firstUpperBound < 1) { - throw EGGS_EXCEPTION("non-positive first upper bound %s", firstUpperBound); + throw TERN_EXCEPTION("non-positive first upper bound %s", firstUpperBound); } if (growth <= 1) { - throw EGGS_EXCEPTION("growth %s <= 1.0", growth); + throw TERN_EXCEPTION("growth %s <= 1.0", growth); } for (auto& bin: _bins) { bin.store(0); @@ -20,7 +20,7 @@ Timings::Timings(Duration firstUpperBound, double growth, int bins) : } void Timings::reset() { - _startedAt = eggsNow(); + _startedAt = ternNow(); for (auto& bin : _bins) { bin.store(0); } diff --git a/cpp/core/Timings.hpp b/cpp/core/Timings.hpp index 9bc7048d..bc17d27c 100644 --- a/cpp/core/Timings.hpp +++ b/cpp/core/Timings.hpp @@ -17,7 +17,7 @@ struct Timings { double _growthDivUpperBound; // actual data - EggsTime _startedAt; + TernTime _startedAt; std::vector> _bins; public: Timings(Duration firstUpperBound, double growth, int bins); diff --git a/cpp/core/UDPSocketPair.hpp b/cpp/core/UDPSocketPair.hpp index 5df1a09f..f4389aa7 100644 --- a/cpp/core/UDPSocketPair.hpp +++ b/cpp/core/UDPSocketPair.hpp @@ -164,7 +164,7 @@ public: template void prepareOutgoingMessage(Env& env, const AddrsInfo& srcAddr, const AddrsInfo& dstAddr, Fill f) { - auto now = eggsNow(); // randomly pick one of the dest addrs and one of our sockets + auto now = ternNow(); // randomly pick one of the dest addrs and one of our sockets uint8_t srcSockIdx = now.ns & (srcAddr[1].port != 0); uint8_t dstSockIdx = (now.ns>>1) & (dstAddr[1].port != 0); prepareOutgoingMessage(env, srcAddr, srcSockIdx, dstAddr[dstSockIdx], f); diff --git a/cpp/core/Xmon.cpp b/cpp/core/Xmon.cpp index 087eb1c4..92d12049 100644 --- 
a/cpp/core/Xmon.cpp +++ b/cpp/core/Xmon.cpp @@ -27,7 +27,7 @@ static const char* appTypeString(XmonAppType appType) { case XmonAppType::CRITICAL: return "restech_eggsfs.critical"; default: - throw EGGS_EXCEPTION("Bad xmon app type %s", (int)appType); + throw TERN_EXCEPTION("Bad xmon app type %s", (int)appType); } } @@ -60,7 +60,7 @@ Xmon::Xmon( _xmonPort(5004) { if (_appInstance.empty()) { - throw EGGS_EXCEPTION("empty app name"); + throw TERN_EXCEPTION("empty app name"); } { char buf[HOST_NAME_MAX]; @@ -100,7 +100,7 @@ Xmon::Xmon( // arm initial timer { - auto now = eggsNow(); + auto now = ternNow(); _timerExpiresAt = std::numeric_limits::max(); _ensureTimer(now, now); } @@ -117,7 +117,7 @@ Xmon::~Xmon() { } } -void Xmon::_ensureTimer(EggsTime now, EggsTime t) { +void Xmon::_ensureTimer(TernTime now, TernTime t) { if (_timerExpiresAt <= t) { return; } Duration d = std::max(1, t - now); LOG_DEBUG(_env, "arming timer in %s", d); @@ -268,9 +268,9 @@ void XmonBuf::readIn(int fd, size_t sz, std::string& errString) { constexpr int MAX_BINNABLE_ALERTS = 20; -EggsTime Xmon::_stepNextWakeup() { +TernTime Xmon::_stepNextWakeup() { std::string errString; - EggsTime nextWakeup = std::numeric_limits::max(); + TernTime nextWakeup = std::numeric_limits::max(); #define CHECK_ERR_STRING(__what) \ if (errString.size()) { \ @@ -302,7 +302,7 @@ EggsTime Xmon::_stepNextWakeup() { LOG_INFO(_env, "sent logon to xmon, appType=%s appInstance=%s", appTypeString(_parent), _appInstance); _gotHeartbeatAt = 0; - nextWakeup = std::min(nextWakeup, eggsNow() + HEARTBEAT_INTERVAL*2); + nextWakeup = std::min(nextWakeup, ternNow() + HEARTBEAT_INTERVAL*2); } if (poll(_fds, NUM_FDS, -1) < 0) { @@ -310,7 +310,7 @@ EggsTime Xmon::_stepNextWakeup() { throw SYSCALL_EXCEPTION("poll"); } - auto now = eggsNow(); + auto now = ternNow(); if (_fds[SOCK_FD].revents & (POLLIN|POLLHUP|POLLERR)) { LOG_DEBUG(_env, "got event in sock fd"); @@ -373,7 +373,7 @@ EggsTime Xmon::_stepNextWakeup() { _binnableAlerts.erase(alertId); break; } default: - throw EGGS_EXCEPTION("unknown message type %s", msgType); + throw TERN_EXCEPTION("unknown message type %s", msgType); } } @@ -441,7 +441,7 @@ EggsTime Xmon::_stepNextWakeup() { if (req.quietPeriod > 0) { ALWAYS_ASSERT(!req.binnable, "got alert with quietPeriod=%s, but it is binnable", req.quietPeriod); LOG_INFO(_env, "got non-binnable alertId=%s message=%s quietPeriod=%s, will wait", req.alertId, req.message, req.quietPeriod); - EggsTime quietUntil = now + req.quietPeriod; + TernTime quietUntil = now + req.quietPeriod; nextWakeup = std::min(nextWakeup, quietUntil); _quietAlerts[req.alertId] = QuietAlert{ .quietUntil = quietUntil, @@ -490,7 +490,7 @@ EggsTime Xmon::_stepNextWakeup() { } LOG_INFO(_env, "clearing alert, aid=%s", req.alertId); } else { - throw EGGS_EXCEPTION("bad req type %s", (int)req.msgType); + throw TERN_EXCEPTION("bad req type %s", (int)req.msgType); } _packRequest(_buf, req); write_request: @@ -509,6 +509,6 @@ EggsTime Xmon::_stepNextWakeup() { } void Xmon::step() { - EggsTime nextTimer = _stepNextWakeup(); - _ensureTimer(eggsNow(), nextTimer); + TernTime nextTimer = _stepNextWakeup(); + _ensureTimer(ternNow(), nextTimer); } diff --git a/cpp/core/Xmon.hpp b/cpp/core/Xmon.hpp index e6a3e00e..c9b2e736 100644 --- a/cpp/core/Xmon.hpp +++ b/cpp/core/Xmon.hpp @@ -120,24 +120,24 @@ private: std::unordered_set _binnableAlerts; // quiet alerts we're waiting to send out struct QuietAlert { - EggsTime quietUntil; + TernTime quietUntil; XmonAppType appType; std::string message; }; 
std::unordered_map _quietAlerts; // last heartbeat from xmon - EggsTime _gotHeartbeatAt; + TernTime _gotHeartbeatAt; // what the timer fd expiration is currently set to - EggsTime _timerExpiresAt; + TernTime _timerExpiresAt; XmonBuf _buf; void _packLogon(XmonBuf& buf); void _packUpdate(XmonBuf& buf); void _packRequest(XmonBuf& buf, const XmonRequest& req); - void _ensureTimer(EggsTime now, EggsTime t); + void _ensureTimer(TernTime now, TernTime t); - EggsTime _stepNextWakeup(); + TernTime _stepNextWakeup(); public: Xmon( Logger& logger, diff --git a/cpp/core/strerror.cpp b/cpp/core/strerror.cpp index c049b54c..1bbea812 100644 --- a/cpp/core/strerror.cpp +++ b/cpp/core/strerror.cpp @@ -10,7 +10,7 @@ thread_local static char strerror_buf[128]; // Testing for _GNU_SOURCE does not work, because the alpine build // has that set, too. -#ifdef EGGS_ALPINE +#ifdef TERN_ALPINE const char* safe_strerror(int errnum) { int res = strerror_r(errnum, strerror_buf, sizeof(strerror_buf)); if (res > 0) { diff --git a/cpp/crc32c/CMakeLists.txt b/cpp/crc32c/CMakeLists.txt index 4c592ed8..0c3f3483 100644 --- a/cpp/crc32c/CMakeLists.txt +++ b/cpp/crc32c/CMakeLists.txt @@ -4,4 +4,4 @@ add_executable(crc32c-tables tables.cpp iscsi.h) add_executable(crc32c-tests tests.cpp) target_link_libraries(crc32c-tests PRIVATE crc32c) -target_include_directories(crc32c-tests PRIVATE ${eggsfs_SOURCE_DIR}/wyhash) \ No newline at end of file +target_include_directories(crc32c-tests PRIVATE ${ternfs_SOURCE_DIR}/wyhash) \ No newline at end of file diff --git a/cpp/crc32c/crc32c.h b/cpp/crc32c/crc32c.h index a31b85fd..4335d130 100644 --- a/cpp/crc32c/crc32c.h +++ b/cpp/crc32c/crc32c.h @@ -5,8 +5,8 @@ // we just invert the crc at the beginning and at the end. // // See . -#ifndef EGGS_CRC32C -#define EGGS_CRC32C +#ifndef TERN_CRC32C +#define TERN_CRC32C #include #include diff --git a/cpp/dbtools/CMakeLists.txt b/cpp/dbtools/CMakeLists.txt index 45b4dc77..fc909632 100644 --- a/cpp/dbtools/CMakeLists.txt +++ b/cpp/dbtools/CMakeLists.txt @@ -1,7 +1,7 @@ -include_directories(${eggsfs_SOURCE_DIR}/core ${eggsfs_SOURCE_DIR}/shard ${eggsfs_SOURCE_DIR}/cdc) +include_directories(${ternfs_SOURCE_DIR}/core ${ternfs_SOURCE_DIR}/shard ${ternfs_SOURCE_DIR}/cdc) add_library(sharddbtools ShardDBTools.hpp ShardDBTools.cpp LogsDBTools.hpp LogsDBTools.cpp CDCDBTools.hpp CDCDBTools.cpp) target_link_libraries(sharddbtools PRIVATE core shard cdc) -add_executable(eggsdbtools eggsdbtools.cpp) -target_link_libraries(eggsdbtools PRIVATE core shard sharddbtools ${EGGSFS_JEMALLOC_LIBS}) +add_executable(terndbtools terndbtools.cpp) +target_link_libraries(terndbtools PRIVATE core shard sharddbtools ${TERNFS_JEMALLOC_LIBS}) diff --git a/cpp/dbtools/ShardDBTools.cpp b/cpp/dbtools/ShardDBTools.cpp index 276a7a5e..a32f57f4 100644 --- a/cpp/dbtools/ShardDBTools.cpp +++ b/cpp/dbtools/ShardDBTools.cpp @@ -229,7 +229,7 @@ void ShardDBTools::fsck(const std::string& dbPath) { auto dummyCurrentDirectory = InodeId::FromU64Unchecked(1ull << 63); auto thisDir = dummyCurrentDirectory; bool thisDirHasCurrentEdges = false; - EggsTime thisDirMaxTime = 0; + TernTime thisDirMaxTime = 0; std::string thisDirMaxTimeEdge; // the last edge for a given name, in a given directory std::unordered_map, StaticValue>> thisDirLastEdges; @@ -276,7 +276,7 @@ void ShardDBTools::fsck(const std::string& dbPath) { ERROR("Edge %s has mismatch between name and nameHash", edgeK()); } InodeId ownedTargetId = NULL_INODE_ID; - EggsTime creationTime; + TernTime creationTime; std::optional> currentEdge; 
std::optional> snapshotEdge; if (edgeK().current()) { @@ -329,7 +329,7 @@ void ShardDBTools::fsck(const std::string& dbPath) { { auto prevEdge = thisDirLastEdges.find(name); if (prevEdge != thisDirLastEdges.end()) { - EggsTime prevCreationTime = prevEdge->second.first().creationTime(); + TernTime prevCreationTime = prevEdge->second.first().creationTime(); // The edge must be newer than every non-current edge before it, with the exception of deletion edges // (when we override the deletion edge) if ( @@ -504,8 +504,8 @@ struct SizePerStorageClass { }; struct FileInfo { - EggsTime mTime; - EggsTime aTime; + TernTime mTime; + TernTime aTime; SizePerStorageClass size; uint64_t size_weight; }; @@ -604,7 +604,7 @@ void ShardDBTools::sampleFiles(const std::string& dbPath, const std::string& out auto dummyCurrentDirectory = InodeId::FromU64Unchecked(1ull << 63); auto thisDir = dummyCurrentDirectory; bool thisDirHasCurrentEdges = false; - EggsTime thisDirMaxTime = 0; + TernTime thisDirMaxTime = 0; std::string thisDirMaxTimeEdge; std::unique_ptr it(db->NewIterator(options, edgesCf)); for (it->SeekToFirst(); it->Valid(); it->Next()) @@ -615,8 +615,8 @@ void ShardDBTools::sampleFiles(const std::string& dbPath, const std::string& out std::optional> currentEdge; std::optional> snapshotEdge; bool current = false; - EggsTime creationTime = 0; - EggsTime deletionTime = 0; + TernTime creationTime = 0; + TernTime deletionTime = 0; if (edgeK().current()) { currentEdge = ExternalValue::FromSlice(it->value()); ownedTargetId = (*currentEdge)().targetId(); diff --git a/cpp/dbtools/eggsdbtools.cpp b/cpp/dbtools/terndbtools.cpp similarity index 100% rename from cpp/dbtools/eggsdbtools.cpp rename to cpp/dbtools/terndbtools.cpp diff --git a/cpp/ktools/CMakeLists.txt b/cpp/ktools/CMakeLists.txt index b47e3152..083eb27b 100644 --- a/cpp/ktools/CMakeLists.txt +++ b/cpp/ktools/CMakeLists.txt @@ -1 +1 @@ -add_executable(eggsktools eggsktools.cpp) \ No newline at end of file +add_executable(ternktools ternktools.cpp) \ No newline at end of file diff --git a/cpp/ktools/eggsktools.cpp b/cpp/ktools/ternktools.cpp similarity index 100% rename from cpp/ktools/eggsktools.cpp rename to cpp/ktools/ternktools.cpp diff --git a/cpp/rs/CMakeLists.txt b/cpp/rs/CMakeLists.txt index d155de48..475be6b1 100644 --- a/cpp/rs/CMakeLists.txt +++ b/cpp/rs/CMakeLists.txt @@ -2,8 +2,8 @@ add_library(rs rs.h rs.cpp gf_tables.c) add_executable(rs-tests tests.cpp) target_link_libraries(rs-tests PRIVATE rs) -target_include_directories(rs-tests PRIVATE ${eggsfs_SOURCE_DIR}/wyhash) +target_include_directories(rs-tests PRIVATE ${ternfs_SOURCE_DIR}/wyhash) add_executable(rs-bench bench.cpp) target_link_libraries(rs-bench PRIVATE rs) -target_include_directories(rs-bench PRIVATE ${eggsfs_SOURCE_DIR}/wyhash) \ No newline at end of file +target_include_directories(rs-bench PRIVATE ${ternfs_SOURCE_DIR}/wyhash) \ No newline at end of file diff --git a/cpp/rs/rs.h b/cpp/rs/rs.h index b439a579..27cfda3d 100644 --- a/cpp/rs/rs.h +++ b/cpp/rs/rs.h @@ -16,11 +16,11 @@ // of Blocks. // // We also store the number of parity/data blocks in the two nibbles -// of a uint8_t, but this is a fairly irrelevant quirk of EggsFS, +// of a uint8_t, but this is a fairly irrelevant quirk of TernFS, // although it does nicely enforce that we do not go beyond what's // resonable for data storage purposes (rather than for error correction). 
-#ifndef EGGS_RS -#define EGGS_RS +#ifndef TERN_RS +#define TERN_RS #include #include diff --git a/cpp/shard/BlockServicesCacheDB.cpp b/cpp/shard/BlockServicesCacheDB.cpp index b2074a6b..d80dd81c 100644 --- a/cpp/shard/BlockServicesCacheDB.cpp +++ b/cpp/shard/BlockServicesCacheDB.cpp @@ -142,7 +142,7 @@ struct BlockServiceBody { switch (version()) { case 0: return V0_OFFSET; case 1: return V1_OFFSET; - default: throw EGGS_EXCEPTION("bad version %s", version()); + default: throw TERN_EXCEPTION("bad version %s", version()); } } diff --git a/cpp/shard/CMakeLists.txt b/cpp/shard/CMakeLists.txt index 598626a3..4d329eea 100644 --- a/cpp/shard/CMakeLists.txt +++ b/cpp/shard/CMakeLists.txt @@ -1,7 +1,7 @@ -include_directories(${eggsfs_SOURCE_DIR}/core ${eggsfs_SOURCE_DIR}/crc32c ${eggsfs_SOURCE_DIR}/wyhash) +include_directories(${ternfs_SOURCE_DIR}/core ${ternfs_SOURCE_DIR}/crc32c ${ternfs_SOURCE_DIR}/wyhash) add_library(shard Shard.cpp Shard.hpp ShardDB.cpp ShardDB.hpp ShardDBData.cpp ShardDBData.hpp BlockServicesCacheDB.hpp BlockServicesCacheDB.cpp) target_link_libraries(shard PRIVATE core) -add_executable(eggsshard eggsshard.cpp) -target_link_libraries(eggsshard PRIVATE core shard crc32c ${EGGSFS_JEMALLOC_LIBS}) +add_executable(ternshard ternshard.cpp) +target_link_libraries(ternshard PRIVATE core shard crc32c ${TERNFS_JEMALLOC_LIBS}) diff --git a/cpp/shard/Shard.cpp b/cpp/shard/Shard.cpp index c41d9092..4a8c3a8a 100644 --- a/cpp/shard/Shard.cpp +++ b/cpp/shard/Shard.cpp @@ -39,7 +39,7 @@ struct ShardReq { uint32_t protocol; ShardReqMsg msg; - EggsTime receivedAt; + TernTime receivedAt; IpPort clientAddr; int sockIx; // which sock to use to reply }; @@ -328,9 +328,9 @@ static void packShardResponse( ) { auto respKind = msg.body.kind(); auto reqKind = req.msg.body.kind(); - auto elapsed = eggsNow() - req.receivedAt; + auto elapsed = ternNow() - req.receivedAt; shared.timings[(int)reqKind].add(elapsed); - shared.errors[(int)reqKind].add( respKind != ShardMessageKind::ERROR ? EggsError::NO_ERROR : msg.body.getError()); + shared.errors[(int)reqKind].add( respKind != ShardMessageKind::ERROR ? TernError::NO_ERROR : msg.body.getError()); if (unlikely(dropArtificially)) { LOG_DEBUG(env, "artificially dropping response %s", msg.id); return; @@ -371,9 +371,9 @@ static void packCheckPointedShardResponse( ) { auto respKind = msg.body.resp.kind(); auto reqKind = req.msg.body.kind(); - auto elapsed = eggsNow() - req.receivedAt; + auto elapsed = ternNow() - req.receivedAt; shared.timings[(int)reqKind].add(elapsed); - shared.errors[(int)reqKind].add( respKind != ShardMessageKind::ERROR ? EggsError::NO_ERROR : msg.body.resp.getError()); + shared.errors[(int)reqKind].add( respKind != ShardMessageKind::ERROR ? 
TernError::NO_ERROR : msg.body.resp.getError()); if (unlikely(dropArtificially)) { LOG_DEBUG(env, "artificially dropping response %s", msg.id); return; @@ -583,7 +583,7 @@ private: return; } - auto t0 = eggsNow(); + auto t0 = ternNow(); LOG_DEBUG(_env, "received request id %s, kind %s, from %s", req.id, req.body.kind(), msg.clientAddr); @@ -623,7 +623,7 @@ private: return; } - auto t0 = eggsNow(); + auto t0 = ternNow(); LOG_DEBUG(_env, "parsed shard response from %s: %s", msg.clientAddr, resp); } @@ -725,10 +725,10 @@ struct std::hash { struct ProxyShardReq { ShardReq req; - EggsTime lastSent; - EggsTime created; - EggsTime gotLogIdx; - EggsTime finished; + TernTime lastSent; + TernTime created; + TernTime gotLogIdx; + TernTime finished; }; struct ShardWriter : Loop { @@ -775,7 +775,7 @@ private: static constexpr Duration PROXIED_REUQEST_TIMEOUT = 100_ms; std::unordered_map _proxyShardRequests; // outstanding proxied shard requests - std::unordered_map> _proxyCatchupRequests; // outstanding logsdb catchup requests to primary leader + std::unordered_map> _proxyCatchupRequests; // outstanding logsdb catchup requests to primary leader std::unordered_map> _proxiedResponses; // responses from primary location that we need to send back to client std::vector _proxyReadRequests; // currently processing proxied read requests @@ -808,7 +808,7 @@ public: _basePath(shared.options.dbDir), _shared(shared), _sender(UDPSenderConfig{.maxMsgSize = MAX_UDP_MTU}), - _packetDropRand(eggsNow().ns), + _packetDropRand(ternNow().ns), _outgoingPacketDropProbability(0), _maxWorkItemsAtOnce(LogsDB::IN_FLIGHT_APPEND_WINDOW * 10), _logsDB(shared.logsDB), @@ -853,7 +853,7 @@ public: return; } // catchup requests first as progressing state is more important then sending new requests - auto now = eggsNow(); + auto now = ternNow(); for(auto& req : _proxyCatchupRequests) { if (now - req.second.second < PROXIED_REUQEST_TIMEOUT) { continue; @@ -1009,7 +1009,7 @@ public: auto it = _proxiedResponses.find(logsDBEntry.idx.u64); if (it != _proxiedResponses.end()) { ALWAYS_ASSERT(_shared.options.isProxyLocation()); - it->second.second.finished = eggsNow(); + it->second.second.finished = ternNow(); logSlowProxyReq(it->second.second); resp.body = std::move(it->second.first); _proxiedResponses.erase(it); @@ -1104,7 +1104,7 @@ public: LogRespMsg resp; resp.id = request.request.msg.id; auto& readResp = resp.body.setLogRead(); - readResp.result = EggsError::NO_ERROR; + readResp.result = TernError::NO_ERROR; readResp.value.els = _logsDBEntries[i].value; _sender.prepareOutgoingMessage( _env, @@ -1179,7 +1179,7 @@ public: _logsDB.processIncomingMessages(_logsDBRequests,_logsDBResponses); _knownLastReleased = std::max(_knownLastReleased,_logsDB.getLastReleased()); _nextTimeout = _logsDB.getNextTimeout(); - auto now = eggsNow(); + auto now = ternNow(); // check for leadership state change and clean up internal state if (unlikely(_isLogsDBLeader != _logsDB.isLeader())) { @@ -1259,7 +1259,7 @@ public: auto& entry = _shardEntries.emplace_back(); auto err = _shared.shardDB.prepareLogEntry(req.msg.body, entry); - if (unlikely(err != EggsError::NO_ERROR)) { + if (unlikely(err != TernError::NO_ERROR)) { _shardEntries.pop_back(); // back out the log entry LOG_ERROR(_env, "error preparing log entry for request: %s from: %s err: %s", req.msg, req.clientAddr, err); // depending on protocol we need different kind of responses @@ -1463,7 +1463,7 @@ public: logsDBEntry.value.assign(buf.data, buf.cursor); } auto err = _logsDB.appendEntries(_logsDBEntries); 
- ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); for (size_t i = 0; i < _shardEntries.size(); ++i) { ALWAYS_ASSERT(_logsDBEntries[i].idx == _shardEntries[i].idx); _inFlightEntries.emplace(_shardEntries[i].idx.u64, std::move(_shardEntries[i])); @@ -1504,7 +1504,7 @@ public: resp.id = snapshotReq.msg.id; auto err = _shared.sharedDB.snapshot(_basePath +"/snapshot-" + std::to_string(snapshotReq.msg.body.getShardSnapshot().snapshotId)); - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { resp.body.setShardSnapshot(); } else { resp.body.setError() = err; @@ -1593,7 +1593,7 @@ public: _replicaInfo = _shared.replicas; uint32_t pulled = _shared.writerRequestsQueue.pull(_workItems, _maxWorkItemsAtOnce, _nextTimeout); - auto start = eggsNow(); + auto start = ternNow(); if (likely(pulled > 0)) { LOG_DEBUG(_env, "pulled %s requests from write queue", pulled); _shared.pulledWriteRequests = _shared.pulledWriteRequests*0.95 + ((double)pulled)*0.05; @@ -1627,7 +1627,7 @@ public: } } logsDBStep(); - auto loopTime = eggsNow() - start; + auto loopTime = ternNow() - start; } }; @@ -1654,7 +1654,7 @@ public: Loop(logger, xmon, "reader"), _shared(shared), _sender(UDPSenderConfig{.maxMsgSize = MAX_UDP_MTU}), - _packetDropRand(eggsNow().ns), + _packetDropRand(ternNow().ns), _outgoingPacketDropProbability(0) { expandKey(ShardKey, _expandedShardKey); @@ -1675,7 +1675,7 @@ public: virtual void step() override { _requests.clear(); uint32_t pulled = _shared.readerRequestsQueue.pull(_requests, MAX_RECV_MSGS * 2); - auto start = eggsNow(); + auto start = ternNow(); if (likely(pulled > 0)) { LOG_DEBUG(_env, "pulled %s requests from read queue", pulled); _shared.pulledReadRequests = _shared.pulledReadRequests*0.95 + ((double)pulled)*0.05; @@ -1875,7 +1875,7 @@ public: } }; -static void logsDBstatsToMetrics(struct MetricsBuilder& metricsBuilder, const LogsDBStats& stats, ShardReplicaId shrid, uint8_t location, EggsTime now) { +static void logsDBstatsToMetrics(struct MetricsBuilder& metricsBuilder, const LogsDBStats& stats, ShardReplicaId shrid, uint8_t location, TernTime now) { { metricsBuilder.measurement("eggsfs_shard_logsdb"); metricsBuilder.tag("shard", shrid); @@ -2035,7 +2035,7 @@ public: } else { _env.clearAlert(_writeQueueAlert); } - auto now = eggsNow(); + auto now = ternNow(); for (ShardMessageKind kind : allShardMessageKind) { const ErrorCount& errs = _shared.errors[(int)kind]; for (int i = 0; i < errs.count.size(); i++) { @@ -2049,7 +2049,7 @@ public: if (i == 0) { _metricsBuilder.tag("error", "NO_ERROR"); } else { - _metricsBuilder.tag("error", (EggsError)i); + _metricsBuilder.tag("error", (TernError)i); } _metricsBuilder.fieldU64("count", count); _metricsBuilder.timestamp(now); diff --git a/cpp/shard/ShardDB.cpp b/cpp/shard/ShardDB.cpp index f38dd2c3..a676237a 100644 --- a/cpp/shard/ShardDB.cpp +++ b/cpp/shard/ShardDB.cpp @@ -89,7 +89,7 @@ // // TODO fill in results -static constexpr uint64_t EGGSFS_PAGE_SIZE = 4096; +static constexpr uint64_t TERNFS_PAGE_SIZE = 4096; static bool validName(const BincodeBytesRef& name) { if (name.size() == 0) { @@ -233,7 +233,7 @@ struct ShardDBImpl { shardInfoExists = true; auto shardInfo = ExternalValue::FromSlice(value); if (shardInfo().shardId() != _shid) { - throw EGGS_EXCEPTION("expected shard id %s, but found %s in DB", _shid, shardInfo().shardId()); + throw TERN_EXCEPTION("expected shard id %s, but found %s in DB", _shid, shardInfo().shardId()); } _secretKey = shardInfo().secretKey(); } @@ -304,26 +304,26 @@ 
struct ShardDBImpl { // ---------------------------------------------------------------- // read-only path - EggsError _statFile(const rocksdb::ReadOptions& options, const StatFileReq& req, StatFileResp& resp) { + TernError _statFile(const rocksdb::ReadOptions& options, const StatFileReq& req, StatFileResp& resp) { std::string fileValue; ExternalValue file; - EggsError err = _getFile(options, req.id, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _getFile(options, req.id, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } resp.mtime = file().mtime(); resp.atime = file().atime(); resp.size = file().fileSize(); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _statTransientFile(const rocksdb::ReadOptions& options, const StatTransientFileReq& req, StatTransientFileResp& resp) { + TernError _statTransientFile(const rocksdb::ReadOptions& options, const StatTransientFileReq& req, StatTransientFileResp& resp) { std::string fileValue; { auto k = InodeIdKey::Static(req.id); auto status = _db->Get(options, _transientCf, k.toSlice(), &fileValue); if (status.IsNotFound()) { - return EggsError::FILE_NOT_FOUND; + return TernError::FILE_NOT_FOUND; } ROCKS_DB_CHECKED(status); } @@ -331,30 +331,30 @@ struct ShardDBImpl { resp.mtime = body().mtime(); resp.size = body().fileSize(); resp.note = body().note(); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _statDirectory(const rocksdb::ReadOptions& options, const StatDirectoryReq& req, StatDirectoryResp& resp) { + TernError _statDirectory(const rocksdb::ReadOptions& options, const StatDirectoryReq& req, StatDirectoryResp& resp) { std::string dirValue; ExternalValue dir; // allowSnapshot=true, the caller can very easily detect if it's snapshot or not - EggsError err = _getDirectory(options, req.id, true, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectory(options, req.id, true, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } resp.mtime = dir().mtime(); resp.owner = dir().ownerId(); dir().info(resp.info); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _readDir(rocksdb::ReadOptions& options, const ReadDirReq& req, ReadDirResp& resp) { + TernError _readDir(rocksdb::ReadOptions& options, const ReadDirReq& req, ReadDirResp& resp) { // we don't want snapshot directories, so check for that early { std::string dirValue; ExternalValue dir; - EggsError err = _getDirectory(options, req.dirId, false, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectory(options, req.dirId, false, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } } @@ -397,7 +397,7 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(it->status()); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } // returns whether we're done @@ -410,7 +410,7 @@ struct ShardDBImpl { const rocksdb::Slice& edgeValue ) { auto& respEdge = resp.results.els.emplace_back(); - EggsTime time; + TernTime time; if (key().current()) { auto edge = ExternalValue::FromSlice(edgeValue); respEdge.current = key().current(); @@ -454,7 +454,7 @@ struct ShardDBImpl { } template - EggsError _fullReadDirSameName(const FullReadDirReq& req, rocksdb::ReadOptions& options, HashMode hashMode, FullReadDirResp& resp) { + TernError _fullReadDirSameName(const FullReadDirReq& req, rocksdb::ReadOptions& options, HashMode hashMode, FullReadDirResp& resp) { bool current = !!(req.flags&FULL_READ_DIR_CURRENT); uint64_t nameHash = 
EdgeKey::computeNameHash(hashMode, req.startName.ref()); @@ -476,10 +476,10 @@ struct ShardDBImpl { }; // begin current - if (current) { if (lookupCurrent()) { return EggsError::NO_ERROR; } } + if (current) { if (lookupCurrent()) { return TernError::NO_ERROR; } } // we looked at the current and we're going forward, nowhere to go from here. - if (current && forwards) { return EggsError::NO_ERROR; } + if (current && forwards) { return TernError::NO_ERROR; } // we're looking at snapshot edges now -- first pick the bounds (important to // minimize tripping over tombstones) @@ -487,7 +487,7 @@ struct ShardDBImpl { snapshotStart().setDirIdWithCurrent(req.dirId, false); snapshotStart().setNameHash(nameHash); snapshotStart().setName(req.startName.ref()); - snapshotStart().setCreationTime(req.startTime.ns ? req.startTime : (forwards ? 0 : EggsTime(~(uint64_t)0))); + snapshotStart().setCreationTime(req.startTime.ns ? req.startTime : (forwards ? 0 : TernTime(~(uint64_t)0))); StaticValue snapshotEnd; rocksdb::Slice snapshotEndSlice; snapshotEnd().setDirIdWithCurrent(req.dirId, false); @@ -522,16 +522,16 @@ struct ShardDBImpl { options.iterate_upper_bound = {}; // we were looking at the snapshots and we're going backwards, nowhere to go from here. - if (!forwards) { return EggsError::NO_ERROR; } + if (!forwards) { return TernError::NO_ERROR; } // end current - if (lookupCurrent()) { return EggsError::NO_ERROR; } + if (lookupCurrent()) { return TernError::NO_ERROR; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } template - EggsError _fullReadDirNormal(const FullReadDirReq& req, rocksdb::ReadOptions& options, HashMode hashMode, FullReadDirResp& resp) { + TernError _fullReadDirNormal(const FullReadDirReq& req, rocksdb::ReadOptions& options, HashMode hashMode, FullReadDirResp& resp) { // this case is simpler, we just traverse all of it forwards or backwards. bool current = !!(req.flags&FULL_READ_DIR_CURRENT); @@ -540,7 +540,7 @@ struct ShardDBImpl { endKey().setDirIdWithCurrent(InodeId::FromU64Unchecked(req.dirId.u64 + (forwards ? 1 : -1)), !forwards); endKey().setNameHash(forwards ? 0 : ~(uint64_t)0); endKey().setName(forwards ? BincodeBytes().ref() : maxName.ref()); - endKey().setCreationTime(forwards ? 0 : EggsTime(~(uint64_t)0)); + endKey().setCreationTime(forwards ? 
0 : TernTime(~(uint64_t)0)); rocksdb::Slice endKeySlice = endKey.toSlice(); if (forwards) { options.iterate_upper_bound = &endKeySlice; @@ -575,10 +575,10 @@ struct ShardDBImpl { } ROCKS_DB_CHECKED(it->status()); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _fullReadDir(rocksdb::ReadOptions& options, const FullReadDirReq& req, FullReadDirResp& resp) { + TernError _fullReadDir(rocksdb::ReadOptions& options, const FullReadDirReq& req, FullReadDirResp& resp) { bool sameName = !!(req.flags&FULL_READ_DIR_SAME_NAME); bool current = !!(req.flags&FULL_READ_DIR_CURRENT); bool forwards = !(req.flags&FULL_READ_DIR_BACKWARDS); @@ -592,8 +592,8 @@ struct ShardDBImpl { std::string dirValue; ExternalValue dir; // allowSnaphsot=true, we're in fullReadDir - EggsError err = _getDirectory(options, req.dirId, true, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectory(options, req.dirId, true, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } hashMode = dir().hashMode(); @@ -614,11 +614,11 @@ struct ShardDBImpl { } } - EggsError _lookup(const rocksdb::ReadOptions& options, const LookupReq& req, LookupResp& resp) { + TernError _lookup(const rocksdb::ReadOptions& options, const LookupReq& req, LookupResp& resp) { uint64_t nameHash; { - EggsError err = _getDirectoryAndHash(options, req.dirId, false, req.name.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectoryAndHash(options, req.dirId, false, req.name.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -631,7 +631,7 @@ struct ShardDBImpl { std::string edgeValue; auto status = _db->Get(options, _edgesCf, reqKey.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::NAME_NOT_FOUND; + return TernError::NAME_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue edge(edgeValue); @@ -639,10 +639,10 @@ struct ShardDBImpl { resp.targetId = edge().targetIdWithLocked().id(); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _visitTransientFiles(const rocksdb::ReadOptions& options, const VisitTransientFilesReq& req, VisitTransientFilesResp& resp) { + TernError _visitTransientFiles(const rocksdb::ReadOptions& options, const VisitTransientFilesReq& req, VisitTransientFilesResp& resp) { resp.nextId = NULL_INODE_ID; { @@ -668,11 +668,11 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(it->status()); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } template - EggsError _visitInodes(const rocksdb::ReadOptions& options, rocksdb::ColumnFamilyHandle* cf, const Req& req, Resp& resp) { + TernError _visitInodes(const rocksdb::ReadOptions& options, rocksdb::ColumnFamilyHandle* cf, const Req& req, Resp& resp) { resp.nextId = NULL_INODE_ID; int budget = pickMtu(req.mtu) - ShardRespMsg::STATIC_SIZE - Resp::STATIC_SIZE; @@ -696,16 +696,16 @@ struct ShardDBImpl { resp.ids.els.pop_back(); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _visitDirectories(const rocksdb::ReadOptions& options, const VisitDirectoriesReq& req, VisitDirectoriesResp& resp) { + TernError _visitDirectories(const rocksdb::ReadOptions& options, const VisitDirectoriesReq& req, VisitDirectoriesResp& resp) { return _visitInodes(options, _directoriesCf, req, resp); } - EggsError _localFileSpans(rocksdb::ReadOptions& options, const LocalFileSpansReq& req, LocalFileSpansResp& resp) { + TernError _localFileSpans(rocksdb::ReadOptions& options, const LocalFileSpansReq& req, LocalFileSpansResp& resp) { if (req.fileId.type() != 
InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::BLOCK_IO_ERROR_FILE; + return TernError::BLOCK_IO_ERROR_FILE; } StaticValue lowerKey; lowerKey().setFileId(InodeId::FromU64Unchecked(req.fileId.u64 - 1)); @@ -815,17 +815,17 @@ struct ShardDBImpl { if (resp.spans.els.size() == 0) { std::string fileValue; ExternalValue file; - EggsError err = _getFile(options, req.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _getFile(options, req.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { // might be a transient file, let's check bool isTransient = false; - if (err == EggsError::FILE_NOT_FOUND) { + if (err == TernError::FILE_NOT_FOUND) { std::string transientFileValue; ExternalValue transientFile; - EggsError transError = _getTransientFile(options, 0, true, req.fileId, transientFileValue, transientFile); - if (transError == EggsError::NO_ERROR) { + TernError transError = _getTransientFile(options, 0, true, req.fileId, transientFileValue, transientFile); + if (transError == TernError::NO_ERROR) { isTransient = true; - } else if (transError != EggsError::FILE_NOT_FOUND) { + } else if (transError != TernError::FILE_NOT_FOUND) { LOG_INFO(_env, "Dropping error gotten when doing fallback transient lookup for id %s: %s", req.fileId, transError); } } @@ -835,12 +835,12 @@ struct ShardDBImpl { } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _fileSpans(rocksdb::ReadOptions& options, const FileSpansReq& req, FileSpansResp& resp) { + TernError _fileSpans(rocksdb::ReadOptions& options, const FileSpansReq& req, FileSpansResp& resp) { if (req.fileId.type() != InodeType::FILE) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } StaticValue lowerKey; lowerKey().setFileId(InodeId::FromU64Unchecked(req.fileId.u64 - 1)); @@ -946,17 +946,17 @@ struct ShardDBImpl { if (resp.spans.els.size() == 0) { std::string fileValue; ExternalValue file; - EggsError err = _getFile(options, req.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _getFile(options, req.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { // might be a transient file, let's check bool isTransient = false; - if (err == EggsError::FILE_NOT_FOUND) { + if (err == TernError::FILE_NOT_FOUND) { std::string transientFileValue; ExternalValue transientFile; - EggsError transError = _getTransientFile(options, 0, true, req.fileId, transientFileValue, transientFile); - if (transError == EggsError::NO_ERROR) { + TernError transError = _getTransientFile(options, 0, true, req.fileId, transientFileValue, transientFile); + if (transError == TernError::NO_ERROR) { isTransient = true; - } else if (transError != EggsError::FILE_NOT_FOUND) { + } else if (transError != TernError::FILE_NOT_FOUND) { LOG_INFO(_env, "Dropping error gotten when doing fallback transient lookup for id %s: %s", req.fileId, transError); } } @@ -966,10 +966,10 @@ struct ShardDBImpl { } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _blockServiceFiles(rocksdb::ReadOptions& options, const BlockServiceFilesReq& req, BlockServiceFilesResp& resp) { + TernError _blockServiceFiles(rocksdb::ReadOptions& options, const BlockServiceFilesReq& req, BlockServiceFilesResp& resp) { int maxFiles = (DEFAULT_UDP_MTU - ShardRespMsg::STATIC_SIZE - BlockServiceFilesResp::STATIC_SIZE) / 8; resp.fileIds.els.reserve(maxFiles); @@ -997,17 +997,17 @@ struct ShardDBImpl { break; } ROCKS_DB_CHECKED(it->status()); - return 
EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _visitFiles(const rocksdb::ReadOptions& options, const VisitFilesReq& req, VisitFilesResp& resp) { + TernError _visitFiles(const rocksdb::ReadOptions& options, const VisitFilesReq& req, VisitFilesResp& resp) { return _visitInodes(options, _filesCf, req, resp); } uint64_t read(const ShardReqContainer& req, ShardRespContainer& resp) { LOG_DEBUG(_env, "processing read-only request of kind %s", req.kind()); - auto err = EggsError::NO_ERROR; + auto err = TernError::NO_ERROR; resp.clear(); auto snapshot = _getCurrentReadSnapshot(); @@ -1052,10 +1052,10 @@ struct ShardDBImpl { err = _visitFiles(options, req.getVisitFiles(), resp.setVisitFiles()); break; default: - throw EGGS_EXCEPTION("bad read-only shard message kind %s", req.kind()); + throw TERN_EXCEPTION("bad read-only shard message kind %s", req.kind()); } - if (unlikely(err != EggsError::NO_ERROR)) { + if (unlikely(err != TernError::NO_ERROR)) { resp.setError() = err; } else { ALWAYS_ASSERT(req.kind() == resp.kind()); @@ -1067,38 +1067,38 @@ struct ShardDBImpl { // ---------------------------------------------------------------- // log preparation - EggsError _prepareConstructFile(EggsTime time, const ConstructFileReq& req, ConstructFileEntry& entry) { + TernError _prepareConstructFile(TernTime time, const ConstructFileReq& req, ConstructFileEntry& entry) { if (req.type != (uint8_t)InodeType::FILE && req.type != (uint8_t)InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } entry.type = req.type; entry.note = req.note; entry.deadlineTime = time + _transientDeadlineInterval; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _checkTransientFileCookie(InodeId id, std::array cookie) { + TernError _checkTransientFileCookie(InodeId id, std::array cookie) { if (id.type() != InodeType::FILE && id.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } std::array expectedCookie; if (cookie != _calcCookie(id)) { - return EggsError::BAD_COOKIE; + return TernError::BAD_COOKIE; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareLinkFile(EggsTime time, const LinkFileReq& req, LinkFileEntry& entry) { + TernError _prepareLinkFile(TernTime time, const LinkFileReq& req, LinkFileEntry& entry) { // some early, preliminary checks if (req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.ownerId.shard() != _shid || req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } @@ -1106,205 +1106,205 @@ struct ShardDBImpl { entry.name = req.name; entry.ownerId = req.ownerId; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } template - EggsError _prepareSameDirectoryRename(EggsTime time, const Req& req, Entry& entry) { + TernError _prepareSameDirectoryRename(TernTime time, const Req& req, Entry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (DontAllowDifferentNames && (req.oldName == req.newName)) { - return EggsError::SAME_SOURCE_AND_DESTINATION; + return 
TernError::SAME_SOURCE_AND_DESTINATION; } if (!validName(req.newName.ref())) { - return EggsError::BAD_NAME; + return TernError::BAD_NAME; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.dirId = req.dirId; entry.oldCreationTime = req.oldCreationTime; entry.oldName = req.oldName; entry.newName = req.newName; entry.targetId = req.targetId; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSoftUnlinkFile(EggsTime time, const SoftUnlinkFileReq& req, SoftUnlinkFileEntry& entry) { + TernError _prepareSoftUnlinkFile(TernTime time, const SoftUnlinkFileReq& req, SoftUnlinkFileEntry& entry) { if (req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.ownerId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.ownerId = req.ownerId; entry.fileId = req.fileId; entry.name = req.name; entry.creationTime = req.creationTime; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareCreateDirectoryInode(EggsTime time, const CreateDirectoryInodeReq& req, CreateDirectoryInodeEntry& entry) { + TernError _prepareCreateDirectoryInode(TernTime time, const CreateDirectoryInodeReq& req, CreateDirectoryInodeEntry& entry) { if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } if (req.id.type() != InodeType::DIRECTORY || req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } entry.id = req.id; entry.ownerId = req.ownerId; entry.info = req.info; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareCreateLockedCurrentEdge(EggsTime time, const CreateLockedCurrentEdgeReq& req, CreateLockedCurrentEdgeEntry& entry) { + TernError _prepareCreateLockedCurrentEdge(TernTime time, const CreateLockedCurrentEdgeReq& req, CreateLockedCurrentEdgeEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } if (!validName(req.name.ref())) { - return EggsError::BAD_NAME; + return TernError::BAD_NAME; } ALWAYS_ASSERT(req.targetId != NULL_INODE_ID); // proper error entry.dirId = req.dirId; entry.targetId = req.targetId; entry.name = req.name; entry.oldCreationTime = req.oldCreationTime; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareUnlockCurrentEdge(EggsTime time, const UnlockCurrentEdgeReq& req, UnlockCurrentEdgeEntry& entry) { + TernError _prepareUnlockCurrentEdge(TernTime time, const UnlockCurrentEdgeReq& req, UnlockCurrentEdgeEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.dirId = req.dirId; entry.targetId = req.targetId; entry.name = req.name; entry.wasMoved = req.wasMoved; entry.creationTime = req.creationTime; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareLockCurrentEdge(EggsTime time, const LockCurrentEdgeReq& req, 
LockCurrentEdgeEntry& entry) { + TernError _prepareLockCurrentEdge(TernTime time, const LockCurrentEdgeReq& req, LockCurrentEdgeEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.dirId = req.dirId; entry.name = req.name; entry.targetId = req.targetId; entry.creationTime = req.creationTime; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveDirectoryOwner(EggsTime time, const RemoveDirectoryOwnerReq& req, RemoveDirectoryOwnerEntry& entry) { + TernError _prepareRemoveDirectoryOwner(TernTime time, const RemoveDirectoryOwnerReq& req, RemoveDirectoryOwnerEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } ALWAYS_ASSERT(req.dirId != ROOT_DIR_INODE_ID); // TODO proper error entry.dirId = req.dirId; entry.info = req.info; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveInode(EggsTime time, const RemoveInodeReq& req, RemoveInodeEntry& entry) { + TernError _prepareRemoveInode(TernTime time, const RemoveInodeReq& req, RemoveInodeEntry& entry) { if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } if (req.id == ROOT_DIR_INODE_ID) { - return EggsError::CANNOT_REMOVE_ROOT_DIRECTORY; + return TernError::CANNOT_REMOVE_ROOT_DIRECTORY; } entry.id = req.id; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSetDirectoryOwner(EggsTime time, const SetDirectoryOwnerReq& req, SetDirectoryOwnerEntry& entry) { + TernError _prepareSetDirectoryOwner(TernTime time, const SetDirectoryOwnerReq& req, SetDirectoryOwnerEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } if (req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } entry.dirId = req.dirId; entry.ownerId = req.ownerId; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSetDirectoryInfo(EggsTime time, const SetDirectoryInfoReq& req, SetDirectoryInfoEntry& entry) { + TernError _prepareSetDirectoryInfo(TernTime time, const SetDirectoryInfoReq& req, SetDirectoryInfoEntry& entry) { if (req.id.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.dirId = req.id; entry.info = req.info; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveNonOwnedEdge(EggsTime time, const RemoveNonOwnedEdgeReq& req, RemoveNonOwnedEdgeEntry& entry) { + TernError _prepareRemoveNonOwnedEdge(TernTime time, const RemoveNonOwnedEdgeReq& req, RemoveNonOwnedEdgeEntry& entry) { if (req.dirId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.dirId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.dirId = req.dirId; 
entry.creationTime = req.creationTime; entry.name = req.name; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSameShardHardFileUnlink(EggsTime time, const SameShardHardFileUnlinkReq& req, SameShardHardFileUnlinkEntry& entry) { + TernError _prepareSameShardHardFileUnlink(TernTime time, const SameShardHardFileUnlinkReq& req, SameShardHardFileUnlinkEntry& entry) { if (req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.targetId.type() != InodeType::FILE && req.targetId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.ownerId.shard() != _shid || req.targetId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.ownerId = req.ownerId; entry.targetId = req.targetId; entry.name = req.name; entry.creationTime = req.creationTime; entry.deadlineTime = time; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveSpanInitiate(EggsTime time, const RemoveSpanInitiateReq& req, RemoveSpanInitiateEntry& entry) { + TernError _prepareRemoveSpanInitiate(TernTime time, const RemoveSpanInitiateReq& req, RemoveSpanInitiateEntry& entry) { if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } { - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } } entry.fileId = req.fileId; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } bool _checkSpanBody(const AddSpanInitiateReq& req) { @@ -1374,16 +1374,16 @@ struct ShardDBImpl { return false; } - EggsError _prepareAddInlineSpan(EggsTime time, const AddInlineSpanReq& req, AddInlineSpanEntry& entry) { + TernError _prepareAddInlineSpan(TernTime time, const AddInlineSpanReq& req, AddInlineSpanEntry& entry) { if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } { - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } } @@ -1391,28 +1391,28 @@ struct ShardDBImpl { if (req.storageClass == EMPTY_STORAGE) { if (req.size != 0) { LOG_DEBUG(_env, "empty span has size != 0: %s", req.size); - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } } else if (req.storageClass == INLINE_STORAGE) { if (req.size == 0 || req.size < req.body.size()) { LOG_DEBUG(_env, "inline span has req.size=%s == 0 || req.size=%s < req.body.size()=%s", req.size, req.size, (int)req.body.size()); - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } } else { LOG_DEBUG(_env, "inline span has bad storage class %s", req.storageClass); - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } - if (req.byteOffset%EGGSFS_PAGE_SIZE != 0) { - RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "req.byteOffset=%s is not a multiple of 
PAGE_SIZE=%s", req.byteOffset, EGGSFS_PAGE_SIZE); - return EggsError::BAD_SPAN_BODY; + if (req.byteOffset%TERNFS_PAGE_SIZE != 0) { + RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "req.byteOffset=%s is not a multiple of PAGE_SIZE=%s", req.byteOffset, TERNFS_PAGE_SIZE); + return TernError::BAD_SPAN_BODY; } uint32_t expectedCrc = crc32c(0, req.body.data(), req.body.size()); expectedCrc = crc32c_zero_extend(expectedCrc, req.size - req.body.size()); if (expectedCrc != req.crc.u32) { LOG_DEBUG(_env, "inline span expected CRC %s, got %s", Crc(expectedCrc), req.crc); - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } entry.fileId = req.fileId; @@ -1422,36 +1422,36 @@ struct ShardDBImpl { entry.body = req.body; entry.crc = req.crc; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareAddSpanInitiate(const rocksdb::ReadOptions& options, EggsTime time, const AddSpanAtLocationInitiateReq& request, InodeId reference, AddSpanAtLocationInitiateEntry& entry) { + TernError _prepareAddSpanInitiate(const rocksdb::ReadOptions& options, TernTime time, const AddSpanAtLocationInitiateReq& request, InodeId reference, AddSpanAtLocationInitiateEntry& entry) { auto& req = request.req.req; if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (reference.type() != InodeType::FILE && reference.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } { - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } } if (req.storageClass == INLINE_STORAGE || req.storageClass == EMPTY_STORAGE) { LOG_DEBUG(_env, "bad storage class %s for blocks span", (int)req.storageClass); - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } - if (req.byteOffset%EGGSFS_PAGE_SIZE != 0 || req.cellSize%EGGSFS_PAGE_SIZE != 0) { - RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "req.byteOffset=%s or cellSize=%s is not a multiple of PAGE_SIZE=%s", req.byteOffset, req.cellSize, EGGSFS_PAGE_SIZE); - return EggsError::BAD_SPAN_BODY; + if (req.byteOffset%TERNFS_PAGE_SIZE != 0 || req.cellSize%TERNFS_PAGE_SIZE != 0) { + RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "req.byteOffset=%s or cellSize=%s is not a multiple of PAGE_SIZE=%s", req.byteOffset, req.cellSize, TERNFS_PAGE_SIZE); + return TernError::BAD_SPAN_BODY; } if (!_checkSpanBody(req)) { - return EggsError::BAD_SPAN_BODY; + return TernError::BAD_SPAN_BODY; } // start filling in entry @@ -1595,7 +1595,7 @@ struct ShardDBImpl { } // If we still couldn't find enough block services, we're toast. 
if (pickedBlockServices.size() < req.parity.blocks()) { - return EggsError::COULD_NOT_PICK_BLOCK_SERVICES; + return TernError::COULD_NOT_PICK_BLOCK_SERVICES; } // Now generate the blocks entry.bodyBlocks.els.resize(req.parity.blocks()); @@ -1610,99 +1610,99 @@ struct ShardDBImpl { } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareAddSpanCertify(EggsTime time, const AddSpanCertifyReq& req, AddSpanCertifyEntry& entry) { + TernError _prepareAddSpanCertify(TernTime time, const AddSpanCertifyReq& req, AddSpanCertifyEntry& entry) { if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } { - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } } entry.fileId = req.fileId; entry.byteOffset = req.byteOffset; entry.proofs = req.proofs; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareMakeFileTransient(EggsTime time, const MakeFileTransientReq& req, MakeFileTransientEntry& entry) { + TernError _prepareMakeFileTransient(TernTime time, const MakeFileTransientReq& req, MakeFileTransientEntry& entry) { if (req.id.type() != InodeType::FILE && req.id.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.id = req.id; entry.note = req.note; entry.deadlineTime = time; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareScrapTransientFile(EggsTime time, const ScrapTransientFileReq& req, ScrapTransientFileEntry& entry) { + TernError _prepareScrapTransientFile(TernTime time, const ScrapTransientFileReq& req, ScrapTransientFileEntry& entry) { if (req.id.type() != InodeType::FILE) { - return EggsError::FILE_IS_NOT_TRANSIENT; + return TernError::FILE_IS_NOT_TRANSIENT; } if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } - EggsError err = _checkTransientFileCookie(req.id, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.id, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } entry.id = req.id; entry.deadlineTime = time; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveSpanCertify(EggsTime time, const RemoveSpanCertifyReq& req, RemoveSpanCertifyEntry& entry) { + TernError _prepareRemoveSpanCertify(TernTime time, const RemoveSpanCertifyReq& req, RemoveSpanCertifyEntry& entry) { if (req.fileId.type() != InodeType::FILE && req.fileId.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } { - EggsError err = _checkTransientFileCookie(req.fileId, req.cookie.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId, req.cookie.data); + if (err != TernError::NO_ERROR) { return err; } } entry.fileId = req.fileId; entry.byteOffset = req.byteOffset; entry.proofs = req.proofs; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError 
_prepareRemoveOwnedSnapshotFileEdge(EggsTime time, const RemoveOwnedSnapshotFileEdgeReq& req, RemoveOwnedSnapshotFileEdgeEntry& entry) { + TernError _prepareRemoveOwnedSnapshotFileEdge(TernTime time, const RemoveOwnedSnapshotFileEdgeReq& req, RemoveOwnedSnapshotFileEdgeEntry& entry) { if (req.ownerId.type() != InodeType::DIRECTORY) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } if (req.ownerId.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } if (req.targetId.type () != InodeType::FILE && req.targetId.type () != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } entry.ownerId = req.ownerId; entry.targetId = req.targetId; entry.creationTime = req.creationTime; entry.name = req.name; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSwapBlocks(EggsTime time, const SwapBlocksReq& req, SwapBlocksEntry& entry) { + TernError _prepareSwapBlocks(TernTime time, const SwapBlocksReq& req, SwapBlocksEntry& entry) { if (req.fileId1.type() == InodeType::DIRECTORY || req.fileId2.type() == InodeType::DIRECTORY) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId1.shard() != _shid || req.fileId2.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } ALWAYS_ASSERT(req.fileId1 != req.fileId2); entry.fileId1 = req.fileId1; @@ -1711,15 +1711,15 @@ struct ShardDBImpl { entry.fileId2 = req.fileId2; entry.byteOffset2 = req.byteOffset2; entry.blockId2 = req.blockId2; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSwapSpans(EggsTime time, const SwapSpansReq& req, SwapSpansEntry& entry) { + TernError _prepareSwapSpans(TernTime time, const SwapSpansReq& req, SwapSpansEntry& entry) { if (req.fileId1.type() == InodeType::DIRECTORY || req.fileId2.type() == InodeType::DIRECTORY) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId1.shard() != _shid || req.fileId2.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } ALWAYS_ASSERT(req.fileId1 != req.fileId2); entry.fileId1 = req.fileId1; @@ -1728,15 +1728,15 @@ struct ShardDBImpl { entry.fileId2 = req.fileId2; entry.byteOffset2 = req.byteOffset2; entry.blocks2 = req.blocks2; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareAddSpanLocation(EggsTime time, const AddSpanLocationReq& req, AddSpanLocationEntry& entry) { + TernError _prepareAddSpanLocation(TernTime time, const AddSpanLocationReq& req, AddSpanLocationEntry& entry) { if (req.fileId1.type() == InodeType::DIRECTORY || req.fileId2.type() == InodeType::DIRECTORY) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId1.shard() != _shid || req.fileId2.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } ALWAYS_ASSERT(req.fileId1 != req.fileId2); entry.fileId1 = req.fileId1; @@ -1744,22 +1744,22 @@ struct ShardDBImpl { entry.blocks1 = req.blocks1; entry.fileId2 = req.fileId2; entry.byteOffset2 = req.byteOffset2; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareMoveSpan(EggsTime time, const MoveSpanReq& req, MoveSpanEntry& entry) { + TernError _prepareMoveSpan(TernTime time, const MoveSpanReq& req, MoveSpanEntry& entry) { if (req.fileId1.type() == InodeType::DIRECTORY || req.fileId2.type() == InodeType::DIRECTORY) { - return 
EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.fileId1.shard() != _shid || req.fileId2.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } - EggsError err = _checkTransientFileCookie(req.fileId1, req.cookie1.data); - if (err != EggsError::NO_ERROR) { + TernError err = _checkTransientFileCookie(req.fileId1, req.cookie1.data); + if (err != TernError::NO_ERROR) { return err; } err = _checkTransientFileCookie(req.fileId2, req.cookie2.data); - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } entry.fileId1 = req.fileId1; @@ -1769,34 +1769,34 @@ struct ShardDBImpl { entry.cookie2 = req.cookie2; entry.byteOffset2 = req.byteOffset2; entry.spanSize = req.spanSize; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareSetTime(EggsTime time, const SetTimeReq& req, SetTimeEntry& entry) { + TernError _prepareSetTime(TernTime time, const SetTimeReq& req, SetTimeEntry& entry) { if (req.id.type() == InodeType::DIRECTORY) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } if (req.id.shard() != _shid) { - return EggsError::BAD_SHARD; + return TernError::BAD_SHARD; } entry.id = req.id; entry.atime = req.atime; entry.mtime = req.mtime; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _prepareRemoveZeroBlockServiceFiles(EggsTime time, const RemoveZeroBlockServiceFilesReq& req, RemoveZeroBlockServiceFilesEntry& entry) { + TernError _prepareRemoveZeroBlockServiceFiles(TernTime time, const RemoveZeroBlockServiceFilesReq& req, RemoveZeroBlockServiceFilesEntry& entry) { entry.startBlockService = req.startBlockService; entry.startFile = req.startFile; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry) { + TernError prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry) { LOG_DEBUG(_env, "processing write request of kind %s", req.kind()); logEntry.clear(); - auto err = EggsError::NO_ERROR; + auto err = TernError::NO_ERROR; - EggsTime time = eggsNow(); + TernTime time = ternNow(); logEntry.time = time; auto& logEntryBody = logEntry.body; @@ -1910,10 +1910,10 @@ struct ShardDBImpl { err = _prepareAddSpanLocation(time, req.getAddSpanLocation(), logEntryBody.setAddSpanLocation()); break; default: - throw EGGS_EXCEPTION("bad write shard message kind %s", req.kind()); + throw TERN_EXCEPTION("bad write shard message kind %s", req.kind()); } - if (err == EggsError::NO_ERROR) { + if (err == TernError::NO_ERROR) { LOG_DEBUG(_env, "prepared log entry of kind %s, for request of kind %s", logEntryBody.kind(), req.kind()); LOG_TRACE(_env, "log entry body: %s", logEntryBody); } else { @@ -1935,7 +1935,7 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put({}, shardMetadataKey(&LAST_APPLIED_LOG_ENTRY_KEY), v.toSlice())); } - EggsError _applyConstructFile(rocksdb::WriteBatch& batch, EggsTime time, const ConstructFileEntry& entry, ConstructFileResp& resp) { + TernError _applyConstructFile(rocksdb::WriteBatch& batch, TernTime time, const ConstructFileEntry& entry, ConstructFileResp& resp) { const auto nextFileId = [this, &batch](const ShardMetadataKey* key) -> InodeId { std::string value; ROCKS_DB_CHECKED(_db->Get({}, shardMetadataKey(key), &value)); @@ -1950,7 +1950,7 @@ struct ShardDBImpl { } else if (entry.type == (uint8_t)InodeType::SYMLINK) { id = nextFileId(&NEXT_SYMLINK_ID_KEY); } else { - throw EGGS_EXCEPTION("Bad type %s", (int)entry.type); + 
throw TERN_EXCEPTION("Bad type %s", (int)entry.type); } // write to rocks @@ -1968,20 +1968,20 @@ struct ShardDBImpl { resp.id = id; resp.cookie.data = _calcCookie(resp.id); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyLinkFile(rocksdb::WriteBatch& batch, EggsTime time, const LinkFileEntry& entry, LinkFileResp& resp) { + TernError _applyLinkFile(rocksdb::WriteBatch& batch, TernTime time, const LinkFileEntry& entry, LinkFileResp& resp) { std::string fileValue; ExternalValue transientFile; { - EggsError err = _getTransientFile({}, time, false /*allowPastDeadline*/, entry.fileId, fileValue, transientFile); - if (err == EggsError::FILE_NOT_FOUND) { + TernError err = _getTransientFile({}, time, false /*allowPastDeadline*/, entry.fileId, fileValue, transientFile); + if (err == TernError::FILE_NOT_FOUND) { // Check if the file has already been linked to simplify the life of retrying // clients. uint64_t nameHash; // Return original error if the dir doens't exist, since this is some recovery mechanism anyway - if (_getDirectoryAndHash({}, entry.ownerId, false /*allowSnapshot*/, entry.name.ref(), nameHash) != EggsError::NO_ERROR) { + if (_getDirectoryAndHash({}, entry.ownerId, false /*allowSnapshot*/, entry.name.ref(), nameHash) != TernError::NO_ERROR) { LOG_DEBUG(_env, "could not find directory after FILE_NOT_FOUND for link file"); return err; } @@ -2004,13 +2004,13 @@ struct ShardDBImpl { return err; } resp.creationTime = edge().creationTime(); - return EggsError::NO_ERROR; - } else if (err != EggsError::NO_ERROR) { + return TernError::NO_ERROR; + } else if (err != TernError::NO_ERROR) { return err; } } if (transientFile().lastSpanState() != SpanState::CLEAN) { - return EggsError::LAST_SPAN_STATE_NOT_CLEAN; + return TernError::LAST_SPAN_STATE_NOT_CLEAN; } // move from transient to non-transient. @@ -2025,19 +2025,19 @@ struct ShardDBImpl { // create edge in owner. { - EggsError err = ShardDBImpl::_createCurrentEdge(time, batch, entry.ownerId, entry.name, entry.fileId, false, 0, resp.creationTime); - if (err != EggsError::NO_ERROR) { + TernError err = ShardDBImpl::_createCurrentEdge(time, batch, entry.ownerId, entry.name, entry.fileId, false, 0, resp.creationTime); + if (err != TernError::NO_ERROR) { return err; } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _initiateDirectoryModification(EggsTime time, bool allowSnapshot, rocksdb::WriteBatch& batch, InodeId dirId, std::string& dirValue, ExternalValue& dir) { + TernError _initiateDirectoryModification(TernTime time, bool allowSnapshot, rocksdb::WriteBatch& batch, InodeId dirId, std::string& dirValue, ExternalValue& dir) { ExternalValue tmpDir; - EggsError err = _getDirectory({}, dirId, allowSnapshot, dirValue, tmpDir); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectory({}, dirId, allowSnapshot, dirValue, tmpDir); + if (err != TernError::NO_ERROR) { return err; } @@ -2046,7 +2046,7 @@ struct ShardDBImpl { // This should be very uncommon. 
if (tmpDir().mtime() >= time) { RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "trying to modify dir %s going backwards in time, dir mtime is %s, log entry time is %s", dirId, tmpDir().mtime(), time); - return EggsError::MTIME_IS_TOO_RECENT; + return TernError::MTIME_IS_TOO_RECENT; } // Modify the directory mtime @@ -2057,19 +2057,19 @@ struct ShardDBImpl { } dir = tmpDir; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } // When we just want to compute the hash of something when modifying the dir - EggsError _initiateDirectoryModificationAndHash(EggsTime time, bool allowSnapshot, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytesRef& name, uint64_t& nameHash) { + TernError _initiateDirectoryModificationAndHash(TernTime time, bool allowSnapshot, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytesRef& name, uint64_t& nameHash) { ExternalValue dir; std::string dirValue; - EggsError err = _initiateDirectoryModification(time, allowSnapshot, batch, dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, allowSnapshot, batch, dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } nameHash = EdgeKey::computeNameHash(dir().hashMode(), name); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } // Note that we cannot expose an API which allows us to create non-locked current edges, @@ -2077,11 +2077,11 @@ struct ShardDBImpl { // // The creation time might be different than the current time because we might find it // in an existing edge. - EggsError _createCurrentEdge( - EggsTime logEntryTime, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytes& name, InodeId targetId, + TernError _createCurrentEdge( + TernTime logEntryTime, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytes& name, InodeId targetId, // if locked=true, oldCreationTime will be used to check that we're locking the right edge. - bool locked, EggsTime oldCreationTime, - EggsTime& creationTime + bool locked, TernTime oldCreationTime, + TernTime& creationTime ) { ALWAYS_ASSERT(locked || oldCreationTime == 0); @@ -2090,8 +2090,8 @@ struct ShardDBImpl { uint64_t nameHash; { // allowSnaphsot=false since we cannot create current edges in snapshot directories. - EggsError err = _initiateDirectoryModificationAndHash(logEntryTime, false, batch, dirId, name.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModificationAndHash(logEntryTime, false, batch, dirId, name.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -2122,7 +2122,7 @@ struct ShardDBImpl { auto k = ExternalValue::FromSlice(it->key()); if (k().dirId() == dirId && !k().current() && k().nameHash() == nameHash && k().name() == name.ref()) { if (k().creationTime() >= creationTime) { - return EggsError::MORE_RECENT_SNAPSHOT_EDGE; + return TernError::MORE_RECENT_SNAPSHOT_EDGE; } } } @@ -2134,16 +2134,16 @@ struct ShardDBImpl { // we have an existing locked edge, we need to make sure that it's the one we expect for // idempotency. 
if (!locked) { // the edge we're trying to create is not locked - return EggsError::NAME_IS_LOCKED; + return TernError::NAME_IS_LOCKED; } if (existingEdge().targetId() != targetId) { LOG_DEBUG(_env, "expecting target %s, got %s instead", existingEdge().targetId(), targetId); - return EggsError::MISMATCHING_TARGET; + return TernError::MISMATCHING_TARGET; } // we're not locking the right thing if (existingEdge().creationTime() != oldCreationTime) { LOG_DEBUG(_env, "expecting time %s, got %s instead", existingEdge().creationTime(), oldCreationTime); - return EggsError::MISMATCHING_CREATION_TIME; + return TernError::MISMATCHING_CREATION_TIME; } // The new creation time doesn't budge! creationTime = existingEdge().creationTime(); @@ -2152,12 +2152,12 @@ struct ShardDBImpl { // this automatically is if a file is overriding another file, which is also how it // works in linux/posix (see `man 2 rename`). if (existingEdge().creationTime() >= creationTime) { - return EggsError::MORE_RECENT_CURRENT_EDGE; + return TernError::MORE_RECENT_CURRENT_EDGE; } if ( targetId.type() == InodeType::DIRECTORY || existingEdge().targetIdWithLocked().id().type() == InodeType::DIRECTORY ) { - return EggsError::CANNOT_OVERRIDE_NAME; + return TernError::CANNOT_OVERRIDE_NAME; } // make what is now the current edge a snapshot edge -- no need to delete it, // it'll be overwritten below. @@ -2183,37 +2183,37 @@ struct ShardDBImpl { edgeBody().setCreationTime(creationTime); ROCKS_DB_CHECKED(batch.Put(_edgesCf, edgeKey.toSlice(), edgeBody.toSlice())); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySameDirectoryRename(EggsTime time, rocksdb::WriteBatch& batch, const SameDirectoryRenameEntry& entry, SameDirectoryRenameResp& resp) { + TernError _applySameDirectoryRename(TernTime time, rocksdb::WriteBatch& batch, const SameDirectoryRenameEntry& entry, SameDirectoryRenameResp& resp) { // First, remove the old edge -- which won't be owned anymore, since we're renaming it. { - EggsError err = _softUnlinkCurrentEdge(time, batch, entry.dirId, entry.oldName, entry.oldCreationTime, entry.targetId, false); - if (err != EggsError::NO_ERROR) { + TernError err = _softUnlinkCurrentEdge(time, batch, entry.dirId, entry.oldName, entry.oldCreationTime, entry.targetId, false); + if (err != TernError::NO_ERROR) { return err; } } // Now, create the new one { - EggsError err = _createCurrentEdge(time, batch, entry.dirId, entry.newName, entry.targetId, false, 0, resp.newCreationTime); - if (err != EggsError::NO_ERROR) { + TernError err = _createCurrentEdge(time, batch, entry.dirId, entry.newName, entry.targetId, false, 0, resp.newCreationTime); + if (err != TernError::NO_ERROR) { return err; } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySameDirectoryRenameSnapshot(EggsTime time, rocksdb::WriteBatch& batch, const SameDirectoryRenameSnapshotEntry& entry, SameDirectoryRenameSnapshotResp& resp) { + TernError _applySameDirectoryRenameSnapshot(TernTime time, rocksdb::WriteBatch& batch, const SameDirectoryRenameSnapshotEntry& entry, SameDirectoryRenameSnapshotResp& resp) { // First, disown the snapshot edge. 
{ // compute hash uint64_t nameHash; { // allowSnaphsot=false since we can't have owned edges in snapshot dirs - EggsError err = _initiateDirectoryModificationAndHash(time, false, batch, entry.dirId, entry.oldName.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModificationAndHash(time, false, batch, entry.dirId, entry.oldName.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -2227,16 +2227,16 @@ struct ShardDBImpl { std::string edgeValue; auto status = _db->Get({}, _edgesCf, edgeKey.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::EDGE_NOT_FOUND; + return TernError::EDGE_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue edgeBody(edgeValue); if (edgeBody().targetIdWithOwned().id() != entry.targetId) { LOG_DEBUG(_env, "expecting target %s, but got %s", entry.targetId, edgeBody().targetIdWithOwned().id()); - return EggsError::MISMATCHING_TARGET; + return TernError::MISMATCHING_TARGET; } if (!edgeBody().targetIdWithOwned().extra()) { // owned - return EggsError::EDGE_NOT_OWNED; + return TernError::EDGE_NOT_OWNED; } // make the snapshot edge non-owned @@ -2255,22 +2255,22 @@ struct ShardDBImpl { // Now, create the new one { - EggsError err = _createCurrentEdge(time, batch, entry.dirId, entry.newName, entry.targetId, false, 0, resp.newCreationTime); - if (err != EggsError::NO_ERROR) { + TernError err = _createCurrentEdge(time, batch, entry.dirId, entry.newName, entry.targetId, false, 0, resp.newCreationTime); + if (err != TernError::NO_ERROR) { return err; } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } // the creation time of the delete edge is always `time`. - EggsError _softUnlinkCurrentEdge(EggsTime time, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytes& name, EggsTime creationTime, InodeId targetId, bool owned) { + TernError _softUnlinkCurrentEdge(TernTime time, rocksdb::WriteBatch& batch, InodeId dirId, const BincodeBytes& name, TernTime creationTime, InodeId targetId, bool owned) { // compute hash uint64_t nameHash; { // allowSnaphsot=false since we can't have current edges in snapshot dirs - EggsError err = _initiateDirectoryModificationAndHash(time, false, batch, dirId, name.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModificationAndHash(time, false, batch, dirId, name.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -2283,20 +2283,20 @@ struct ShardDBImpl { std::string edgeValue; auto status = _db->Get({}, _edgesCf, edgeKey.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::EDGE_NOT_FOUND; + return TernError::EDGE_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue edgeBody(edgeValue); if (edgeBody().targetIdWithLocked().id() != targetId) { LOG_DEBUG(_env, "expecting target %s, but got %s", targetId, edgeBody().targetIdWithLocked().id()); - return EggsError::MISMATCHING_TARGET; + return TernError::MISMATCHING_TARGET; } if (edgeBody().creationTime() != creationTime) { LOG_DEBUG(_env, "expected time %s, got %s", edgeBody().creationTime(), creationTime); - return EggsError::MISMATCHING_CREATION_TIME; + return TernError::MISMATCHING_CREATION_TIME; } if (edgeBody().targetIdWithLocked().extra()) { // locked - return EggsError::EDGE_IS_LOCKED; + return TernError::EDGE_IS_LOCKED; } // delete the current edge @@ -2319,17 +2319,17 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_edgesCf, k.toSlice(), v.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - 
EggsError _applySoftUnlinkFile(EggsTime time, rocksdb::WriteBatch& batch, const SoftUnlinkFileEntry& entry, SoftUnlinkFileResp& resp) { - EggsError err = _softUnlinkCurrentEdge(time, batch, entry.ownerId, entry.name, entry.creationTime, entry.fileId, true); - if (err != EggsError::NO_ERROR) { return err; } + TernError _applySoftUnlinkFile(TernTime time, rocksdb::WriteBatch& batch, const SoftUnlinkFileEntry& entry, SoftUnlinkFileResp& resp) { + TernError err = _softUnlinkCurrentEdge(time, batch, entry.ownerId, entry.name, entry.creationTime, entry.fileId, true); + if (err != TernError::NO_ERROR) { return err; } resp.deleteCreationTime = time; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyCreateDirectoryInode(EggsTime time, rocksdb::WriteBatch& batch, const CreateDirectoryInodeEntry& entry, CreateDirectoryInodeResp& resp) { + TernError _applyCreateDirectoryInode(TernTime time, rocksdb::WriteBatch& batch, const CreateDirectoryInodeEntry& entry, CreateDirectoryInodeResp& resp) { // The assumption here is that only the CDC creates directories, and it doles out // inode ids per transaction, so that you'll never get competing creates here, but // we still check that the parent makes sense. @@ -2337,14 +2337,14 @@ struct ShardDBImpl { std::string dirValue; ExternalValue dir; // we never create directories as snapshot - EggsError err = _getDirectory({}, entry.id, false, dirValue, dir); - if (err == EggsError::NO_ERROR) { + TernError err = _getDirectory({}, entry.id, false, dirValue, dir); + if (err == TernError::NO_ERROR) { if (dir().ownerId() != entry.ownerId) { - return EggsError::MISMATCHING_OWNER; + return TernError::MISMATCHING_OWNER; } else { - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - } else if (err == EggsError::DIRECTORY_NOT_FOUND) { + } else if (err == TernError::DIRECTORY_NOT_FOUND) { // we continue } else { return err; @@ -2363,25 +2363,25 @@ struct ShardDBImpl { resp.mtime = time; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyCreateLockedCurrentEdge(EggsTime time, rocksdb::WriteBatch& batch, const CreateLockedCurrentEdgeEntry& entry, CreateLockedCurrentEdgeResp& resp) { + TernError _applyCreateLockedCurrentEdge(TernTime time, rocksdb::WriteBatch& batch, const CreateLockedCurrentEdgeEntry& entry, CreateLockedCurrentEdgeResp& resp) { auto err = _createCurrentEdge(time, batch, entry.dirId, entry.name, entry.targetId, true, entry.oldCreationTime, resp.creationTime); // locked=true - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyUnlockCurrentEdge(EggsTime time, rocksdb::WriteBatch& batch, const UnlockCurrentEdgeEntry& entry, UnlockCurrentEdgeResp& resp) { + TernError _applyUnlockCurrentEdge(TernTime time, rocksdb::WriteBatch& batch, const UnlockCurrentEdgeEntry& entry, UnlockCurrentEdgeResp& resp) { uint64_t nameHash; { std::string dirValue; ExternalValue dir; // allowSnaphsot=false since no current edges in snapshot dirs - EggsError err = _initiateDirectoryModification(time, false, batch, entry.dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, false, batch, entry.dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } nameHash = EdgeKey::computeNameHash(dir().hashMode(), entry.name.ref()); @@ -2395,14 +2395,14 @@ struct ShardDBImpl { { auto status = _db->Get({}, _edgesCf, currentKey.toSlice(), &edgeValue); 
if (status.IsNotFound()) { - return EggsError::EDGE_NOT_FOUND; + return TernError::EDGE_NOT_FOUND; } ROCKS_DB_CHECKED(status); } ExternalValue edge(edgeValue); if (edge().creationTime() != entry.creationTime) { LOG_DEBUG(_env, "expected time %s, got %s", edge().creationTime(), entry.creationTime); - return EggsError::MISMATCHING_CREATION_TIME; + return TernError::MISMATCHING_CREATION_TIME; } if (edge().locked()) { edge().setTargetIdWithLocked(InodeIdExtra(entry.targetId, false)); // locked=false @@ -2426,18 +2426,18 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_edgesCf, snapshotKey.toSlice(), snapshotBody.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyLockCurrentEdge(EggsTime time, rocksdb::WriteBatch& batch, const LockCurrentEdgeEntry& entry, LockCurrentEdgeResp& resp) { + TernError _applyLockCurrentEdge(TernTime time, rocksdb::WriteBatch& batch, const LockCurrentEdgeEntry& entry, LockCurrentEdgeResp& resp) { // TODO lots of duplication with _applyUnlockCurrentEdge uint64_t nameHash; { std::string dirValue; ExternalValue dir; // allowSnaphsot=false since no current edges in snapshot dirs - EggsError err = _initiateDirectoryModification(time, false, batch, entry.dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, false, batch, entry.dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } nameHash = EdgeKey::computeNameHash(dir().hashMode(), entry.name.ref()); @@ -2451,34 +2451,34 @@ struct ShardDBImpl { { auto status = _db->Get({}, _edgesCf, currentKey.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::EDGE_NOT_FOUND; + return TernError::EDGE_NOT_FOUND; } ROCKS_DB_CHECKED(status); } ExternalValue edge(edgeValue); if (edge().creationTime() != entry.creationTime) { LOG_DEBUG(_env, "expected time %s, got %s", edge().creationTime(), entry.creationTime); - return EggsError::MISMATCHING_CREATION_TIME; + return TernError::MISMATCHING_CREATION_TIME; } if (!edge().locked()) { edge().setTargetIdWithLocked({entry.targetId, true}); // locked=true ROCKS_DB_CHECKED(batch.Put(_edgesCf, currentKey.toSlice(), edge.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveDirectoryOwner(EggsTime time, rocksdb::WriteBatch& batch, const RemoveDirectoryOwnerEntry& entry, RemoveDirectoryOwnerResp& resp) { + TernError _applyRemoveDirectoryOwner(TernTime time, rocksdb::WriteBatch& batch, const RemoveDirectoryOwnerEntry& entry, RemoveDirectoryOwnerResp& resp) { std::string dirValue; ExternalValue dir; { // allowSnapshot=true for idempotency (see below) - EggsError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } if (dir().ownerId() == NULL_INODE_ID) { - return EggsError::NO_ERROR; // already done + return TernError::NO_ERROR; // already done } } @@ -2494,7 +2494,7 @@ struct ShardDBImpl { if (it->Valid()) { auto otherEdge = ExternalValue::FromSlice(it->key()); if (otherEdge().dirId() == entry.dirId && otherEdge().current()) { - return EggsError::DIRECTORY_NOT_EMPTY; + return TernError::DIRECTORY_NOT_EMPTY; } } else if (it->status().IsNotFound()) { // nothing to do @@ -2514,25 +2514,25 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_directoriesCf, k.toSlice(), newDir.toSlice())); } - return 
EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveDirectoryInode(EggsTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { + TernError _applyRemoveDirectoryInode(TernTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { ALWAYS_ASSERT(entry.id.type() == InodeType::DIRECTORY); std::string dirValue; ExternalValue dir; { - EggsError err = _initiateDirectoryModification(time, true, batch, entry.id, dirValue, dir); - if (err == EggsError::DIRECTORY_NOT_FOUND) { - return EggsError::NO_ERROR; // we're already done + TernError err = _initiateDirectoryModification(time, true, batch, entry.id, dirValue, dir); + if (err == TernError::DIRECTORY_NOT_FOUND) { + return TernError::NO_ERROR; // we're already done } - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } } if (dir().ownerId() != NULL_INODE_ID) { - return EggsError::DIRECTORY_HAS_OWNER; + return TernError::DIRECTORY_HAS_OWNER; } // there can't be any outgoing edges when killing a directory definitively { @@ -2548,7 +2548,7 @@ struct ShardDBImpl { auto otherEdge = ExternalValue::FromSlice(it->key()); if (otherEdge().dirId() == entry.id) { LOG_DEBUG(_env, "found edge %s when trying to remove directory %s", otherEdge(), entry.id); - return EggsError::DIRECTORY_NOT_EMPTY; + return TernError::DIRECTORY_NOT_EMPTY; } } else if (it->status().IsNotFound()) { // nothing to do @@ -2562,10 +2562,10 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Delete(_directoriesCf, dirKey.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveFileInode(EggsTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { + TernError _applyRemoveFileInode(TernTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { ALWAYS_ASSERT(entry.id.type() == InodeType::FILE || entry.id.type() == InodeType::SYMLINK); // we demand for the file to be transient, for the deadline to have passed, and for it to have @@ -2573,29 +2573,29 @@ struct ShardDBImpl { { std::string transientFileValue; ExternalValue transientFile; - EggsError err = _getTransientFile({}, time, true /*allowPastDeadline*/, entry.id, transientFileValue, transientFile); - if (err == EggsError::FILE_NOT_FOUND) { + TernError err = _getTransientFile({}, time, true /*allowPastDeadline*/, entry.id, transientFileValue, transientFile); + if (err == TernError::FILE_NOT_FOUND) { std::string fileValue; ExternalValue file; - EggsError err = _getFile({}, entry.id, fileValue, file); - if (err == EggsError::NO_ERROR) { - return EggsError::FILE_IS_NOT_TRANSIENT; - } else if (err == EggsError::FILE_NOT_FOUND) { + TernError err = _getFile({}, entry.id, fileValue, file); + if (err == TernError::NO_ERROR) { + return TernError::FILE_IS_NOT_TRANSIENT; + } else if (err == TernError::FILE_NOT_FOUND) { // In this case the inode is just gone. The best thing to do is // to just be OK with it, since we need to handle repeated calls // nicely. 
- return EggsError::NO_ERROR; + return TernError::NO_ERROR; } else { return err; } - } else if (err == EggsError::NO_ERROR) { + } else if (err == TernError::NO_ERROR) { // keep going } else { return err; } // check deadline if (transientFile().deadline() >= time) { - return EggsError::DEADLINE_NOT_PASSED; + return TernError::DEADLINE_NOT_PASSED; } // check no spans { @@ -2608,7 +2608,7 @@ struct ShardDBImpl { if (it->Valid()) { auto otherSpan = ExternalValue::FromSlice(it->key()); if (otherSpan().fileId() == entry.id) { - return EggsError::FILE_NOT_EMPTY; + return TernError::FILE_NOT_EMPTY; } } else { ROCKS_DB_CHECKED(it->status()); @@ -2620,10 +2620,10 @@ struct ShardDBImpl { auto fileKey = InodeIdKey::Static(entry.id); ROCKS_DB_CHECKED(batch.Delete(_transientCf, fileKey.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveInode(EggsTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { + TernError _applyRemoveInode(TernTime time, rocksdb::WriteBatch& batch, const RemoveInodeEntry& entry, RemoveInodeResp& resp) { if (entry.id.type() == InodeType::DIRECTORY) { return _applyRemoveDirectoryInode(time, batch, entry, resp); } else { @@ -2631,12 +2631,12 @@ struct ShardDBImpl { } } - EggsError _applySetDirectoryOwner(EggsTime time, rocksdb::WriteBatch& batch, const SetDirectoryOwnerEntry& entry, SetDirectoryOwnerResp& resp) { + TernError _applySetDirectoryOwner(TernTime time, rocksdb::WriteBatch& batch, const SetDirectoryOwnerEntry& entry, SetDirectoryOwnerResp& resp) { std::string dirValue; ExternalValue dir; { - EggsError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } } @@ -2648,17 +2648,17 @@ struct ShardDBImpl { auto k = InodeIdKey::Static(entry.dirId); ROCKS_DB_CHECKED(batch.Put(_directoriesCf, k.toSlice(), dir.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySetDirectoryInfo(EggsTime time, rocksdb::WriteBatch& batch, const SetDirectoryInfoEntry& entry, SetDirectoryInfoResp& resp) { + TernError _applySetDirectoryInfo(TernTime time, rocksdb::WriteBatch& batch, const SetDirectoryInfoEntry& entry, SetDirectoryInfoResp& resp) { std::string dirValue; ExternalValue dir; { // allowSnapshot=true since we might want to influence deletion policies for already deleted // directories. 
- EggsError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModification(time, true, batch, entry.dirId, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } } @@ -2673,15 +2673,15 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_directoriesCf, k.toSlice(), newDir.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveNonOwnedEdge(EggsTime time, rocksdb::WriteBatch& batch, const RemoveNonOwnedEdgeEntry& entry, RemoveNonOwnedEdgeResp& resp) { + TernError _applyRemoveNonOwnedEdge(TernTime time, rocksdb::WriteBatch& batch, const RemoveNonOwnedEdgeEntry& entry, RemoveNonOwnedEdgeResp& resp) { uint64_t nameHash; { // allowSnapshot=true since GC needs to be able to remove non-owned edges from snapshot dir - EggsError err = _initiateDirectoryModificationAndHash(time, true, batch, entry.dirId, entry.name.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModificationAndHash(time, true, batch, entry.dirId, entry.name.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -2696,40 +2696,40 @@ struct ShardDBImpl { std::string edgeValue; auto status = _db->Get({}, _edgesCf, k.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::NO_ERROR; // make the client's life easier + return TernError::NO_ERROR; // make the client's life easier } ROCKS_DB_CHECKED(status); ExternalValue edge(edgeValue); if (edge().targetIdWithOwned().extra()) { // TODO better error here? - return EggsError::EDGE_NOT_FOUND; // unexpectedly owned + return TernError::EDGE_NOT_FOUND; // unexpectedly owned } // we can go ahead and safely delete ROCKS_DB_CHECKED(batch.Delete(_edgesCf, k.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySameShardHardFileUnlink(EggsTime time, rocksdb::WriteBatch& batch, const SameShardHardFileUnlinkEntry& entry, SameShardHardFileUnlinkResp& resp) { + TernError _applySameShardHardFileUnlink(TernTime time, rocksdb::WriteBatch& batch, const SameShardHardFileUnlinkEntry& entry, SameShardHardFileUnlinkResp& resp) { // fetch the file std::string fileValue; ExternalValue file; { - EggsError err = _getFile({}, entry.targetId, fileValue, file); - if (err == EggsError::FILE_NOT_FOUND) { + TernError err = _getFile({}, entry.targetId, fileValue, file); + if (err == TernError::FILE_NOT_FOUND) { // if the file is already transient, we're done std::string transientFileValue; ExternalValue transientFile; - EggsError err = _getTransientFile({}, time, true, entry.targetId, fileValue, transientFile); - if (err == EggsError::NO_ERROR) { - return EggsError::NO_ERROR; - } else if (err == EggsError::FILE_NOT_FOUND) { - return EggsError::FILE_NOT_FOUND; + TernError err = _getTransientFile({}, time, true, entry.targetId, fileValue, transientFile); + if (err == TernError::NO_ERROR) { + return TernError::NO_ERROR; + } else if (err == TernError::FILE_NOT_FOUND) { + return TernError::FILE_NOT_FOUND; } else { return err; } - } else if (err != EggsError::NO_ERROR) { + } else if (err != TernError::NO_ERROR) { return err; } } @@ -2740,7 +2740,7 @@ struct ShardDBImpl { std::string dirValue; ExternalValue dir; // allowSnapshot=true since GC needs to be able to do this in snapshot dirs - EggsError err = _initiateDirectoryModification(time, true, batch, entry.ownerId, dirValue, dir); + TernError err = _initiateDirectoryModification(time, true, batch, 
entry.ownerId, dirValue, dir); nameHash = EdgeKey::computeNameHash(dir().hashMode(), entry.name.ref()); } @@ -2757,12 +2757,12 @@ struct ShardDBImpl { std::string edgeValue; auto status = _db->Get({}, _edgesCf, k.toSlice(), &edgeValue); if (status.IsNotFound()) { - return EggsError::EDGE_NOT_FOUND; // can't return EggsError::NO_ERROR, since the transient file still exists + return TernError::EDGE_NOT_FOUND; // can't return TernError::NO_ERROR, since the transient file still exists } ROCKS_DB_CHECKED(status); ExternalValue edge(edgeValue); if (!edge().targetIdWithOwned().extra()) { // not owned - return EggsError::EDGE_NOT_FOUND; + return TernError::EDGE_NOT_FOUND; } // we can proceed ROCKS_DB_CHECKED(batch.Delete(_edgesCf, k.toSlice())); @@ -2782,15 +2782,15 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_transientCf, k.toSlice(), v.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveSpanInitiate(EggsTime time, rocksdb::WriteBatch& batch, const RemoveSpanInitiateEntry& entry, RemoveSpanInitiateResp& resp) { + TernError _applyRemoveSpanInitiate(TernTime time, rocksdb::WriteBatch& batch, const RemoveSpanInitiateEntry& entry, RemoveSpanInitiateResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _initiateTransientFileModification(time, true, batch, entry.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, true, batch, entry.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } } @@ -2800,7 +2800,7 @@ struct ShardDBImpl { // making sure there are no spans. if (file().fileSize() == 0) { LOG_DEBUG(_env, "exiting early from remove span since file is empty"); - return EggsError::FILE_EMPTY; + return TernError::FILE_EMPTY; } LOG_DEBUG(_env, "deleting span from file %s of size %s", entry.fileId, file().fileSize()); @@ -2831,7 +2831,7 @@ struct ShardDBImpl { auto k = InodeIdKey::Static(entry.fileId); ROCKS_DB_CHECKED(batch.Put(_transientCf, k.toSlice(), file.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } @@ -2863,7 +2863,7 @@ struct ShardDBImpl { } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } uint64_t _getNextBlockId() { @@ -2872,7 +2872,7 @@ struct ShardDBImpl { return ExternalValue(v)().u64(); } - uint64_t _updateNextBlockId(EggsTime time, uint64_t& nextBlockId) { + uint64_t _updateNextBlockId(TernTime time, uint64_t& nextBlockId) { // time is embedded into the id, other than LSB which is shard nextBlockId = std::max(nextBlockId + 0x100, _shid.u8 | (time.ns & ~0xFFull)); return nextBlockId; @@ -2909,19 +2909,19 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Merge(_blockServicesToFilesCf, k.toSlice(), v.toSlice())); } - EggsError _applyAddInlineSpan(EggsTime time, rocksdb::WriteBatch& batch, const AddInlineSpanEntry& entry, AddInlineSpanResp& resp) { + TernError _applyAddInlineSpan(TernTime time, rocksdb::WriteBatch& batch, const AddInlineSpanEntry& entry, AddInlineSpanResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } } // Special case -- for empty spans we have nothing to do if (entry.body.size() == 0) { - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } StaticValue spanKey; 
@@ -2944,7 +2944,7 @@ struct ShardDBImpl { auto status = _db->Get({}, _spansCf, spanKey.toSlice(), &spanValue); if (status.IsNotFound()) { LOG_DEBUG(_env, "file size does not match, but could not find existing span"); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue existingSpan(spanValue); @@ -2955,17 +2955,17 @@ struct ShardDBImpl { existingSpan().inlineBody() != entry.body ) { LOG_DEBUG(_env, "file size does not match, and existing span does not match"); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } LOG_DEBUG(_env, "expecting file size %s, but got %s, returning span not found", entry.byteOffset, file().fileSize()); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } // We're actually adding a new span -- the span state must be clean. if (file().lastSpanState() != SpanState::CLEAN) { - return EggsError::LAST_SPAN_STATE_NOT_CLEAN; + return TernError::LAST_SPAN_STATE_NOT_CLEAN; } // Update the file with the new file size, no need to set the thing to dirty since it's inline @@ -2983,16 +2983,16 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_spansCf, spanKey.toSlice(), spanBody.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyAddSpanInitiate(EggsTime time, rocksdb::WriteBatch& batch, const AddSpanAtLocationInitiateEntry& entry, AddSpanAtLocationInitiateResp& resp) { + TernError _applyAddSpanInitiate(TernTime time, rocksdb::WriteBatch& batch, const AddSpanAtLocationInitiateEntry& entry, AddSpanAtLocationInitiateResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } } @@ -3017,7 +3017,7 @@ struct ShardDBImpl { auto status = _db->Get({}, _spansCf, spanKey.toSlice(), &spanValue); if (status.IsNotFound()) { LOG_DEBUG(_env, "file size does not match, but could not find existing span"); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue existingSpan(spanValue); @@ -3032,18 +3032,18 @@ struct ShardDBImpl { existingSpan().blocksBodyReadOnly(0).location() != entry.locationId ) { LOG_DEBUG(_env, "file size does not match, and existing span does not match"); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } _fillInAddSpanInitiate(existingSpan().blocksBodyReadOnly(0), resp.resp); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } LOG_DEBUG(_env, "expecting file size %s, but got %s, returning span not found", entry.byteOffset, file().fileSize()); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } // We're actually adding a new span -- the span state must be clean. 
if (file().lastSpanState() != SpanState::CLEAN) { - return EggsError::LAST_SPAN_STATE_NOT_CLEAN; + return TernError::LAST_SPAN_STATE_NOT_CLEAN; } // Update the file with the new file size and set the last span state to dirty @@ -3082,7 +3082,7 @@ struct ShardDBImpl { // Fill in the response _fillInAddSpanInitiate(spanBody().blocksBodyReadOnly(0), resp.resp); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } std::array _blockWriteCertificate(uint32_t blockSize, const BlockBody block, const AES128Key& secretKey) { @@ -3145,12 +3145,12 @@ struct ShardDBImpl { return good; } - EggsError _applyAddSpanCertify(EggsTime time, rocksdb::WriteBatch& batch, const AddSpanCertifyEntry& entry, AddSpanCertifyResp& resp) { + TernError _applyAddSpanCertify(TernTime time, rocksdb::WriteBatch& batch, const AddSpanCertifyEntry& entry, AddSpanCertifyResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, false, batch, entry.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } } @@ -3165,36 +3165,36 @@ struct ShardDBImpl { std::string spanValue; auto status = _db->Get({}, _spansCf, spanKey.toSlice(), &spanValue); if (status.IsNotFound()) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue span(spanValue); // "Is the span still there" if (file().fileSize() > entry.byteOffset+span().spanSize()) { - return EggsError::NO_ERROR; // already certified (we're past it) + return TernError::NO_ERROR; // already certified (we're past it) } if (file().lastSpanState() == SpanState::CLEAN) { - return EggsError::NO_ERROR; // already certified + return TernError::NO_ERROR; // already certified } if (file().lastSpanState() == SpanState::CONDEMNED) { - return EggsError::SPAN_NOT_FOUND; // we could probably have a better error here + return TernError::SPAN_NOT_FOUND; // we could probably have a better error here } ALWAYS_ASSERT(file().lastSpanState() == SpanState::DIRTY); // Now verify the proofs if (span().isInlineStorage()) { - return EggsError::CANNOT_CERTIFY_BLOCKLESS_SPAN; + return TernError::CANNOT_CERTIFY_BLOCKLESS_SPAN; } ALWAYS_ASSERT(span().locationCount() == 1); auto blocks = span().blocksBodyReadOnly(0); if (blocks.parity().blocks() != entry.proofs.els.size()) { - return EggsError::BAD_NUMBER_OF_BLOCKS_PROOFS; + return TernError::BAD_NUMBER_OF_BLOCKS_PROOFS; } auto inMemoryBlockServiceData = _blockServicesCache.getCache(); BlockBody block; for (int i = 0; i < blocks.parity().blocks(); i++) { auto block = blocks.block(i); if (!_checkBlockAddProof(inMemoryBlockServiceData, block.blockService(), entry.proofs.els[i])) { - return EggsError::BAD_BLOCK_PROOF; + return TernError::BAD_BLOCK_PROOF; } } } @@ -3207,15 +3207,15 @@ struct ShardDBImpl { } // We're done. 
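The `_applyAddSpanCertify` path above is written to be idempotent: if the file has already grown past the span, or the last span is already CLEAN, the request succeeds with no further work; a CONDEMNED last span is reported as not found; only a DIRTY last span proceeds to block-proof verification. A condensed sketch of that decision, with placeholder names and without the proof checking itself:

    #include <cstdint>

    enum class SpanState { CLEAN, DIRTY, CONDEMNED };
    enum class CertifyOutcome { AlreadyCertified, SpanNotFound, VerifyProofs };

    // Mirrors the branch structure in _applyAddSpanCertify; the real code then
    // checks one write proof per block before leaving the span certified.
    CertifyOutcome certifyDecision(uint64_t fileSize, uint64_t spanOffset,
                                   uint64_t spanSize, SpanState lastSpanState) {
        if (fileSize > spanOffset + spanSize) return CertifyOutcome::AlreadyCertified; // already past it
        if (lastSpanState == SpanState::CLEAN) return CertifyOutcome::AlreadyCertified;
        if (lastSpanState == SpanState::CONDEMNED) return CertifyOutcome::SpanNotFound;
        return CertifyOutcome::VerifyProofs; // DIRTY
    }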
- return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyAddSpanLocation(EggsTime time, rocksdb::WriteBatch& batch, const AddSpanLocationEntry& entry, AddSpanLocationResp& resp) { + TernError _applyAddSpanLocation(TernTime time, rocksdb::WriteBatch& batch, const AddSpanLocationEntry& entry, AddSpanLocationResp& resp) { std::string destinationFileValue; ExternalValue destinationFile; { - EggsError err = _getFile({}, entry.fileId2, destinationFileValue, destinationFile); - if (err != EggsError::NO_ERROR) { + TernError err = _getFile({}, entry.fileId2, destinationFileValue, destinationFile); + if (err != TernError::NO_ERROR) { return err; } } @@ -3223,23 +3223,23 @@ struct ShardDBImpl { std::string sourceFileValue; ExternalValue sourceFile; { - EggsError err = _initiateTransientFileModification(time, false, batch, entry.fileId1, sourceFileValue, sourceFile); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, false, batch, entry.fileId1, sourceFileValue, sourceFile); + if (err != TernError::NO_ERROR) { return err; } } if (sourceFile().lastSpanState() != SpanState::CLEAN) { - return EggsError::LAST_SPAN_STATE_NOT_CLEAN; + return TernError::LAST_SPAN_STATE_NOT_CLEAN; } StaticValue destinationSpanKey; std::string destinationSpanValue; ExternalValue destinationSpan; if (!_fetchSpan(entry.fileId2, entry.byteOffset2, destinationSpanKey, destinationSpanValue, destinationSpan)) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } if (destinationSpan().isInlineStorage()) { - return EggsError::ADD_SPAN_LOCATION_INLINE_STORAGE; + return TernError::ADD_SPAN_LOCATION_INLINE_STORAGE; } @@ -3251,33 +3251,33 @@ struct ShardDBImpl { uint8_t locIdx = destinationSpan().findBlocksLocIdx(entry.blocks1.els); if (locIdx != SpanBody::INVALID_LOCATION_IDX) { // the blocks are already there return no error for idempotency - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } if (sourceSpan().isInlineStorage()) { - return EggsError::SWAP_SPANS_INLINE_STORAGE; + return TernError::SWAP_SPANS_INLINE_STORAGE; } // check that size and crc is the same if (sourceSpan().spanSize() != destinationSpan().spanSize()) { - return EggsError::ADD_SPAN_LOCATION_MISMATCHING_SIZE; + return TernError::ADD_SPAN_LOCATION_MISMATCHING_SIZE; } if (sourceSpan().crc() != destinationSpan().crc()) { - return EggsError::ADD_SPAN_LOCATION_MISMATCHING_CRC; + return TernError::ADD_SPAN_LOCATION_MISMATCHING_CRC; } // Fetch span state auto state1 = _fetchSpanState(time, entry.fileId1, entry.byteOffset1 + sourceSpan().size()); if (state1 != SpanState::CLEAN) { - return EggsError::ADD_SPAN_LOCATION_NOT_CLEAN; + return TernError::ADD_SPAN_LOCATION_NOT_CLEAN; } // we should only be adding one location if (sourceSpan().locationCount() != 1) { - return EggsError::TRANSIENT_LOCATION_COUNT; + return TernError::TRANSIENT_LOCATION_COUNT; } auto blocksSource = sourceSpan().blocksBodyReadOnly(0); @@ -3288,7 +3288,7 @@ struct ShardDBImpl { if (blocksDestination.location() != blocksSource.location()) { continue; } - return EggsError::ADD_SPAN_LOCATION_EXISTS; + return TernError::ADD_SPAN_LOCATION_EXISTS; } // we're ready to move location, first do the blocks bookkeeping @@ -3313,24 +3313,24 @@ struct ShardDBImpl { // change size and dirtiness - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyMakeFileTransient(EggsTime time, rocksdb::WriteBatch& batch, const 
MakeFileTransientEntry& entry, MakeFileTransientResp& resp) { + TernError _applyMakeFileTransient(TernTime time, rocksdb::WriteBatch& batch, const MakeFileTransientEntry& entry, MakeFileTransientResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _getFile({}, entry.id, fileValue, file); - if (err == EggsError::FILE_NOT_FOUND) { + TernError err = _getFile({}, entry.id, fileValue, file); + if (err == TernError::FILE_NOT_FOUND) { // if it's already transient, we're done std::string transientFileValue; ExternalValue transientFile; - EggsError err = _getTransientFile({}, time, true, entry.id, transientFileValue, transientFile); - if (err == EggsError::NO_ERROR) { - return EggsError::NO_ERROR; + TernError err = _getTransientFile({}, time, true, entry.id, transientFileValue, transientFile); + if (err == TernError::NO_ERROR) { + return TernError::NO_ERROR; } } - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } } @@ -3349,14 +3349,14 @@ struct ShardDBImpl { transientFile().setNoteDangerous(entry.note.ref()); ROCKS_DB_CHECKED(batch.Put(_transientCf, k.toSlice(), transientFile.toSlice())); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyScrapTransientFile(EggsTime time, rocksdb::WriteBatch& batch, const ScrapTransientFileEntry& entry, ScrapTransientFileResp& resp) { + TernError _applyScrapTransientFile(TernTime time, rocksdb::WriteBatch& batch, const ScrapTransientFileEntry& entry, ScrapTransientFileResp& resp) { std::string transientValue; ExternalValue transientBody; - EggsError err = _getTransientFile({}, time, true, entry.id, transientValue, transientBody); - if (err != EggsError::NO_ERROR) { + TernError err = _getTransientFile({}, time, true, entry.id, transientValue, transientBody); + if (err != TernError::NO_ERROR) { return err; } @@ -3365,15 +3365,15 @@ struct ShardDBImpl { auto k = InodeIdKey::Static(entry.id); ROCKS_DB_CHECKED(batch.Put(_transientCf, k.toSlice(), transientBody.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveSpanCertify(EggsTime time, rocksdb::WriteBatch& batch, const RemoveSpanCertifyEntry& entry, RemoveSpanCertifyResp& resp) { + TernError _applyRemoveSpanCertify(TernTime time, rocksdb::WriteBatch& batch, const RemoveSpanCertifyEntry& entry, RemoveSpanCertifyResp& resp) { std::string fileValue; ExternalValue file; { - EggsError err = _initiateTransientFileModification(time, true, batch, entry.fileId, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, true, batch, entry.fileId, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } } @@ -3388,19 +3388,19 @@ struct ShardDBImpl { auto status = _db->Get({}, _spansCf, spanKey.toSlice(), &spanValue); if (status.IsNotFound()) { LOG_DEBUG(_env, "skipping removal of span for file %s, offset %s, since we're already done", entry.fileId, entry.byteOffset); - return EggsError::NO_ERROR; // already done + return TernError::NO_ERROR; // already done } ROCKS_DB_CHECKED(status); span = ExternalValue(spanValue); } if (span().isInlineStorage()) { - return EggsError::CANNOT_CERTIFY_BLOCKLESS_SPAN; + return TernError::CANNOT_CERTIFY_BLOCKLESS_SPAN; } // Make sure we're condemned if (file().lastSpanState() != SpanState::CONDEMNED) { - return EggsError::SPAN_NOT_FOUND; // TODO maybe better error? + return TernError::SPAN_NOT_FOUND; // TODO maybe better error? 
} // Verify proofs @@ -3408,7 +3408,7 @@ struct ShardDBImpl { for (uint8_t i = 0; i < span().locationCount(); ++i) { auto blocks = span().blocksBodyReadOnly(i); if (entry.proofs.els.size() - entryBlockIdx < blocks.parity().blocks()) { - return EggsError::BAD_NUMBER_OF_BLOCKS_PROOFS; + return TernError::BAD_NUMBER_OF_BLOCKS_PROOFS; } { auto inMemoryBlockServiceData = _blockServicesCache.getCache(); @@ -3417,10 +3417,10 @@ struct ShardDBImpl { const auto& proof = entry.proofs.els[entryBlockIdx++]; if (block.blockId() != proof.blockId) { RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "bad block proof id for file %s, expected %s, got %s", entry.fileId, block.blockId(), proof.blockId); - return EggsError::BAD_BLOCK_PROOF; + return TernError::BAD_BLOCK_PROOF; } if (!_checkBlockDeleteProof(inMemoryBlockServiceData, entry.fileId, block.blockService(), proof)) { - return EggsError::BAD_BLOCK_PROOF; + return TernError::BAD_BLOCK_PROOF; } // record balance change in block service to files _addBlockServicesToFiles(batch, block.blockService(), entry.fileId, -1); @@ -3428,7 +3428,7 @@ struct ShardDBImpl { } } if (entryBlockIdx != entry.proofs.els.size()) { - return EggsError::BAD_NUMBER_OF_BLOCKS_PROOFS; + return TernError::BAD_NUMBER_OF_BLOCKS_PROOFS; } // Delete span, set new size, and go back to clean state @@ -3441,15 +3441,15 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Put(_transientCf, k.toSlice(), file.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveOwnedSnapshotFileEdge(EggsTime time, rocksdb::WriteBatch& batch, const RemoveOwnedSnapshotFileEdgeEntry& entry, RemoveOwnedSnapshotFileEdgeResp& resp) { + TernError _applyRemoveOwnedSnapshotFileEdge(TernTime time, rocksdb::WriteBatch& batch, const RemoveOwnedSnapshotFileEdgeEntry& entry, RemoveOwnedSnapshotFileEdgeResp& resp) { uint64_t nameHash; { // the GC needs to work on deleted dirs who might still have owned files, so allowSnapshot=true - EggsError err = _initiateDirectoryModificationAndHash(time, true, batch, entry.ownerId, entry.name.ref(), nameHash); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateDirectoryModificationAndHash(time, true, batch, entry.ownerId, entry.name.ref(), nameHash); + if (err != TernError::NO_ERROR) { return err; } } @@ -3463,7 +3463,7 @@ struct ShardDBImpl { ROCKS_DB_CHECKED(batch.Delete(_edgesCf, edgeKey.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } bool _fetchSpan(InodeId fileId, uint64_t byteOffset, StaticValue& spanKey, std::string& spanValue, ExternalValue& span) { @@ -3479,19 +3479,19 @@ struct ShardDBImpl { return true; } - SpanState _fetchSpanState(EggsTime time, InodeId fileId, uint64_t spanEnd) { + SpanState _fetchSpanState(TernTime time, InodeId fileId, uint64_t spanEnd) { // See if it's a normal file first std::string fileValue; ExternalValue file; auto err = _getFile({}, fileId, fileValue, file); - ALWAYS_ASSERT(err == EggsError::NO_ERROR || err == EggsError::FILE_NOT_FOUND); - if (err == EggsError::NO_ERROR) { + ALWAYS_ASSERT(err == TernError::NO_ERROR || err == TernError::FILE_NOT_FOUND); + if (err == TernError::NO_ERROR) { return SpanState::CLEAN; } // couldn't find normal file, must be transient ExternalValue transientFile; err = _getTransientFile({}, time, true, fileId, fileValue, transientFile); - ALWAYS_ASSERT(err == EggsError::NO_ERROR); + ALWAYS_ASSERT(err == TernError::NO_ERROR); if (spanEnd == transientFile().fileSize()) { return transientFile().lastSpanState(); } else { @@ -3499,22 +3499,22 @@ 
struct ShardDBImpl { } } - EggsError _applySwapBlocks(EggsTime time, rocksdb::WriteBatch& batch, const SwapBlocksEntry& entry, SwapBlocksResp& resp) { + TernError _applySwapBlocks(TernTime time, rocksdb::WriteBatch& batch, const SwapBlocksEntry& entry, SwapBlocksResp& resp) { // Fetch spans StaticValue span1Key; std::string span1Value; ExternalValue span1; if (!_fetchSpan(entry.fileId1, entry.byteOffset1, span1Key, span1Value, span1)) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } StaticValue span2Key; std::string span2Value; ExternalValue span2; if (!_fetchSpan(entry.fileId2, entry.byteOffset2, span2Key, span2Value, span2)) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } if (span1().isInlineStorage() || span2().isInlineStorage()) { - return EggsError::SWAP_BLOCKS_INLINE_STORAGE; + return TernError::SWAP_BLOCKS_INLINE_STORAGE; } // Fetch span state @@ -3522,7 +3522,7 @@ struct ShardDBImpl { auto state2 = _fetchSpanState(time, entry.fileId2, entry.byteOffset2 + span2().size()); // We don't want to put not-certified blocks in clean spans, or similar if (state1 != state2) { - return EggsError::SWAP_BLOCKS_MISMATCHING_STATE; + return TernError::SWAP_BLOCKS_MISMATCHING_STATE; } // Find blocks const auto findBlock = [](const SpanBody& span, uint64_t blockId, BlockBody& block) -> std::pair { @@ -3546,23 +3546,23 @@ struct ShardDBImpl { // if neither are found, check if we haven't swapped already, for idempotency if (block1Ix.first < 0 && block2Ix.first < 0) { if (findBlock(span1(), entry.blockId2, block2).first >= 0 && findBlock(span2(), entry.blockId1, block1).first >= 0) { - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } } - return EggsError::BLOCK_NOT_FOUND; + return TernError::BLOCK_NOT_FOUND; } auto blocks1 = span1().blocksBodyReadOnly(block1Ix.first); auto blocks2 = span2().blocksBodyReadOnly(block2Ix.first); uint32_t blockSize1 = blocks1.cellSize()*blocks1.stripes(); uint32_t blockSize2 = blocks2.cellSize()*blocks2.stripes(); if (blockSize1 != blockSize2) { - return EggsError::SWAP_BLOCKS_MISMATCHING_SIZE; + return TernError::SWAP_BLOCKS_MISMATCHING_SIZE; } if (block1.crc() != block2.crc()) { - return EggsError::SWAP_BLOCKS_MISMATCHING_CRC; + return TernError::SWAP_BLOCKS_MISMATCHING_CRC; } if (blocks1.location() != blocks2.location()) { - return EggsError::SWAP_BLOCKS_MISMATCHING_LOCATION; + return TernError::SWAP_BLOCKS_MISMATCHING_LOCATION; } auto blockServiceCache = _blockServicesCache.getCache(); @@ -3575,22 +3575,22 @@ struct ShardDBImpl { } const auto block = blocks.block(i); if (block.blockService() == newBlock.blockService()) { - return EggsError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE; + return TernError::SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE; } if (newFailureDomain == blockServiceCache.blockServices.at(block.blockService().u64).failureDomain) { - return EggsError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN; + return TernError::SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN; } } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; }; { - EggsError err = checkNoDuplicateBlockServicesOrFailureDomains(blocks1, block1Ix.second, block2); - if (err != EggsError::NO_ERROR) { + TernError err = checkNoDuplicateBlockServicesOrFailureDomains(blocks1, block1Ix.second, block2); + if (err != TernError::NO_ERROR) { return err; } err = checkNoDuplicateBlockServicesOrFailureDomains(blocks2, block2Ix.second, block1); - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { return err; } } @@ -3604,24 +3604,24 @@ struct 
ShardDBImpl { swapBlocks(block1, block2); ROCKS_DB_CHECKED(batch.Put(_spansCf, span1Key.toSlice(), span1.toSlice())); ROCKS_DB_CHECKED(batch.Put(_spansCf, span2Key.toSlice(), span2.toSlice())); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyMoveSpan(EggsTime time, rocksdb::WriteBatch& batch, const MoveSpanEntry& entry, MoveSpanResp& resp) { + TernError _applyMoveSpan(TernTime time, rocksdb::WriteBatch& batch, const MoveSpanEntry& entry, MoveSpanResp& resp) { // fetch files std::string transientValue1; ExternalValue transientFile1; { - EggsError err = _initiateTransientFileModification(time, true, batch, entry.fileId1, transientValue1, transientFile1); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, true, batch, entry.fileId1, transientValue1, transientFile1); + if (err != TernError::NO_ERROR) { return err; } } std::string transientValue2; ExternalValue transientFile2; { - EggsError err = _initiateTransientFileModification(time, true, batch, entry.fileId2, transientValue2, transientFile2); - if (err != EggsError::NO_ERROR) { + TernError err = _initiateTransientFileModification(time, true, batch, entry.fileId2, transientValue2, transientFile2); + if (err != TernError::NO_ERROR) { return err; } } @@ -3632,7 +3632,7 @@ struct ShardDBImpl { transientFile1().fileSize() == entry.byteOffset1 && transientFile1().lastSpanState() == SpanState::CLEAN && transientFile2().fileSize() == entry.byteOffset2 + entry.spanSize && transientFile2().lastSpanState() == SpanState::DIRTY ) { - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } if ( transientFile1().lastSpanState() != SpanState::DIRTY || @@ -3641,7 +3641,7 @@ struct ShardDBImpl { transientFile2().fileSize() != entry.byteOffset2 ) { LOG_DEBUG(_env, "span not found because of offset checks"); - return EggsError::SPAN_NOT_FOUND; // TODO better error? + return TernError::SPAN_NOT_FOUND; // TODO better error? 
} // fetch span to move StaticValue spanKey; @@ -3651,14 +3651,14 @@ struct ShardDBImpl { auto status = _db->Get({}, _spansCf, spanKey.toSlice(), &spanValue); if (status.IsNotFound()) { LOG_DEBUG(_env, "span not found in db (this should probably never happen)"); - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } ROCKS_DB_CHECKED(status); ExternalValue span(spanValue); ExternalValue spanBody(spanValue); if (spanBody().spanSize() != entry.spanSize) { LOG_DEBUG(_env, "span not found because of differing sizes"); - return EggsError::SPAN_NOT_FOUND; // TODO better error + return TernError::SPAN_NOT_FOUND; // TODO better error } // move span ROCKS_DB_CHECKED(batch.Delete(_spansCf, spanKey.toSlice())); @@ -3687,38 +3687,38 @@ struct ShardDBImpl { _addBlockServicesToFiles(batch, block.blockService(), entry.fileId2, +1); } // we're done - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySwapSpans(EggsTime time, rocksdb::WriteBatch& batch, const SwapSpansEntry& entry, SwapSpansResp& resp) { + TernError _applySwapSpans(TernTime time, rocksdb::WriteBatch& batch, const SwapSpansEntry& entry, SwapSpansResp& resp) { StaticValue span1Key; std::string span1Value; ExternalValue span1; if (!_fetchSpan(entry.fileId1, entry.byteOffset1, span1Key, span1Value, span1)) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } StaticValue span2Key; std::string span2Value; ExternalValue span2; if (!_fetchSpan(entry.fileId2, entry.byteOffset2, span2Key, span2Value, span2)) { - return EggsError::SPAN_NOT_FOUND; + return TernError::SPAN_NOT_FOUND; } if (span1().isInlineStorage() || span2().isInlineStorage()) { - return EggsError::SWAP_SPANS_INLINE_STORAGE; + return TernError::SWAP_SPANS_INLINE_STORAGE; } // check that size and crc is the same if (span1().spanSize() != span2().spanSize()) { - return EggsError::SWAP_SPANS_MISMATCHING_SIZE; + return TernError::SWAP_SPANS_MISMATCHING_SIZE; } if (span1().crc() != span2().crc()) { - return EggsError::SWAP_SPANS_MISMATCHING_CRC; + return TernError::SWAP_SPANS_MISMATCHING_CRC; } // Fetch span state auto state1 = _fetchSpanState(time, entry.fileId1, entry.byteOffset1 + span1().size()); auto state2 = _fetchSpanState(time, entry.fileId2, entry.byteOffset2 + span2().size()); if (state1 != SpanState::CLEAN || state2 != SpanState::CLEAN) { - return EggsError::SWAP_SPANS_NOT_CLEAN; + return TernError::SWAP_SPANS_NOT_CLEAN; } // check if we've already swapped const auto blocksMatch = [](const SpanBody& span, const BincodeList& blocks) { @@ -3733,10 +3733,10 @@ struct ShardDBImpl { return true; }; if (blocksMatch(span1(), entry.blocks2) && blocksMatch(span2(), entry.blocks1)) { - return EggsError::NO_ERROR; // we're already done + return TernError::NO_ERROR; // we're already done } if (!(blocksMatch(span1(), entry.blocks1) && blocksMatch(span2(), entry.blocks2))) { - return EggsError::SWAP_SPANS_MISMATCHING_BLOCKS; + return TernError::SWAP_SPANS_MISMATCHING_BLOCKS; } // we're ready to swap, first do the blocks bookkeeping const auto adjustBlockServices = [this, &batch](const SpanBody& span, InodeId addTo, InodeId subtractFrom) { @@ -3754,19 +3754,19 @@ struct ShardDBImpl { // now do the swap ROCKS_DB_CHECKED(batch.Put(_spansCf, span1Key.toSlice(), span2.toSlice())); ROCKS_DB_CHECKED(batch.Put(_spansCf, span2Key.toSlice(), span1.toSlice())); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applySetTime(EggsTime time, rocksdb::WriteBatch& batch, const SetTimeEntry& entry, SetTimeResp& resp) { 
+ TernError _applySetTime(TernTime time, rocksdb::WriteBatch& batch, const SetTimeEntry& entry, SetTimeResp& resp) { std::string fileValue; ExternalValue file; - EggsError err = _getFile({}, entry.id, fileValue, file); - if (err != EggsError::NO_ERROR) { + TernError err = _getFile({}, entry.id, fileValue, file); + if (err != TernError::NO_ERROR) { return err; } - const auto set = [&file](uint64_t entryT, void (FileBody::*setTime)(EggsTime t)) { + const auto set = [&file](uint64_t entryT, void (FileBody::*setTime)(TernTime t)) { if (entryT & (1ull<<63)) { - EggsTime t = entryT & ~(1ull<<63); + TernTime t = entryT & ~(1ull<<63); (file().*setTime)(t); } }; @@ -3776,10 +3776,10 @@ struct ShardDBImpl { auto fileKey = InodeIdKey::Static(entry.id); ROCKS_DB_CHECKED(batch.Put(_filesCf, fileKey.toSlice(), file.toSlice())); } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _applyRemoveZeroBlockServiceFiles(EggsTime time, rocksdb::WriteBatch& batch, const RemoveZeroBlockServiceFilesEntry& entry, RemoveZeroBlockServiceFilesResp& resp) { + TernError _applyRemoveZeroBlockServiceFiles(TernTime time, rocksdb::WriteBatch& batch, const RemoveZeroBlockServiceFilesEntry& entry, RemoveZeroBlockServiceFilesResp& resp) { // Max number of entries we'll look at, otherwise each req will spend tons of time // iterating. int maxEntries = 1'000; @@ -3816,7 +3816,7 @@ struct ShardDBImpl { resp.nextBlockService = 0; resp.nextFile = NULL_INODE_ID; } - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } void applyLogEntry(uint64_t logIndex, const ShardLogEntry& logEntry, ShardRespContainer& resp) { @@ -3826,7 +3826,7 @@ struct ShardDBImpl { LOG_DEBUG(_env, "applying log at index %s", logIndex); auto locked = _applyLogEntryLock.lock(); resp.clear(); - auto err = EggsError::NO_ERROR; + auto err = TernError::NO_ERROR; rocksdb::WriteBatch batch; _advanceLastAppliedLogEntry(batch, logIndex); @@ -3839,7 +3839,7 @@ struct ShardDBImpl { batch.SetSavePoint(); std::string entryScratch; - EggsTime time = logEntry.time; + TernTime time = logEntry.time; const auto& logEntryBody = logEntry.body; LOG_TRACE(_env, "about to apply log entry %s", logEntryBody); @@ -3968,10 +3968,10 @@ struct ShardDBImpl { err = _applyRemoveZeroBlockServiceFiles(time, batch, logEntryBody.getRemoveZeroBlockServiceFiles(), resp.setRemoveZeroBlockServiceFiles()); break; default: - throw EGGS_EXCEPTION("bad log entry kind %s", logEntryBody.kind()); + throw TERN_EXCEPTION("bad log entry kind %s", logEntryBody.kind()); } - if (err != EggsError::NO_ERROR) { + if (err != TernError::NO_ERROR) { resp.setError() = err; LOG_DEBUG(_env, "could not apply log entry %s, index %s, because of err %s", logEntryBody.kind(), logIndex, err); batch.RollbackToSavePoint(); @@ -3996,75 +3996,75 @@ struct ShardDBImpl { return v().u64(); } - EggsError _getDirectory(const rocksdb::ReadOptions& options, InodeId id, bool allowSnapshot, std::string& dirValue, ExternalValue& dir) { + TernError _getDirectory(const rocksdb::ReadOptions& options, InodeId id, bool allowSnapshot, std::string& dirValue, ExternalValue& dir) { if (unlikely(id.type() != InodeType::DIRECTORY)) { - return EggsError::TYPE_IS_NOT_DIRECTORY; + return TernError::TYPE_IS_NOT_DIRECTORY; } auto k = InodeIdKey::Static(id); auto status = _db->Get(options, _directoriesCf, k.toSlice(), &dirValue); if (status.IsNotFound()) { - return EggsError::DIRECTORY_NOT_FOUND; + return TernError::DIRECTORY_NOT_FOUND; } ROCKS_DB_CHECKED(status); auto tmpDir = ExternalValue(dirValue); if (!allowSnapshot && 
(tmpDir().ownerId() == NULL_INODE_ID && id != ROOT_DIR_INODE_ID)) { // root dir never has an owner - return EggsError::DIRECTORY_NOT_FOUND; + return TernError::DIRECTORY_NOT_FOUND; } dir = tmpDir; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _getDirectoryAndHash(const rocksdb::ReadOptions& options, InodeId id, bool allowSnapshot, const BincodeBytesRef& name, uint64_t& nameHash) { + TernError _getDirectoryAndHash(const rocksdb::ReadOptions& options, InodeId id, bool allowSnapshot, const BincodeBytesRef& name, uint64_t& nameHash) { std::string dirValue; ExternalValue dir; - EggsError err = _getDirectory(options, id, allowSnapshot, dirValue, dir); - if (err != EggsError::NO_ERROR) { + TernError err = _getDirectory(options, id, allowSnapshot, dirValue, dir); + if (err != TernError::NO_ERROR) { return err; } nameHash = EdgeKey::computeNameHash(dir().hashMode(), name); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _getFile(const rocksdb::ReadOptions& options, InodeId id, std::string& fileValue, ExternalValue& file) { + TernError _getFile(const rocksdb::ReadOptions& options, InodeId id, std::string& fileValue, ExternalValue& file) { if (unlikely(id.type() != InodeType::FILE && id.type() != InodeType::SYMLINK)) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } auto k = InodeIdKey::Static(id); auto status = _db->Get(options, _filesCf, k.toSlice(), &fileValue); if (status.IsNotFound()) { - return EggsError::FILE_NOT_FOUND; + return TernError::FILE_NOT_FOUND; } ROCKS_DB_CHECKED(status); file = ExternalValue(fileValue); - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _getTransientFile(const rocksdb::ReadOptions& options, EggsTime time, bool allowPastDeadline, InodeId id, std::string& value, ExternalValue& file) { + TernError _getTransientFile(const rocksdb::ReadOptions& options, TernTime time, bool allowPastDeadline, InodeId id, std::string& value, ExternalValue& file) { if (id.type() != InodeType::FILE && id.type() != InodeType::SYMLINK) { - return EggsError::TYPE_IS_DIRECTORY; + return TernError::TYPE_IS_DIRECTORY; } auto k = InodeIdKey::Static(id); auto status = _db->Get(options, _transientCf, k.toSlice(), &value); if (status.IsNotFound()) { - return EggsError::FILE_NOT_FOUND; + return TernError::FILE_NOT_FOUND; } ROCKS_DB_CHECKED(status); auto tmpFile = ExternalValue(value); if (!allowPastDeadline && time > tmpFile().deadline()) { // this should be fairly uncommon LOG_INFO(_env, "not picking up transient file %s since its deadline %s is past the log entry time %s", id, tmpFile().deadline(), time); - return EggsError::FILE_NOT_FOUND; + return TernError::FILE_NOT_FOUND; } file = tmpFile; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } - EggsError _initiateTransientFileModification( - EggsTime time, bool allowPastDeadline, rocksdb::WriteBatch& batch, InodeId id, std::string& tfValue, ExternalValue& tf + TernError _initiateTransientFileModification( + TernTime time, bool allowPastDeadline, rocksdb::WriteBatch& batch, InodeId id, std::string& tfValue, ExternalValue& tf ) { ExternalValue tmpTf; - EggsError err = _getTransientFile({}, time, allowPastDeadline, id, tfValue, tmpTf); - if (err != EggsError::NO_ERROR) { + TernError err = _getTransientFile({}, time, allowPastDeadline, id, tfValue, tmpTf); + if (err != TernError::NO_ERROR) { return err; } @@ -4072,7 +4072,7 @@ struct ShardDBImpl { // with directories, but still seems good hygiene. 
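A little earlier in this file, the `_applySetTime` handler decodes mtime/atime from the request with a top-bit convention: a set high bit means "update this timestamp to the low 63 bits", a clear high bit means "leave the field unchanged". A small sketch of that encoding; the helper names below are placeholders, not part of the codebase:

    #include <cstdint>

    // Bit 63 flags "set this timestamp", bits 0..62 carry the time in nanoseconds.
    constexpr uint64_t kSetTimeFlag = 1ull << 63;
    constexpr uint64_t kLeaveUnchanged = 0; // high bit clear: field is ignored

    inline uint64_t encodeSetTime(uint64_t ns) { return kSetTimeFlag | (ns & ~kSetTimeFlag); }
    inline bool shouldSetTime(uint64_t encoded) { return (encoded & kSetTimeFlag) != 0; }
    inline uint64_t decodeTimeNs(uint64_t encoded) { return encoded & ~kSetTimeFlag; }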
if (tmpTf().mtime() >= time) { RAISE_ALERT_APP_TYPE(_env, XmonAppType::DAYTIME, "trying to modify transient file %s going backwards in time, file mtime is %s, log entry time is %s", id, tmpTf().mtime(), time); - return EggsError::MTIME_IS_TOO_RECENT; + return TernError::MTIME_IS_TOO_RECENT; } tmpTf().setMtime(time); @@ -4087,7 +4087,7 @@ struct ShardDBImpl { } tf = tmpTf; - return EggsError::NO_ERROR; + return TernError::NO_ERROR; } std::shared_ptr _getCurrentReadSnapshot() { @@ -4210,13 +4210,13 @@ bool readOnlyShardReq(const ShardMessageKind kind) { case ShardMessageKind::SCRAP_TRANSIENT_FILE: return false; case ShardMessageKind::ERROR: - throw EGGS_EXCEPTION("unexpected ERROR shard message kind"); + throw TERN_EXCEPTION("unexpected ERROR shard message kind"); case ShardMessageKind::EMPTY: - throw EGGS_EXCEPTION("unexpected EMPTY shard message kind"); + throw TERN_EXCEPTION("unexpected EMPTY shard message kind"); break; } - throw EGGS_EXCEPTION("bad message kind %s", kind); + throw TERN_EXCEPTION("bad message kind %s", kind); } ShardDB::ShardDB(Logger& logger, std::shared_ptr& agent, ShardId shid, uint8_t location, Duration deadlineInterval, const SharedRocksDB& sharedDB, const BlockServicesCacheDB& blockServicesCache) { @@ -4236,7 +4236,7 @@ uint64_t ShardDB::read(const ShardReqContainer& req, ShardRespContainer& resp) { return ((ShardDBImpl*)_impl)->read(req, resp); } -EggsError ShardDB::prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry) { +TernError ShardDB::prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry) { return ((ShardDBImpl*)_impl)->prepareLogEntry(req, logEntry); } diff --git a/cpp/shard/ShardDB.hpp b/cpp/shard/ShardDB.hpp index 74d4d18b..badf1279 100644 --- a/cpp/shard/ShardDB.hpp +++ b/cpp/shard/ShardDB.hpp @@ -13,7 +13,7 @@ struct ShardLogEntry { LogIdx idx; - EggsTime time; + TernTime time; ShardLogEntryContainer body; bool operator==(const ShardLogEntry& rhs) const { @@ -81,7 +81,7 @@ public: // for some span request or something like that). // // As usual, if an error is returned, the contents of `logEntry` should be ignored. 
- EggsError prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry); + TernError prepareLogEntry(const ShardReqContainer& req, ShardLogEntry& logEntry); // The index of the last log entry persisted to the DB uint64_t lastAppliedLogEntry(); diff --git a/cpp/shard/ShardDBData.cpp b/cpp/shard/ShardDBData.cpp index 1d2dba72..03a46ed8 100644 --- a/cpp/shard/ShardDBData.cpp +++ b/cpp/shard/ShardDBData.cpp @@ -34,6 +34,6 @@ uint64_t EdgeKey::computeNameHash(HashMode mode, const BincodeBytesRef& bytes) { case HashMode::XXH3_63: return XXH3_64bits(bytes.data(), bytes.size()) & ~(1ull<<63); default: - throw EGGS_EXCEPTION("bad hash mode %s", (int)mode); + throw TERN_EXCEPTION("bad hash mode %s", (int)mode); } } diff --git a/cpp/shard/ShardDBData.hpp b/cpp/shard/ShardDBData.hpp index e04845e5..c6b6f15f 100644 --- a/cpp/shard/ShardDBData.hpp +++ b/cpp/shard/ShardDBData.hpp @@ -51,8 +51,8 @@ struct TransientFileBody { FIELDS( LE, uint8_t, version, setVersion, LE, uint64_t, fileSize, setFileSize, - LE, EggsTime, mtime, setMtime, - LE, EggsTime, deadline, setDeadline, + LE, TernTime, mtime, setMtime, + LE, TernTime, deadline, setDeadline, LE, SpanState, lastSpanState, setLastSpanState, EMIT_OFFSET, STATIC_SIZE, BYTES, note, setNoteDangerous, // dangerous because we might not have enough space @@ -75,8 +75,8 @@ struct FileBody { FIELDS( LE, uint8_t, version, setVersion, LE, uint64_t, fileSize, setFileSize, - LE, EggsTime, mtime, setMtime, - LE, EggsTime, atime, setAtime, + LE, TernTime, mtime, setMtime, + LE, TernTime, atime, setAtime, END_STATIC ) }; @@ -529,7 +529,7 @@ struct DirectoryBody { FIELDS( LE, uint8_t, version, setVersion, LE, InodeId, ownerId, setOwnerId, - LE, EggsTime, mtime, setMtime, + LE, TernTime, mtime, setMtime, LE, HashMode, hashMode, setHashMode, LE, uint16_t, infoLength, setInfoLength, EMIT_OFFSET, MIN_SIZE, @@ -571,7 +571,7 @@ struct EdgeKey { BYTES, name, setName, // only present for snapshot edges -- current edges have // the creation time in the body. 
- BE, EggsTime, creationTimeUnchecked, setCreationTimeUnchecked, + BE, TernTime, creationTimeUnchecked, setCreationTimeUnchecked, END ) @@ -579,12 +579,12 @@ struct EdgeKey { STATIC_SIZE + sizeof(uint8_t); // nameLength // max name size, and an optional creation time if current=false - static constexpr size_t MAX_SIZE = MIN_SIZE + 255 + sizeof(EggsTime); + static constexpr size_t MAX_SIZE = MIN_SIZE + 255 + sizeof(TernTime); size_t size() const { size_t sz = MIN_SIZE + name().size(); if (snapshot()) { - sz += sizeof(EggsTime); + sz += sizeof(TernTime); } return sz; } @@ -610,12 +610,12 @@ struct EdgeKey { return InodeId::FromU64(dirIdWithCurrentU64() >> 1); } - EggsTime creationTime() const { + TernTime creationTime() const { ALWAYS_ASSERT(snapshot()); return creationTimeUnchecked(); } - void setCreationTime(EggsTime creationTime) { + void setCreationTime(TernTime creationTime) { ALWAYS_ASSERT(snapshot()); setCreationTimeUnchecked(creationTime); } @@ -637,7 +637,7 @@ struct CurrentEdgeBody { FIELDS( LE, uint8_t, version, setVersion, LE, InodeIdExtra, targetIdWithLocked, setTargetIdWithLocked, - LE, EggsTime, creationTime, setCreationTime, + LE, TernTime, creationTime, setCreationTime, END_STATIC ) diff --git a/cpp/shard/eggsshard.cpp b/cpp/shard/ternshard.cpp similarity index 97% rename from cpp/shard/eggsshard.cpp rename to cpp/shard/ternshard.cpp index 1db89c29..b577fe3d 100644 --- a/cpp/shard/eggsshard.cpp +++ b/cpp/shard/ternshard.cpp @@ -23,7 +23,7 @@ static void usage(const char* binary) { fprintf(stderr, " -verbose\n"); fprintf(stderr, " Same as '-log-level debug'.\n"); fprintf(stderr, " -shuckle host:port\n"); - fprintf(stderr, " How to reach shuckle, default '%s'\n", defaultShuckleAddress.c_str()); + fprintf(stderr, " How to reach shuckle"); fprintf(stderr, " -addr ipv4 ip:port\n"); fprintf(stderr, " Addresses we bind ourselves too and advertise to shuckle. 
At least one needs to be provided and at most 2\n"); fprintf(stderr, " -log-file string\n"); @@ -143,7 +143,7 @@ int main(int argc, char** argv) { ShardOptions options; std::vector args; - std::string shuckleAddress = defaultShuckleAddress; + std::string shuckleAddress; uint8_t numAddressesFound = 0; for (int i = 1; i < argc; i++) { const auto getNextArg = [argc, &argv, &dieWithUsage, &i]() { @@ -227,12 +227,17 @@ int main(int argc, char** argv) { args.emplace_back("0"); } -#ifndef EGGS_DEBUG +#ifndef TERN_DEBUG if (options.logLevel <= LogLevel::LOG_TRACE) { die("Cannot use trace for non-debug builds (it won't work)."); } #endif + if (shuckleAddress.empty()) { + fprintf(stderr, "Must provide -shuckle."); + dieWithUsage(); + } + if (!parseShuckleAddress(shuckleAddress, options.shuckleHost, options.shucklePort)) { fprintf(stderr, "Bad shuckle address '%s'.\n\n", shuckleAddress.c_str()); dieWithUsage(); diff --git a/cpp/tests/CMakeLists.txt b/cpp/tests/CMakeLists.txt index 01ae3faa..59c533bc 100644 --- a/cpp/tests/CMakeLists.txt +++ b/cpp/tests/CMakeLists.txt @@ -1,4 +1,4 @@ -include_directories(${eggsfs_SOURCE_DIR}/core ${eggsfs_SOURCE_DIR}/shard ${eggsfs_SOURCE_DIR}/wyhash) +include_directories(${ternfs_SOURCE_DIR}/core ${ternfs_SOURCE_DIR}/shard ${ternfs_SOURCE_DIR}/wyhash) add_executable(tests tests.cpp doctest.h) target_link_libraries(tests PRIVATE core shard cdc) diff --git a/cpp/tests/logsdbtests.cpp b/cpp/tests/logsdbtests.cpp index 1dc7220f..790a551a 100644 --- a/cpp/tests/logsdbtests.cpp +++ b/cpp/tests/logsdbtests.cpp @@ -28,7 +28,7 @@ std::ostream& operator<<(std::ostream& out, const std::vector& d TEST_CASE("EmptyLogsDBNoOverrides") { // init time control - _setCurrentTime(eggsNow()); + _setCurrentTime(ternNow()); TempLogsDB db(LogLevel::LOG_ERROR); std::vector entries; std::vector inReq; @@ -46,7 +46,7 @@ TEST_CASE("EmptyLogsDBNoOverrides") { initEntry(5, "entry5"), }; - REQUIRE(db->appendEntries(entries) == EggsError::LEADER_PREEMPTED); + REQUIRE(db->appendEntries(entries) == TernError::LEADER_PREEMPTED); db->getOutgoingMessages(outReq, outResp); REQUIRE(outReq.empty()); REQUIRE(outResp.empty()); @@ -83,7 +83,7 @@ TEST_CASE("EmptyLogsDBNoOverrides") { for (auto& resp : outResp) { REQUIRE(resp.replicaId == token.replica()); REQUIRE(resp.msg.body.kind() == LogMessageKind::LOG_WRITE); - REQUIRE(resp.msg.body.getLogWrite().result == EggsError::NO_ERROR); + REQUIRE(resp.msg.body.getLogWrite().result == TernError::NO_ERROR); reqIds.erase(resp.msg.id); } REQUIRE(reqIds.empty()); @@ -116,7 +116,7 @@ TEST_CASE("EmptyLogsDBNoOverrides") { } TEST_CASE("LogsDBStandAloneLeader") { - _setCurrentTime(eggsNow()); + _setCurrentTime(ternNow()); LogIdx readUpTo = 0; TempLogsDB db(LogLevel::LOG_ERROR, 0, readUpTo,true,false); @@ -126,7 +126,7 @@ TEST_CASE("LogsDBStandAloneLeader") { std::vector outReq; std::vector outResp; db->processIncomingMessages(inReq, inResp); - _setCurrentTime(eggsNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); + _setCurrentTime(ternNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); db->processIncomingMessages(inReq, inResp); REQUIRE(db->isLeader()); @@ -137,7 +137,7 @@ TEST_CASE("LogsDBStandAloneLeader") { }; auto err = db->appendEntries(entries); db->processIncomingMessages(inReq, inResp); - REQUIRE(err == EggsError::NO_ERROR); + REQUIRE(err == TernError::NO_ERROR); for(size_t i = 0; i < entries.size(); ++i) { REQUIRE(entries[i].idx == readUpTo + i + 1); } @@ -148,7 +148,7 @@ TEST_CASE("LogsDBStandAloneLeader") { } TEST_CASE("LogsDBAvoidBeingLeader") { - 
_setCurrentTime(eggsNow()); + _setCurrentTime(ternNow()); TempLogsDB db(LogLevel::LOG_ERROR, 0, 0, true, true); REQUIRE_FALSE(db->isLeader()); std::vector inReq; @@ -160,7 +160,7 @@ TEST_CASE("LogsDBAvoidBeingLeader") { REQUIRE(outResp.empty()); REQUIRE(outReq.empty()); REQUIRE(db->getNextTimeout() == LogsDB::LEADER_INACTIVE_TIMEOUT); - _setCurrentTime(eggsNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); + _setCurrentTime(ternNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); // Tick db->processIncomingMessages(inReq, inResp); @@ -171,7 +171,7 @@ TEST_CASE("LogsDBAvoidBeingLeader") { } TEST_CASE("EmptyLogsDBLeaderElection") { - _setCurrentTime(eggsNow()); + _setCurrentTime(ternNow()); TempLogsDB db(LogLevel::LOG_ERROR); REQUIRE_FALSE(db->isLeader()); std::vector inReq; @@ -183,7 +183,7 @@ TEST_CASE("EmptyLogsDBLeaderElection") { REQUIRE(outResp.empty()); REQUIRE(outReq.empty()); REQUIRE(db->getNextTimeout() == LogsDB::LEADER_INACTIVE_TIMEOUT); - _setCurrentTime(eggsNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); + _setCurrentTime(ternNow() + LogsDB::LEADER_INACTIVE_TIMEOUT + 1_ms); // Tick db->processIncomingMessages(inReq, inResp); diff --git a/cpp/tests/tests.cpp b/cpp/tests/tests.cpp index a89ea1ab..390e1b6f 100644 --- a/cpp/tests/tests.cpp +++ b/cpp/tests/tests.cpp @@ -436,13 +436,13 @@ struct TempShardDB { } }; -#define NO_EGGS_ERROR(expr) \ +#define NO_TERN_ERROR(expr) \ do { \ - EggsError err = (expr); \ - ALWAYS_ASSERT(err == EggsError::NO_ERROR, #expr ", unexpected error %s", err); \ + TernError err = (expr); \ + ALWAYS_ASSERT(err == TernError::NO_ERROR, #expr ", unexpected error %s", err); \ } while(false) -#define NO_EGGS_ERROR_IN_RESPONSE(resp, expr) \ +#define NO_TERN_ERROR_IN_RESPONSE(resp, expr) \ do { \ (expr); \ ALWAYS_ASSERT((int)((resp).kind()) != 0, #expr ", unexpected error %s", (resp).getError()); \ @@ -458,15 +458,15 @@ TEST_CASE("touch file") { InodeId id; BincodeFixedBytes<8> cookie; - EggsTime constructTime, linkTime; + TernTime constructTime, linkTime; BincodeBytes name("filename"); { auto& req = reqContainer->setConstructFile(); req.type = (uint8_t)InodeType::FILE; req.note = "test note"; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); constructTime = logEntry->time; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); db->flush(false); auto& resp = respContainer->getConstructFile(); id = resp.id; @@ -474,7 +474,7 @@ TEST_CASE("touch file") { } { auto& req = reqContainer->setVisitTransientFiles(); - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); auto& resp = respContainer->getVisitTransientFiles(); REQUIRE(resp.nextId == NULL_INODE_ID); REQUIRE(resp.files.els.size() == 1); @@ -487,16 +487,16 @@ TEST_CASE("touch file") { req.cookie = cookie; req.ownerId = ROOT_DIR_INODE_ID; req.name = name; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); linkTime = logEntry->time; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); db->flush(false); } { auto& req = 
reqContainer->setReadDir(); req.dirId = ROOT_DIR_INODE_ID; req.startHash = 0; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); auto& resp = respContainer->getReadDir(); REQUIRE(resp.nextHash == 0); REQUIRE(resp.results.els.size() == 1); @@ -509,14 +509,14 @@ TEST_CASE("touch file") { auto& req = reqContainer->setLookup(); req.dirId = ROOT_DIR_INODE_ID; req.name = name; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); auto& resp = respContainer->getLookup(); REQUIRE(resp.targetId == id); } { auto& req = reqContainer->setStatFile(); req.id = id; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); auto& resp = respContainer->getStatFile(); REQUIRE(resp.size == 0); REQUIRE(resp.mtime == linkTime); @@ -531,15 +531,15 @@ TEST_CASE("override") { auto logEntry = std::make_unique(); uint64_t logEntryIndex = 0; - const auto createFile = [&](const char* name) -> std::tuple { + const auto createFile = [&](const char* name) -> std::tuple { InodeId id; BincodeFixedBytes<8> cookie; { auto& req = reqContainer->setConstructFile(); req.type = (uint8_t)InodeType::FILE; req.note = "test note"; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); db->flush(false); auto& resp = respContainer->getConstructFile(); id = resp.id; @@ -547,17 +547,17 @@ TEST_CASE("override") { } { auto& req = reqContainer->setVisitTransientFiles(); - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); } - EggsTime creationTime; + TernTime creationTime; { auto& req = reqContainer->setLinkFile(); req.fileId = id; req.cookie = cookie; req.ownerId = ROOT_DIR_INODE_ID; req.name = name; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); db->flush(false); creationTime = respContainer->getLinkFile().creationTime; } @@ -574,14 +574,14 @@ TEST_CASE("override") { req.oldName = "foo"; req.oldCreationTime = fooCreationTime; req.newName = "bar"; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->applyLogEntry(++logEntryIndex, *logEntry, *respContainer)); db->flush(false); } { auto& req = reqContainer->setFullReadDir(); req.dirId = ROOT_DIR_INODE_ID; - NO_EGGS_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR_IN_RESPONSE(*respContainer, db->read(*reqContainer, *respContainer)); auto& resp = respContainer->getFullReadDir(); REQUIRE( 
resp.results.els.size() == @@ -595,12 +595,12 @@ TEST_CASE("override") { TEST_CASE("test fmt") { { std::stringstream ss; - ss << EggsTime(0); + ss << TernTime(0); REQUIRE(ss.str() == "1970-01-01T00:00:00.000000000"); } { std::stringstream ss; - ss << EggsTime(1234567891ull); + ss << TernTime(1234567891ull); REQUIRE(ss.str() == "1970-01-01T00:00:01.234567891"); } } @@ -624,21 +624,21 @@ TEST_CASE("make/rm directory") { req.id = id; req.info.inherited = true; req.ownerId = ROOT_DIR_INODE_ID; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); respContainer->getCreateDirectoryInode(); } { auto& req = reqContainer->setRemoveDirectoryOwner(); req.dirId = id; req.info = defaultInfo; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); } { auto& req = reqContainer->setStatDirectory(); req.id = id; - NO_EGGS_ERROR(db->read(*reqContainer, *respContainer)); + NO_TERN_ERROR(db->read(*reqContainer, *respContainer)); const auto& resp = respContainer->getStatDirectory(); CHECK(resp.info == defaultInfo); } @@ -646,8 +646,8 @@ TEST_CASE("make/rm directory") { auto& req = reqContainer->setSetDirectoryOwner(); req.dirId = id; req.ownerId = ROOT_DIR_INODE_ID; - NO_EGGS_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); - NO_EGGS_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); + NO_TERN_ERROR(db->prepareLogEntry(*reqContainer, *logEntry)); + NO_TERN_ERROR(db->applyLogEntry(true, ++logEntryIndex, *logEntry, *respContainer)); } } */ diff --git a/cpp/thirdparty.cmake b/cpp/thirdparty.cmake index 06d097e4..4e1e8df9 100644 --- a/cpp/thirdparty.cmake +++ b/cpp/thirdparty.cmake @@ -9,8 +9,7 @@ endif() # We build this manually because alpine doesn't have liburing-static ExternalProject_Add(make_uring DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # https://github.com/axboe/liburing/archive/refs/tags/liburing-2.3.tar.gz - URL https://REDACTED + URL https://github.com/axboe/liburing/archive/refs/tags/liburing-2.3.tar.gz URL_HASH SHA256=60b367dbdc6f2b0418a6e0cd203ee0049d9d629a36706fcf91dfb9428bae23c8 PREFIX thirdparty/uring UPDATE_COMMAND "" @@ -37,8 +36,7 @@ set_target_properties(uring PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/libu # Dependency of: rocksdb ExternalProject_Add(make_lz4 DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # https://github.com/lz4/lz4/archive/refs/tags/v1.9.4.tar.gz - URL https://REDACTED + URL https://github.com/lz4/lz4/archive/refs/tags/v1.9.4.tar.gz URL_HASH SHA256=0b0e3aa07c8c063ddf40b082bdf7e37a1562bda40a0ff5272957f3e987e0e54b PREFIX thirdparty/lz4 UPDATE_COMMAND "" @@ -64,8 +62,7 @@ set_target_properties(lz4 PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/liblz4 # Dependency of: rocksdb ExternalProject_Add(make_zstd DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # https://github.com/facebook/zstd/archive/refs/tags/v1.5.2.tar.gz - URL https://REDACTED + URL https://github.com/facebook/zstd/archive/refs/tags/v1.5.2.tar.gz URL_HASH SHA256=f7de13462f7a82c29ab865820149e778cbfe01087b3a55b5332707abf9db4a6e PREFIX thirdparty/zstd UPDATE_COMMAND "" @@ -98,8 +95,7 @@ separate_arguments( ) 
ExternalProject_Add(make_rocksdb DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # https://github.com/facebook/rocksdb/archive/refs/tags/v7.9.2.tar.gz - URL https://REDACTED + URL https://github.com/facebook/rocksdb/archive/refs/tags/v7.9.2.tar.gz URL_HASH SHA256=886378093098a1b2521b824782db7f7dd86224c232cf9652fcaf88222420b292 # When we upgraded dev boxes to newer arch and therefore newer clang this was # needed. New RocksDB (e.g. 8.10.0) compiles out of the box, but we don't have @@ -133,8 +129,7 @@ set_target_properties(rocksdb PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/li # Dependency of: eggs ExternalProject_Add(make_xxhash DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # https://github.com/Cyan4973/xxHash/archive/refs/tags/v0.8.1.tar.gz - URL https://REDACTED + URL https://github.com/Cyan4973/xxHash/archive/refs/tags/v0.8.1.tar.gz URL_HASH SHA256=3bb6b7d6f30c591dd65aaaff1c8b7a5b94d81687998ca9400082c739a690436c PREFIX thirdparty/xxhash UPDATE_COMMAND "" @@ -157,8 +152,7 @@ set_target_properties(xxhash PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/lib ExternalProject_Add(make_jemalloc DOWNLOAD_DIR ${CMAKE_CURRENT_BINARY_DIR} - # URL https://github.com/jemalloc/jemalloc/releases/download/5.3.0/jemalloc-5.3.0.tar.bz2 - URL https://REDACTED + URL https://github.com/jemalloc/jemalloc/releases/download/5.3.0/jemalloc-5.3.0.tar.bz2 URL_HASH SHA256=2db82d1e7119df3e71b7640219b6dfe84789bc0537983c3b7ac4f7189aecfeaa PREFIX thirdparty/jemalloc UPDATE_COMMAND "" diff --git a/cpp/wyhash/wyhash.h b/cpp/wyhash/wyhash.h index 52057fdb..11bf3f36 100644 --- a/cpp/wyhash/wyhash.h +++ b/cpp/wyhash/wyhash.h @@ -2,8 +2,8 @@ // and it only has one addition in the dependency chain. // // Actual code from . -#ifndef EGGS_WYHASH -#define EGGS_WYHASH +#ifndef TERN_WYHASH +#define TERN_WYHASH #include #include diff --git a/go/.gitignore b/go/.gitignore index 495fca94..348a800e 100644 --- a/go/.gitignore +++ b/go/.gitignore @@ -1,15 +1,15 @@ integrationtest/integrationtest -eggsshuckle/eggsshuckle -eggsfuse/eggsfuse +ternshuckle/ternshuckle +ternfuse/ternfuse restarter/restarter -eggsblocks/eggsblocks -eggsrun/eggsrun +ternblocks/ternblocks +ternrun/ternrun gcdaemon/gcdaemon -eggscli/eggscli -eggstests/eggstests +terncli/terncli +terntests/terntests bincodegen/bincodegen crc32csum/crc32csum -eggsgc/eggsgc +terngc/terngc badblocks/badblocks -eggsshuckleproxy/eggsshuckleproxy -eggss3/eggss3 +ternshuckleproxy/ternshuckleproxy +terns3/terns3 diff --git a/go/bincodegen/bincodegen.go b/go/bincodegen/bincodegen.go index 3cdbbb24..f939eefe 100644 --- a/go/bincodegen/bincodegen.go +++ b/go/bincodegen/bincodegen.go @@ -10,7 +10,7 @@ import ( "regexp" "strings" "unicode" - "xtx/eggsfs/msgs" + "xtx/ternfs/msgs" ) type subexpr struct { @@ -269,24 +269,24 @@ type reqRespType struct { } // Start from 10 to play nice with kernel drivers and such. 
-const eggsErrorCodeOffset = 10 +const ternErrorCodeOffset = 10 func generateGoErrorCodes(out io.Writer, errors []string) { fmt.Fprintf(out, "const (\n") for i, err := range errors { - fmt.Fprintf(out, "\t%s EggsError = %d\n", err, i+eggsErrorCodeOffset) + fmt.Fprintf(out, "\t%s TernError = %d\n", err, i+ternErrorCodeOffset) } fmt.Fprintf(out, ")\n") fmt.Fprintf(out, "\n") - fmt.Fprintf(out, "func (err EggsError) String() string {\n") + fmt.Fprintf(out, "func (err TernError) String() string {\n") fmt.Fprintf(out, "\tswitch err {\n") for i, err := range errors { - fmt.Fprintf(out, "\tcase %d:\n", i+eggsErrorCodeOffset) + fmt.Fprintf(out, "\tcase %d:\n", i+ternErrorCodeOffset) fmt.Fprintf(out, "\t\treturn \"%s\"\n", err) } fmt.Fprintf(out, "\tdefault:\n") - fmt.Fprintf(out, "\t\treturn fmt.Sprintf(\"EggsError(%%d)\", err)\n") + fmt.Fprintf(out, "\t\treturn fmt.Sprintf(\"TernError(%%d)\", err)\n") fmt.Fprintf(out, "\t}\n") fmt.Fprintf(out, "}\n\n") } @@ -331,15 +331,15 @@ func generateGo(errors []string, shardReqResps []reqRespType, cdcReqResps []reqR func generateKmodMsgKind(hOut io.Writer, cOut io.Writer, what string, reqResps []reqRespType) { for _, reqResp := range reqResps { - fmt.Fprintf(hOut, "#define EGGSFS_%s_%s 0x%X\n", what, reqRespEnum(reqResp), reqResp.kind) + fmt.Fprintf(hOut, "#define TERNFS_%s_%s 0x%X\n", what, reqRespEnum(reqResp), reqResp.kind) } - fmt.Fprintf(hOut, "#define __print_eggsfs_%s_kind(k) __print_symbolic(k", strings.ToLower(what)) + fmt.Fprintf(hOut, "#define __print_ternfs_%s_kind(k) __print_symbolic(k", strings.ToLower(what)) for _, reqResp := range reqResps { fmt.Fprintf(hOut, ", { %d, %q }", reqResp.kind, reqRespEnum(reqResp)) } fmt.Fprintf(hOut, ")\n") - fmt.Fprintf(hOut, "#define EGGSFS_%s_KIND_MAX %d\n", what, len(reqResps)) - fmt.Fprintf(hOut, "static const u8 __eggsfs_%s_kind_index_mappings[256] = {", strings.ToLower(what)) + fmt.Fprintf(hOut, "#define TERNFS_%s_KIND_MAX %d\n", what, len(reqResps)) + fmt.Fprintf(hOut, "static const u8 __ternfs_%s_kind_index_mappings[256] = {", strings.ToLower(what)) var idx [256]uint8 for k, _ := range idx { idx[k] = 0xff @@ -355,9 +355,9 @@ func generateKmodMsgKind(hOut io.Writer, cOut io.Writer, what string, reqResps [ } fmt.Fprintf(hOut, "};\n") - fmt.Fprintf(hOut, "const char* eggsfs_%s_kind_str(int kind);\n\n", strings.ToLower(what)) + fmt.Fprintf(hOut, "const char* ternfs_%s_kind_str(int kind);\n\n", strings.ToLower(what)) - fmt.Fprintf(cOut, "const char* eggsfs_%s_kind_str(int kind) {\n", strings.ToLower(what)) + fmt.Fprintf(cOut, "const char* ternfs_%s_kind_str(int kind) {\n", strings.ToLower(what)) fmt.Fprintf(cOut, " switch (kind) {\n") for _, reqResp := range reqResps { fmt.Fprintf(cOut, " case %d: return %q;\n", reqResp.kind, reqRespEnum(reqResp)) @@ -375,19 +375,19 @@ func kmodFieldName(s string) string { func kmodStructName(typ reflect.Type) string { re := regexp.MustCompile(`(.)([A-Z])`) s := strings.ToLower(string(re.ReplaceAll([]byte(typ.Name()), []byte("${1}_${2}")))) - return fmt.Sprintf("eggsfs_%s", s) + return fmt.Sprintf("ternfs_%s", s) } func kmodStaticSizeName(typ reflect.Type) string { re := regexp.MustCompile(`(.)([A-Z])`) s := strings.ToUpper(string(re.ReplaceAll([]byte(typ.Name()), []byte("${1}_${2}")))) - return fmt.Sprintf("EGGSFS_%s_SIZE", s) + return fmt.Sprintf("TERNFS_%s_SIZE", s) } func kmodMaxSizeName(typ reflect.Type) string { re := regexp.MustCompile(`(.)([A-Z])`) s := strings.ToUpper(string(re.ReplaceAll([]byte(typ.Name()), []byte("${1}_${2}")))) - return 
fmt.Sprintf("EGGSFS_%s_MAX_SIZE", s) + return fmt.Sprintf("TERNFS_%s_MAX_SIZE", s) } func generateKmodStructOpaque(w io.Writer, typ reflect.Type, suffix string) { @@ -477,7 +477,7 @@ func kmodType(t reflect.Type) string { case reflect.Slice, reflect.String: elem := sliceTypeElem(t) if elem.Kind() == reflect.Uint8 { - return "eggsfs_bytes*" + return "ternfs_bytes*" } else { return kmodType(elem) } @@ -518,17 +518,17 @@ func generateKmodGet(staticSizes map[string]int, h io.Writer, typ reflect.Type) } else if fldK == reflect.Slice || fldK == reflect.String { elemTyp := sliceTypeElem(fldTyp) if elemTyp.Kind() == reflect.Uint8 { - generateKmodStruct(h, typ, fldName, "struct eggsfs_bincode_bytes str") + generateKmodStruct(h, typ, fldName, "struct ternfs_bincode_bytes str") nextType := fmt.Sprintf("struct %s_%s", kmodStructName(typ), fldName) - fmt.Fprintf(h, "static inline void _%s_get_%s(struct eggsfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) + fmt.Fprintf(h, "static inline void _%s_get_%s(struct ternfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) fmt.Fprintf(h, " if (likely(ctx->err == 0)) {\n") fmt.Fprintf(h, " if (unlikely(ctx->end - ctx->buf < 1)) {\n") - fmt.Fprintf(h, " ctx->err = EGGSFS_ERR_MALFORMED_RESPONSE;\n") + fmt.Fprintf(h, " ctx->err = TERNFS_ERR_MALFORMED_RESPONSE;\n") fmt.Fprintf(h, " } else {\n") fmt.Fprintf(h, " next->str.len = *(u8*)(ctx->buf);\n") fmt.Fprintf(h, " ctx->buf++;\n") fmt.Fprintf(h, " if (unlikely(ctx->end - ctx->buf < next->str.len)) {\n") - fmt.Fprintf(h, " ctx->err = EGGSFS_ERR_MALFORMED_RESPONSE;\n") + fmt.Fprintf(h, " ctx->err = TERNFS_ERR_MALFORMED_RESPONSE;\n") fmt.Fprintf(h, " } else {\n") fmt.Fprintf(h, " next->str.buf = ctx->buf;\n") fmt.Fprintf(h, " ctx->buf += next->str.len;\n") @@ -543,10 +543,10 @@ func generateKmodGet(staticSizes map[string]int, h io.Writer, typ reflect.Type) } else { generateKmodStruct(h, typ, fldName, "u16 len") nextType := fmt.Sprintf("struct %s_%s", kmodStructName(typ), fldName) - fmt.Fprintf(h, "static inline void _%s_get_%s(struct eggsfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) + fmt.Fprintf(h, "static inline void _%s_get_%s(struct ternfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) fmt.Fprintf(h, " if (likely(ctx->err == 0)) {\n") fmt.Fprintf(h, " if (unlikely(ctx->end - ctx->buf < 2)) {\n") - fmt.Fprintf(h, " ctx->err = EGGSFS_ERR_MALFORMED_RESPONSE;\n") + fmt.Fprintf(h, " ctx->err = TERNFS_ERR_MALFORMED_RESPONSE;\n") fmt.Fprintf(h, " } else {\n") fmt.Fprintf(h, " next->len = get_unaligned_le16(ctx->buf);\n") fmt.Fprintf(h, " ctx->buf += 2;\n") @@ -563,10 +563,10 @@ func generateKmodGet(staticSizes map[string]int, h io.Writer, typ reflect.Type) } else if fldK == reflect.Bool || fldK == reflect.Uint8 || fldK == reflect.Uint16 || fldK == reflect.Uint32 || fldK == reflect.Uint64 { generateKmodStruct(h, typ, fldName, fmt.Sprintf("%s x", fldKTyp)) nextType := fmt.Sprintf("struct %s_%s", kmodStructName(typ), fldName) - fmt.Fprintf(h, "static inline void _%s_get_%s(struct eggsfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) + fmt.Fprintf(h, "static inline void _%s_get_%s(struct ternfs_bincode_get_ctx* ctx, %s* prev, %s* next) {\n", kmodStructName(typ), fldName, prevType, nextType) fmt.Fprintf(h, " if (likely(ctx->err == 0)) {\n") fmt.Fprintf(h, " if 
(unlikely(ctx->end - ctx->buf < %d)) {\n", fldTyp.Size()) - fmt.Fprintf(h, " ctx->err = EGGSFS_ERR_MALFORMED_RESPONSE;\n") + fmt.Fprintf(h, " ctx->err = TERNFS_ERR_MALFORMED_RESPONSE;\n") fmt.Fprintf(h, " } else {\n") switch fldK { case reflect.Bool: @@ -596,9 +596,9 @@ func generateKmodGet(staticSizes map[string]int, h io.Writer, typ reflect.Type) fmt.Fprintf(h, "#define %s_get_end(ctx, prev, next) \\\n", kmodStructName(typ)) fmt.Fprintf(h, " { %s* __dummy __attribute__((unused)) = &(prev); }\\\n", prevType) fmt.Fprintf(h, " struct %s_end* next = NULL\n\n", kmodStructName(typ)) - fmt.Fprintf(h, "static inline void %s_get_finish(struct eggsfs_bincode_get_ctx* ctx, struct %s_end* end) {\n", kmodStructName(typ), kmodStructName(typ)) + fmt.Fprintf(h, "static inline void %s_get_finish(struct ternfs_bincode_get_ctx* ctx, struct %s_end* end) {\n", kmodStructName(typ), kmodStructName(typ)) fmt.Fprintf(h, " if (unlikely(ctx->buf != ctx->end)) {\n") - fmt.Fprintf(h, " ctx->err = EGGSFS_ERR_MALFORMED_RESPONSE;\n") + fmt.Fprintf(h, " ctx->err = TERNFS_ERR_MALFORMED_RESPONSE;\n") fmt.Fprintf(h, " }\n") fmt.Fprintf(h, "}\n\n") } @@ -635,7 +635,7 @@ func generateKmodPut(staticSizes map[string]int, h io.Writer, typ reflect.Type) elemTyp := sliceTypeElem(fldTyp) nextType := fmt.Sprintf("struct %s_%s", kmodStructName(typ), fldName) if elemTyp.Kind() == reflect.Uint8 { - fmt.Fprintf(h, "static inline void _%s_put_%s(struct eggsfs_bincode_put_ctx* ctx, %s* prev, %s* next, const char* str, int str_len) {\n", kmodStructName(typ), fldName, prevType, nextType) + fmt.Fprintf(h, "static inline void _%s_put_%s(struct ternfs_bincode_put_ctx* ctx, %s* prev, %s* next, const char* str, int str_len) {\n", kmodStructName(typ), fldName, prevType, nextType) fmt.Fprintf(h, " next = NULL;\n") fmt.Fprintf(h, " BUG_ON(str_len < 0 || str_len > 255);\n") fmt.Fprintf(h, " BUG_ON(ctx->end - ctx->cursor < (1 + str_len));\n") @@ -647,7 +647,7 @@ func generateKmodPut(staticSizes map[string]int, h io.Writer, typ reflect.Type) fmt.Fprintf(h, " struct %s_%s next; \\\n", kmodStructName(typ), fldName) fmt.Fprintf(h, " _%s_put_%s(ctx, &(prev), &(next), str, str_len)\n\n", kmodStructName(typ), fldName) } else { - fmt.Fprintf(h, "static inline void _%s_put_%s(struct eggsfs_bincode_put_ctx* ctx, %s* prev, %s* next, int len) {\n", kmodStructName(typ), fldName, prevType, nextType) + fmt.Fprintf(h, "static inline void _%s_put_%s(struct ternfs_bincode_put_ctx* ctx, %s* prev, %s* next, int len) {\n", kmodStructName(typ), fldName, prevType, nextType) fmt.Fprintf(h, " next = NULL;\n") fmt.Fprintf(h, " BUG_ON(len < 0 || len >= 1<<16);\n") fmt.Fprintf(h, " BUG_ON(ctx->end - ctx->cursor < 2);\n") @@ -661,7 +661,7 @@ func generateKmodPut(staticSizes map[string]int, h io.Writer, typ reflect.Type) prevType = nextType } else if fldK == reflect.Bool || fldK == reflect.Uint8 || fldK == reflect.Uint16 || fldK == reflect.Uint32 || fldK == reflect.Uint64 { nextType := fmt.Sprintf("struct %s_%s", kmodStructName(typ), fldName) - fmt.Fprintf(h, "static inline void _%s_put_%s(struct eggsfs_bincode_put_ctx* ctx, %s* prev, %s* next, %s x) {\n", kmodStructName(typ), fldName, prevType, nextType, fldKTyp) + fmt.Fprintf(h, "static inline void _%s_put_%s(struct ternfs_bincode_put_ctx* ctx, %s* prev, %s* next, %s x) {\n", kmodStructName(typ), fldName, prevType, nextType, fldKTyp) fmt.Fprintf(h, " next = NULL;\n") fmt.Fprintf(h, " BUG_ON(ctx->end - ctx->cursor < %d);\n", fldTyp.Size()) switch fldK { @@ -700,20 +700,20 @@ func generateKmod(errors []string, shardReqResps 
[]reqRespType, cdcReqResps []re fmt.Fprintln(hOut) for i, err := range errors { - fmt.Fprintf(hOut, "#define EGGSFS_ERR_%s %d\n", err, eggsErrorCodeOffset+i) + fmt.Fprintf(hOut, "#define TERNFS_ERR_%s %d\n", err, ternErrorCodeOffset+i) } fmt.Fprintf(hOut, "\n") - fmt.Fprintf(hOut, "#define __print_eggsfs_err(i) __print_symbolic(i") + fmt.Fprintf(hOut, "#define __print_ternfs_err(i) __print_symbolic(i") for i, err := range errors { - fmt.Fprintf(hOut, ", { %d, %q }", eggsErrorCodeOffset+i, err) + fmt.Fprintf(hOut, ", { %d, %q }", ternErrorCodeOffset+i, err) } fmt.Fprintf(hOut, ")\n") - fmt.Fprintf(hOut, "const char* eggsfs_err_str(int err);\n\n") + fmt.Fprintf(hOut, "const char* ternfs_err_str(int err);\n\n") - fmt.Fprintf(cOut, "const char* eggsfs_err_str(int err) {\n") + fmt.Fprintf(cOut, "const char* ternfs_err_str(int err) {\n") fmt.Fprintf(cOut, " switch (err) {\n") for i, err := range errors { - fmt.Fprintf(cOut, " case %d: return %q;\n", eggsErrorCodeOffset+i, err) + fmt.Fprintf(cOut, " case %d: return %q;\n", ternErrorCodeOffset+i, err) } fmt.Fprintf(cOut, " default: return \"UNKNOWN\";\n") fmt.Fprintf(cOut, " }\n") @@ -755,10 +755,10 @@ func generateKmod(errors []string, shardReqResps []reqRespType, cdcReqResps []re func cppType(t reflect.Type) string { if t.Name() == "InodeId" || t.Name() == "InodeIdExtra" || t.Name() == "Parity" || - t.Name() == "EggsTime" || t.Name() == "ShardId" || t.Name() == "CDCMessageKind" || + t.Name() == "TernTime" || t.Name() == "ShardId" || t.Name() == "CDCMessageKind" || t.Name() == "Crc" || t.Name() == "BlockServiceId" || t.Name() == "ReplicaId" || t.Name() == "ShardReplicaId" || t.Name() == "LogIdx" || t.Name() == "LeaderToken" || - t.Name() == "Ip" || t.Name() == "IpPort" || t.Name() == "AddrsInfo" || t.Name() == "EggsError" { + t.Name() == "Ip" || t.Name() == "IpPort" || t.Name() == "AddrsInfo" || t.Name() == "TernError" { return t.Name() } if t.Name() == "Blob" { @@ -836,7 +836,7 @@ func (cg *cppCodegen) gen(expr *subexpr) { // we want InodeId/InodeIdExtra/Parity to be here because of some checks we perform // when unpacking if k == reflect.Struct || expr.typ.Name() == "InodeId" || expr.typ.Name() == "InodeIdExtra" || - expr.typ.Name() == "Parity" || expr.typ.Name() == "EggsTime" || expr.typ.Name() == "ShardId" || + expr.typ.Name() == "Parity" || expr.typ.Name() == "TernTime" || expr.typ.Name() == "ShardId" || expr.typ.Name() == "Crc" || expr.typ.Name() == "BlockServiceId" || expr.typ.Name() == "ReplicaId" || expr.typ.Name() == "ShardReplicaId" || expr.typ.Name() == "LogIdx" || expr.typ.Name() == "LeaderToken" || expr.typ.Name() == "Ip" || expr.typ.Name() == "IpPort" || expr.typ.Name() == "AddrsInfo" { @@ -870,7 +870,7 @@ func (cg *cppCodegen) gen(expr *subexpr) { switch k { case reflect.Bool, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: if expr.typ.Name() == "ShardId" || expr.typ.Name() == "InodeId" || expr.typ.Name() == "InodeIdExtra" || - expr.typ.Name() == "Parity" || expr.typ.Name() == "EggsTime" || expr.typ.Name() == "ShardReplicaId" || + expr.typ.Name() == "Parity" || expr.typ.Name() == "TernTime" || expr.typ.Name() == "ShardReplicaId" || expr.typ.Name() == "ReplicaId" || expr.typ.Name() == "LogIdx" || expr.typ.Name() == "LeaderToken" || expr.typ.Name() == "Ip" || expr.typ.Name() == "IpPort" || expr.typ.Name() == "AddrsInfo" { cg.cline(fmt.Sprintf("%s = %s()", expr.fld, cppType(expr.typ))) @@ -994,37 +994,37 @@ func generateCppSingle(hpp io.Writer, cpp io.Writer, t reflect.Type) { } func generateCppErr(hpp io.Writer, cpp 
io.Writer, errors []string) { - fmt.Fprintf(hpp, "enum class EggsError : uint16_t {\n") + fmt.Fprintf(hpp, "enum class TernError : uint16_t {\n") fmt.Fprintf(hpp, " NO_ERROR = 0,\n") for i, err := range errors { - fmt.Fprintf(hpp, " %s = %d,\n", err, eggsErrorCodeOffset+i) + fmt.Fprintf(hpp, " %s = %d,\n", err, ternErrorCodeOffset+i) } fmt.Fprintf(hpp, "};\n\n") - fmt.Fprintf(hpp, "std::ostream& operator<<(std::ostream& out, EggsError err);\n\n") - fmt.Fprintf(cpp, "std::ostream& operator<<(std::ostream& out, EggsError err) {\n") + fmt.Fprintf(hpp, "std::ostream& operator<<(std::ostream& out, TernError err);\n\n") + fmt.Fprintf(cpp, "std::ostream& operator<<(std::ostream& out, TernError err) {\n") fmt.Fprintf(cpp, " switch (err) {\n") - fmt.Fprintf(cpp, " case EggsError::NO_ERROR:\n") + fmt.Fprintf(cpp, " case TernError::NO_ERROR:\n") fmt.Fprintf(cpp, " out << \"NO_ERROR\";\n") fmt.Fprintf(cpp, " break;\n") for _, err := range errors { - fmt.Fprintf(cpp, " case EggsError::%s:\n", err) + fmt.Fprintf(cpp, " case TernError::%s:\n", err) fmt.Fprintf(cpp, " out << \"%s\";\n", err) fmt.Fprintf(cpp, " break;\n") } fmt.Fprintf(cpp, " default:\n") - fmt.Fprintf(cpp, " out << \"EggsError(\" << ((int)err) << \")\";\n") + fmt.Fprintf(cpp, " out << \"TernError(\" << ((int)err) << \")\";\n") fmt.Fprintf(cpp, " break;\n") fmt.Fprintf(cpp, " }\n") fmt.Fprintf(cpp, " return out;\n") fmt.Fprintf(cpp, "}\n\n") - fmt.Fprintf(hpp, "const std::vector allEggsErrors {\n") + fmt.Fprintf(hpp, "const std::vector allTernErrors {\n") for _, err := range errors { - fmt.Fprintf(hpp, " EggsError::%s,\n", err) + fmt.Fprintf(hpp, " TernError::%s,\n", err) } fmt.Fprintf(hpp, "};\n\n") - fmt.Fprintf(hpp, "constexpr int maxEggsError = %d;\n\n", eggsErrorCodeOffset+len(errors)) + fmt.Fprintf(hpp, "constexpr int maxTernError = %d;\n\n", ternErrorCodeOffset+len(errors)) } func generateCppKind(hpp io.Writer, cpp io.Writer, name string, reqResps []reqRespType) { @@ -1085,7 +1085,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam if i > 0 { fmt.Fprintf(hpp, ", ") } - if typ.typ.Name() == "EggsError" { + if typ.typ.Name() == "TernError" { fmt.Fprintf(hpp, "sizeof(%s)", cppType(typ.typ)) } else { fmt.Fprintf(hpp, "%s::STATIC_SIZE", cppType(typ.typ)) @@ -1155,7 +1155,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam fmt.Fprintf(cpp, " break;\n") } fmt.Fprintf(cpp, " default:\n") - fmt.Fprintf(cpp, " throw EGGS_EXCEPTION(\"bad %s kind %%s\", other.kind());\n", kindTypeName) + fmt.Fprintf(cpp, " throw TERN_EXCEPTION(\"bad %s kind %%s\", other.kind());\n", kindTypeName) fmt.Fprintf(cpp, " }\n") fmt.Fprintf(cpp, "}\n\n") @@ -1169,14 +1169,14 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam fmt.Fprintf(cpp, " switch (_kind) {\n") for i, typ := range types { fmt.Fprintf(cpp, " case %s::%s:\n", kindTypeName, typ.enum) - if typ.typ.Name() == "EggsError" { + if typ.typ.Name() == "TernError" { fmt.Fprintf(cpp, " return sizeof(%s) + sizeof(%s);\n", kindTypeName, cppType(typ.typ)) } else { fmt.Fprintf(cpp, " return sizeof(%s) + std::get<%d>(_data).packedSize();\n", kindTypeName, i) } } fmt.Fprintf(cpp, " default:\n") - fmt.Fprintf(cpp, " throw EGGS_EXCEPTION(\"bad %s kind %%s\", _kind);\n", kindTypeName) + fmt.Fprintf(cpp, " throw TERN_EXCEPTION(\"bad %s kind %%s\", _kind);\n", kindTypeName) fmt.Fprintf(cpp, " }\n") fmt.Fprintf(cpp, "}\n\n") @@ -1185,7 +1185,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam 
fmt.Fprintf(cpp, " switch (_kind) {\n") for i, typ := range types { fmt.Fprintf(cpp, " case %s::%s:\n", kindTypeName, typ.enum) - if typ.typ.Name() == "EggsError" { + if typ.typ.Name() == "TernError" { fmt.Fprintf(cpp, " buf.packScalar<%s>(std::get<%d>(_data));\n", cppType(typ.typ), i) } else { fmt.Fprintf(cpp, " std::get<%d>(_data).pack(buf);\n", i) @@ -1193,7 +1193,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam fmt.Fprintf(cpp, " break;\n") } fmt.Fprintf(cpp, " default:\n") - fmt.Fprintf(cpp, " throw EGGS_EXCEPTION(\"bad %s kind %%s\", _kind);\n", kindTypeName) + fmt.Fprintf(cpp, " throw TERN_EXCEPTION(\"bad %s kind %%s\", _kind);\n", kindTypeName) fmt.Fprintf(cpp, " }\n") fmt.Fprintf(cpp, "}\n\n") @@ -1202,7 +1202,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam fmt.Fprintf(cpp, " switch (_kind) {\n") for i, typ := range types { fmt.Fprintf(cpp, " case %s::%s:\n", kindTypeName, typ.enum) - if typ.typ.Name() == "EggsError" { + if typ.typ.Name() == "TernError" { fmt.Fprintf(cpp, " _data.emplace<%d>(buf.unpackScalar<%s>());\n", i, cppType(typ.typ)) } else { fmt.Fprintf(cpp, " _data.emplace<%d>().unpack(buf);\n", i) @@ -1238,7 +1238,7 @@ func generateCppContainer(hpp io.Writer, cpp io.Writer, name string, kindTypeNam fmt.Fprintf(cpp, " out << \"EMPTY\";\n") fmt.Fprintf(cpp, " break;\n") fmt.Fprintf(cpp, " default:\n") - fmt.Fprintf(cpp, " throw EGGS_EXCEPTION(\"bad %s kind %%s\", x.kind());\n", kindTypeName) + fmt.Fprintf(cpp, " throw TERN_EXCEPTION(\"bad %s kind %%s\", x.kind());\n", kindTypeName) fmt.Fprintf(cpp, " }\n") fmt.Fprintf(cpp, " return out;\n") fmt.Fprintf(cpp, "}\n\n") @@ -1293,7 +1293,7 @@ func generateCppReqResp(hpp io.Writer, cpp io.Writer, what string, reqResps []re } generateCppContainer(hpp, cpp, what+"ReqContainer", what+"MessageKind", reqContainerTypes) respContainerTypes := make([]containerType, len(reqResps)+1) - var errType msgs.EggsError + var errType msgs.TernError respContainerTypes[0] = containerType{ name: "Error", enum: "ERROR", diff --git a/go/bincodegen/msgs_bincode.go.header b/go/bincodegen/msgs_bincode.go.header index 4879d375..9984cbc7 100644 --- a/go/bincodegen/msgs_bincode.go.header +++ b/go/bincodegen/msgs_bincode.go.header @@ -5,7 +5,7 @@ package msgs import ( "fmt" "io" - "xtx/eggsfs/bincode" + "xtx/ternfs/bincode" ) // This file specifies @@ -54,20 +54,20 @@ func TagToDirInfoEntry(tag DirectoryInfoTag) IsDirectoryInfoEntry { } } -func (err EggsError) Error() string { +func (err TernError) Error() string { return err.String() } -func (err *EggsError) Pack(w io.Writer) error { +func (err *TernError) Pack(w io.Writer) error { return bincode.PackScalar(w, uint16(*err)) } -func (errCode *EggsError) Unpack(r io.Reader) error { +func (errCode *TernError) Unpack(r io.Reader) error { var c uint16 if err := bincode.UnpackScalar(r, &c); err != nil { return err } - *errCode = EggsError(c) + *errCode = TernError(c) return nil } diff --git a/go/certificate/blockscert.go b/go/certificate/blockscert.go index 79366444..43314916 100644 --- a/go/certificate/blockscert.go +++ b/go/certificate/blockscert.go @@ -4,8 +4,8 @@ import ( "bytes" "crypto/cipher" "encoding/binary" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func BlockWriteCertificate(cipher cipher.Block, blockServiceId msgs.BlockServiceId, req *msgs.WriteBlockReq) [8]byte { diff --git a/go/cleanup/collectdirectories.go b/go/cleanup/collectdirectories.go index 6dca1078..068ea960 100644 --- 
a/go/cleanup/collectdirectories.go +++ b/go/cleanup/collectdirectories.go @@ -5,9 +5,9 @@ import ( "sync" "sync/atomic" "time" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type CollectDirectoriesStats struct { diff --git a/go/cleanup/defrag.go b/go/cleanup/defrag.go index e2067b96..f2b8f3ac 100644 --- a/go/cleanup/defrag.go +++ b/go/cleanup/defrag.go @@ -8,11 +8,11 @@ import ( "path" "sync/atomic" "time" - "xtx/eggsfs/cleanup/scratch" - "xtx/eggsfs/client" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/cleanup/scratch" + "xtx/ternfs/client" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type DefragStats struct { @@ -189,7 +189,7 @@ func DefragFile( type DefragOptions struct { WorkersPerShard int - StartFrom msgs.EggsTime + StartFrom msgs.TernTime MinSpanSize uint32 StorageClass msgs.StorageClass // EMPTY = no filter } @@ -207,7 +207,7 @@ func DefragFiles( timeStats := newTimeStats() return client.Parwalk( log, c, &client.ParwalkOptions{WorkersPerShard: options.WorkersPerShard}, root, - func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error { + func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error { if id.Type() == msgs.DIRECTORY { return nil } @@ -322,7 +322,7 @@ func DefragSpans( timeStats := newTimeStats() return client.Parwalk( log, c, &client.ParwalkOptions{WorkersPerShard: 5}, root, - func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error { + func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error { if id.Type() == msgs.DIRECTORY { return nil } diff --git a/go/cleanup/destructfiles.go b/go/cleanup/destructfiles.go index 8e35163a..bd017884 100644 --- a/go/cleanup/destructfiles.go +++ b/go/cleanup/destructfiles.go @@ -4,9 +4,9 @@ import ( "fmt" "sync" "sync/atomic" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type DestructFilesStats struct { @@ -37,7 +37,7 @@ func DestructFile( c *client.Client, stats *DestructFilesStats, id msgs.InodeId, - deadline msgs.EggsTime, + deadline msgs.TernTime, cookie [8]byte, ) error { log.Debug("%v: destructing file, cookie=%v", id, cookie) @@ -77,7 +77,7 @@ func DestructFile( var proof [8]byte for i := range initResp.Blocks { block := &initResp.Blocks[i] - if block.BlockServiceFlags.HasAny(msgs.EGGSFS_BLOCK_SERVICE_DECOMMISSIONED) { + if block.BlockServiceFlags.HasAny(msgs.TERNFS_BLOCK_SERVICE_DECOMMISSIONED) { proof, err = c.EraseDecommissionedBlock(block) if err != nil { return err @@ -86,7 +86,7 @@ func DestructFile( // There's no point trying to erase blocks for stale block services -- they're // almost certainly temporarly offline, and we'll be stuck forever since in GC we run // with infinite timeout. Just skip. 
- if block.BlockServiceFlags.HasAny(msgs.EGGSFS_BLOCK_SERVICE_STALE) { + if block.BlockServiceFlags.HasAny(msgs.TERNFS_BLOCK_SERVICE_STALE) { log.Debug("skipping block %v in file %v since its block service %v is stale", block.BlockId, id, block.BlockServiceId) couldNotReachBlockServices = append(couldNotReachBlockServices, block.BlockServiceId) continue @@ -133,7 +133,7 @@ func DestructFile( type destructFileRequest struct { id msgs.InodeId - deadline msgs.EggsTime + deadline msgs.TernTime cookie [8]byte } diff --git a/go/cleanup/migrate.go b/go/cleanup/migrate.go index 3cfd34a6..7586cb39 100644 --- a/go/cleanup/migrate.go +++ b/go/cleanup/migrate.go @@ -21,12 +21,12 @@ import ( "sync" "sync/atomic" "time" - "xtx/eggsfs/cleanup/scratch" - "xtx/eggsfs/client" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" - "xtx/eggsfs/rs" + "xtx/ternfs/cleanup/scratch" + "xtx/ternfs/client" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/rs" ) type MigrateStats struct { @@ -660,7 +660,7 @@ OUT: continue } - if bs.Flags.HasAny(msgs.EGGSFS_BLOCK_SERVICE_DECOMMISSIONED) && bs.HasFiles { + if bs.Flags.HasAny(msgs.TERNFS_BLOCK_SERVICE_DECOMMISSIONED) && bs.HasFiles { m.ScheduleBlockService(bs.Id) } else { m.blockServicesLock.Lock() diff --git a/go/cleanup/policy.go b/go/cleanup/policy.go index ba21554f..1c74adfe 100644 --- a/go/cleanup/policy.go +++ b/go/cleanup/policy.go @@ -3,7 +3,7 @@ package cleanup import ( "fmt" "time" - "xtx/eggsfs/msgs" + "xtx/ternfs/msgs" ) // Returns how many edges to remove according to the policy (as a prefix of the input). @@ -14,7 +14,7 @@ import ( // It is assumed that every delete in the input will be be preceeded by a non-delete. // // If it returns N, edges[N:] will be well formed too in the sense above. 
-func edgesToRemove(dir msgs.InodeId, policy *msgs.SnapshotPolicy, now msgs.EggsTime, edges []msgs.Edge, minEdgeAge time.Duration) int { +func edgesToRemove(dir msgs.InodeId, policy *msgs.SnapshotPolicy, now msgs.TernTime, edges []msgs.Edge, minEdgeAge time.Duration) int { if len(edges) == 0 { return 0 } diff --git a/go/cleanup/policy_test.go b/go/cleanup/policy_test.go index de00c648..3abd0dde 100644 --- a/go/cleanup/policy_test.go +++ b/go/cleanup/policy_test.go @@ -3,16 +3,16 @@ package cleanup import ( "testing" "time" - "xtx/eggsfs/assert" - "xtx/eggsfs/msgs" + "xtx/ternfs/assert" + "xtx/ternfs/msgs" ) func inodeId(id uint64, extra bool) msgs.InodeIdExtra { return msgs.MakeInodeIdExtra(msgs.MakeInodeId(msgs.FILE, 0, id), extra) } -func date(day int) msgs.EggsTime { - return msgs.MakeEggsTime(time.Date(2021, time.January, day, 0, 0, 0, 0, time.UTC)) +func date(day int) msgs.TernTime { + return msgs.MakeTernTime(time.Date(2021, time.January, day, 0, 0, 0, 0, time.UTC)) } func TestDeleteAll(t *testing.T) { diff --git a/go/cleanup/scratch/scratch.go b/go/cleanup/scratch/scratch.go index 1393cdba..51983cb9 100644 --- a/go/cleanup/scratch/scratch.go +++ b/go/cleanup/scratch/scratch.go @@ -4,9 +4,9 @@ import ( "fmt" "sync" "time" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type ScratchFile interface { @@ -156,7 +156,7 @@ func (s *scratchFile) Lock() (*lockedScratchFile, error) { s.id = resp.Id s.cookie = resp.Cookie s.size = 0 - s.deadline = msgs.MakeEggsTime(time.Now().Add(3 * time.Hour)) + s.deadline = msgs.MakeTernTime(time.Now().Add(3 * time.Hour)) } return &lockedScratchFile{s, true}, nil @@ -193,7 +193,7 @@ type scratchFile struct { clearOnUnlock bool clearReason string - deadline msgs.EggsTime + deadline msgs.TernTime done chan struct{} mu sync.Mutex diff --git a/go/cleanup/scrub.go b/go/cleanup/scrub.go index c5b0261d..7e6dd797 100644 --- a/go/cleanup/scrub.go +++ b/go/cleanup/scrub.go @@ -5,10 +5,10 @@ import ( "sync" "sync/atomic" "time" - "xtx/eggsfs/cleanup/scratch" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/cleanup/scratch" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type ScrubState struct { @@ -102,7 +102,7 @@ func scrubWorker( rateLimit.Acquire() } atomic.StoreUint64(&stats.WorkersQueuesSize[shid], uint64(len(workerChan))) - if req.blockService.Flags.HasAny(msgs.EGGSFS_BLOCK_SERVICE_DECOMMISSIONED) { + if req.blockService.Flags.HasAny(msgs.TERNFS_BLOCK_SERVICE_DECOMMISSIONED) { atomic.AddUint64(&stats.DecommissionedBlocks, 1) continue } diff --git a/go/cleanup/zeroblockservicefiles.go b/go/cleanup/zeroblockservicefiles.go index 730541de..3a22523f 100644 --- a/go/cleanup/zeroblockservicefiles.go +++ b/go/cleanup/zeroblockservicefiles.go @@ -3,9 +3,9 @@ package cleanup import ( "fmt" "sync/atomic" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type ZeroBlockServiceFilesStats struct { diff --git a/go/client/blocksreq.go b/go/client/blocksreq.go index ae83fe3b..45f0a2ba 100644 --- a/go/client/blocksreq.go +++ b/go/client/blocksreq.go @@ -6,8 +6,8 @@ import ( "io" "math/rand" "net" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) // A low-level utility for directly communication with block services. 
@@ -88,7 +88,7 @@ func readBlocksResponse( log.Info("could not read error: %v", err) return err } - return msgs.EggsError(err) + return msgs.TernError(err) } kind := msgs.BlocksMessageKind(kindByte[0]) if kind != resp.BlocksResponseKind() { diff --git a/go/client/cdcreq.go b/go/client/cdcreq.go index 7ad03a1d..fd2588d6 100644 --- a/go/client/cdcreq.go +++ b/go/client/cdcreq.go @@ -3,8 +3,8 @@ package client import ( "fmt" "net" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func (c *Client) checkRepeatedCDCRequestError( @@ -12,8 +12,8 @@ // these are already filled in by now reqBody msgs.CDCRequest, resp msgs.CDCResponse, - respErr msgs.EggsError, -) msgs.EggsError { + respErr msgs.TernError, +) msgs.TernError { switch reqBody := reqBody.(type) { case *msgs.RenameDirectoryReq: // We repeat the request, but the previous had actually gone through: diff --git a/go/client/client.go b/go/client/client.go index c165a599..8c3eac9b 100644 --- a/go/client/client.go +++ b/go/client/client.go @@ -1,8 +1,8 @@ -// The client package provides a interface to the eggsfs cluster that should be +// The client package provides an interface to the ternfs cluster that should be // sufficient for most user level operations (modifying metadata, reading and // writing blocks, etc). // -// To use this library you still require an understanding of the way eggsfs +// To use this library you still require an understanding of the way ternfs // operations, and you will still need to construct the request and reply // messages (defined in the [msgs] package), but all service discovery and // network communication is handled for you. @@ -25,10 +25,10 @@ import ( "syscall" "time" "unsafe" - "xtx/eggsfs/bincode" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/bincode" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type ReqCounters struct { @@ -396,7 +396,7 @@ func (cm *clientMetadata) parseResponse(log *lib.Logger, req *metadataProcessorR log.RaiseAlert("bad error response length %v, expected %v", rawResp.respLen, 4+8+1+2) err = msgs.MALFORMED_RESPONSE } else { - err = msgs.EggsError(binary.LittleEndian.Uint16((*rawResp.buf)[4+8+1:])) + err = msgs.TernError(binary.LittleEndian.Uint16((*rawResp.buf)[4+8+1:])) } req.respCh <- &metadataProcessorResponse{ requestId: req.requestId, @@ -816,13 +816,13 @@ func (proc *blocksProcessor) processResponses(log *lib.Logger) { } if resp.resp.resp.BlocksResponseKind() == msgs.FETCH_BLOCK_WITH_CRC { req := resp.resp.req.(*msgs.FetchBlockWithCrcReq) - pageCount := (req.Count / msgs.EGGS_PAGE_SIZE) + pageCount := (req.Count / msgs.TERN_PAGE_SIZE) log.Debug("reading block body from %v->%v", connr.LocalAddr(), connr.RemoteAddr()) - var page [msgs.EGGS_PAGE_WITH_CRC_SIZE]byte + var page [msgs.TERN_PAGE_WITH_CRC_SIZE]byte pageFailed := false for i := uint32(0); i < pageCount; i++ { bytesRead := uint32(0) - for bytesRead < msgs.EGGS_PAGE_WITH_CRC_SIZE { + for bytesRead < msgs.TERN_PAGE_WITH_CRC_SIZE { read, err := connr.Read(page[bytesRead:]) if err != nil { resp.resp.done(log, &proc.addr1, &proc.addr2, resp.resp.extra, err) @@ -834,16 +834,16 @@ if pageFailed { break } - crc := binary.LittleEndian.Uint32(page[msgs.EGGS_PAGE_SIZE:]) - actualCrc := crc32c.Sum(0, page[:msgs.EGGS_PAGE_SIZE]) + crc := binary.LittleEndian.Uint32(page[msgs.TERN_PAGE_SIZE:]) + actualCrc := crc32c.Sum(0, page[:msgs.TERN_PAGE_SIZE]) if 
crc != actualCrc { resp.resp.done(log, &proc.addr1, &proc.addr2, resp.resp.extra, msgs.BAD_BLOCK_CRC) pageFailed = true break } - readBytes, err := resp.resp.additionalBodyWriter.ReadFrom(bytes.NewReader(page[:msgs.EGGS_PAGE_SIZE])) - if err != nil || uint32(readBytes) < msgs.EGGS_PAGE_SIZE { + readBytes, err := resp.resp.additionalBodyWriter.ReadFrom(bytes.NewReader(page[:msgs.TERN_PAGE_SIZE])) + if err != nil || uint32(readBytes) < msgs.TERN_PAGE_SIZE { if err == nil { err = io.EOF } @@ -1341,7 +1341,7 @@ TraverseDirectories: } // High-level helper function to take a string path and return the inode and parent inode -func (c *Client) ResolvePathWithParent(log *lib.Logger, path string) (id msgs.InodeId, creationTime msgs.EggsTime, parent msgs.InodeId, err error) { +func (c *Client) ResolvePathWithParent(log *lib.Logger, path string) (id msgs.InodeId, creationTime msgs.TernTime, parent msgs.InodeId, err error) { if !filepath.IsAbs(path) { return msgs.NULL_INODE_ID, 0, msgs.NULL_INODE_ID, fmt.Errorf("expected absolute path, got '%v'", path) } diff --git a/go/client/dirinfocache.go b/go/client/dirinfocache.go index 938f82e0..8ce3e147 100644 --- a/go/client/dirinfocache.go +++ b/go/client/dirinfocache.go @@ -8,7 +8,7 @@ import ( "sync/atomic" "time" "unsafe" - "xtx/eggsfs/msgs" + "xtx/ternfs/msgs" ) type dirInfoKey struct { @@ -29,7 +29,7 @@ type dirInfoCacheLruSlot struct { type dirInfoCacheInheritedFrom struct { id msgs.InodeId - cachedAt msgs.EggsTime + cachedAt msgs.TernTime lruSlot int32 } @@ -110,7 +110,7 @@ type dirInfoCacheBucket struct { } type cachedDirInfoEntry struct { - cachedAt msgs.EggsTime + cachedAt msgs.TernTime entry msgs.IsDirectoryInfoEntry packed [256]byte // for easy comparison } diff --git a/go/client/dirinfocache_test.go b/go/client/dirinfocache_test.go index 5b8abb15..1bdf189c 100644 --- a/go/client/dirinfocache_test.go +++ b/go/client/dirinfocache_test.go @@ -2,8 +2,8 @@ package client import ( "testing" - "xtx/eggsfs/assert" - "xtx/eggsfs/msgs" + "xtx/ternfs/assert" + "xtx/ternfs/msgs" ) func TestLRU(t *testing.T) { diff --git a/go/client/metadatareq.go b/go/client/metadatareq.go index 3d8ec7ba..b6589d03 100644 --- a/go/client/metadatareq.go +++ b/go/client/metadatareq.go @@ -3,9 +3,9 @@ package client import ( "sync/atomic" "time" - "xtx/eggsfs/bincode" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/bincode" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) // Starts from 1, we use 0 as a placeholder in `requestIds` @@ -71,23 +71,23 @@ func (c *Client) metadataRequest( counters.Timings.Add(elapsed) } // If we're past the first attempt, there are cases where errors are not what they seem. - var eggsError msgs.EggsError + var ternError msgs.TernError if resp.err != nil { - var isEggsError bool - eggsError, isEggsError = resp.err.(msgs.EggsError) - if isEggsError && attempts > 0 { + var isTernError bool + ternError, isTernError = resp.err.(msgs.TernError) + if isTernError && attempts > 0 { if shid >= 0 { - eggsError = c.checkRepeatedShardRequestError(log, reqBody.(msgs.ShardRequest), respBody.(msgs.ShardResponse), eggsError) + ternError = c.checkRepeatedShardRequestError(log, reqBody.(msgs.ShardRequest), respBody.(msgs.ShardResponse), ternError) } else { - eggsError = c.checkRepeatedCDCRequestError(log, reqBody.(msgs.CDCRequest), respBody.(msgs.CDCResponse), eggsError) + ternError = c.checkRepeatedCDCRequestError(log, reqBody.(msgs.CDCRequest), respBody.(msgs.CDCResponse), ternError) } } } // Check if it's an error or not. 
We only use debug here because some errors are legitimate // responses (e.g. FILE_EMPTY) - if eggsError != 0 { - log.DebugStack(1, "got error %v for req %T id %v from shard %v (took %v)", eggsError, reqBody, requestId, shid, elapsed) - return eggsError + if ternError != 0 { + log.DebugStack(1, "got error %v for req %T id %v from shard %v (took %v)", ternError, reqBody, requestId, shid, elapsed) + return ternError } log.Debug("got response %T from shard %v (took %v)", respBody, shid, elapsed) log.Trace("respBody %+v", respBody) diff --git a/go/client/parwalk.go b/go/client/parwalk.go index ffa4372b..1e1f1770 100644 --- a/go/client/parwalk.go +++ b/go/client/parwalk.go @@ -8,8 +8,8 @@ import ( "fmt" "path" "sync" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type parwarlkReq struct { @@ -22,7 +22,7 @@ type parwalkEnv struct { chans []chan parwarlkReq client *Client snapshot bool - callback func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error + callback func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error } func (env *parwalkEnv) visit( @@ -31,7 +31,7 @@ func (env *parwalkEnv) visit( parent msgs.InodeId, parentPath string, name string, - creationTime msgs.EggsTime, + creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool, @@ -136,7 +136,7 @@ func Parwalk( client *Client, options *ParwalkOptions, root string, - callback func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error, + callback func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error, ) error { if options.WorkersPerShard < 1 { panic(fmt.Errorf("workersPerShard=%d < 1", options.WorkersPerShard)) diff --git a/go/client/shardreq.go b/go/client/shardreq.go index 71f9fa6e..0aafca1c 100644 --- a/go/client/shardreq.go +++ b/go/client/shardreq.go @@ -3,8 +3,8 @@ package client import ( "fmt" "net" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func (c *Client) checkDeletedEdge( @@ -12,7 +12,7 @@ func (c *Client) checkDeletedEdge( dirId msgs.InodeId, targetId msgs.InodeId, name string, - creationTime msgs.EggsTime, + creationTime msgs.TernTime, owned bool, ) bool { // First we check the edge we expect to have moved away @@ -50,7 +50,7 @@ func (c *Client) checkNewEdgeAfterRename( dirId msgs.InodeId, targetId msgs.InodeId, name string, - creationTime *msgs.EggsTime, + creationTime *msgs.TernTime, ) bool { // Then we check the target edge lookupResp := msgs.LookupResp{} @@ -72,8 +72,8 @@ func (c *Client) checkRepeatedShardRequestError( // these are already filled in by now reqBody msgs.ShardRequest, resp msgs.ShardResponse, - respErr msgs.EggsError, -) msgs.EggsError { + respErr msgs.TernError, +) msgs.TernError { switch reqBody := reqBody.(type) { case *msgs.SameDirectoryRenameReq: if respErr == msgs.EDGE_NOT_FOUND { diff --git a/go/client/shucklereq.go b/go/client/shucklereq.go index 4cc509e8..b5e7f88a 100644 --- a/go/client/shucklereq.go +++ b/go/client/shucklereq.go @@ -8,9 +8,9 @@ import ( "os" "syscall" "time" - "xtx/eggsfs/bincode" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/bincode" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func writeShuckleRequest(log *lib.Logger, w io.Writer, req msgs.ShuckleRequest) error { @@ -58,7 +58,7 @@ func 
readShuckleResponse( if err := binary.Read(r, binary.LittleEndian, &err); err != nil { return nil, fmt.Errorf("could not read error: %w", err) } - return nil, msgs.EggsError(err) + return nil, msgs.TernError(err) } kind := msgs.ShuckleMessageKind(data[0]) var resp msgs.ShuckleResponse @@ -244,7 +244,7 @@ func (c *ShuckleConn) requestHandler() { conn.SetReadDeadline(reqDeadline) resp, err := readShuckleResponse(c.log, conn) if err != nil { - if _, isEggsErr := err.(msgs.EggsError); !isEggsErr { + if _, isTernErr := err.(msgs.TernError); !isTernErr { conn.Close() conn = nil if netErr, ok := err.(net.Error); ok && netErr.Timeout() { diff --git a/go/client/span.go b/go/client/span.go index 45a09579..f8295cf5 100644 --- a/go/client/span.go +++ b/go/client/span.go @@ -8,10 +8,10 @@ import ( "path/filepath" "sort" "sync" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" - "xtx/eggsfs/rs" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/rs" ) type blockReader struct { @@ -94,8 +94,6 @@ func ensureLen(buf *[]byte, l int) { } } -const eggsFsPageSize int = 4096 - type SpanParameters struct { Parity rs.Parity StorageClass msgs.StorageClass @@ -116,7 +114,7 @@ func ComputeSpanParameters( blockSize := (int(spanSize) + D - 1) / D cellSize := (blockSize + S - 1) / S // Round up cell to page size - cellSize = eggsFsPageSize * ((cellSize + eggsFsPageSize - 1) / eggsFsPageSize) + cellSize = int(msgs.TERN_PAGE_SIZE) * ((cellSize + int(msgs.TERN_PAGE_SIZE) - 1) / int(msgs.TERN_PAGE_SIZE)) blockSize = cellSize * S storageClass := blockPolicies.Pick(uint32(blockSize)).StorageClass return &SpanParameters{ diff --git a/go/client/waitshuckle.go b/go/client/waitshuckle.go index e8cc7871..76df0d3d 100644 --- a/go/client/waitshuckle.go +++ b/go/client/waitshuckle.go @@ -3,8 +3,8 @@ package client import ( "fmt" "time" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func WaitForBlockServices(ll *lib.Logger, shuckleAddress string, expectedBlockServices int, waitCurrentServicesCalcuation bool, timeout time.Duration) []msgs.BlockServiceDeprecatedInfo { diff --git a/go/crc32c/crc32c_test.go b/go/crc32c/crc32c_test.go index f78a4eb4..c167cc22 100644 --- a/go/crc32c/crc32c_test.go +++ b/go/crc32c/crc32c_test.go @@ -3,7 +3,7 @@ package crc32c import ( "math/rand" "testing" - "xtx/eggsfs/assert" + "xtx/ternfs/assert" ) func TestBasic(t *testing.T) { diff --git a/go/crc32csum/crc32csum.go b/go/crc32csum/crc32csum.go index 18fceffb..c12be1e5 100644 --- a/go/crc32csum/crc32csum.go +++ b/go/crc32csum/crc32csum.go @@ -4,8 +4,8 @@ import ( "fmt" "io" "os" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/msgs" + "xtx/ternfs/crc32c" + "xtx/ternfs/msgs" ) var buf []byte diff --git a/go/go.mod b/go/go.mod index ff89b8b3..fec4f55d 100644 --- a/go/go.mod +++ b/go/go.mod @@ -1,4 +1,4 @@ -module xtx/eggsfs +module xtx/ternfs go 1.22 diff --git a/go/lib/blocks.go b/go/lib/blocks.go index 4691609d..63636c60 100644 --- a/go/lib/blocks.go +++ b/go/lib/blocks.go @@ -2,7 +2,7 @@ package lib import ( "encoding/binary" - "xtx/eggsfs/msgs" + "xtx/ternfs/msgs" ) func BlockServiceIdFromKey(secretKey [16]byte) msgs.BlockServiceId { diff --git a/go/lib/cbcmac_test.go b/go/lib/cbcmac_test.go index 680dc7fb..b23a5189 100644 --- a/go/lib/cbcmac_test.go +++ b/go/lib/cbcmac_test.go @@ -3,7 +3,7 @@ package lib import ( "crypto/aes" "testing" - "xtx/eggsfs/assert" + "xtx/ternfs/assert" ) // Sanity check to ensure that the block cipher we're using is the correct one diff --git a/go/lib/log.go 
b/go/lib/log.go index c2185b0c..a71a7177 100644 --- a/go/lib/log.go +++ b/go/lib/log.go @@ -306,7 +306,7 @@ func (l *Logger) RaiseHardwareEvent(failureDomain string, blockServiceID string, Hostname: fmt.Sprintf("%REDACTED", failureDomain), Timestamp: time.Now(), Component: DiskComponent, - Location: "EggsFS", + Location: "TernFS", Message: string(msgData), } err = l.heClient.SendHardwareEvent(evt) diff --git a/go/lib/timeouts.go b/go/lib/timeouts.go index 652c5c85..7e9a55f5 100644 --- a/go/lib/timeouts.go +++ b/go/lib/timeouts.go @@ -3,7 +3,7 @@ package lib import ( "fmt" "time" - "xtx/eggsfs/wyhash" + "xtx/ternfs/wyhash" ) type ReqTimeouts struct { diff --git a/go/lib/timings_test.go b/go/lib/timings_test.go index 8d0dcf18..5cc9fd26 100644 --- a/go/lib/timings_test.go +++ b/go/lib/timings_test.go @@ -4,7 +4,7 @@ import ( "math" "testing" "time" - "xtx/eggsfs/assert" + "xtx/ternfs/assert" ) func TestTimingsBins(t *testing.T) { diff --git a/go/managedprocess/managedprocess.go b/go/managedprocess/managedprocess.go index 88c4c59f..728e9aba 100644 --- a/go/managedprocess/managedprocess.go +++ b/go/managedprocess/managedprocess.go @@ -17,8 +17,8 @@ import ( "sync/atomic" "syscall" "time" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) func goDir(repoDir string) string { @@ -371,7 +371,7 @@ func (procs *ManagedProcesses) StartFuse(ll *lib.Logger, opts *FuseOpts) string } args = append(args, mountPoint) procs.Start(ll, &ManagedProcessArgs{ - Name: "eggsfuse", + Name: "ternfuse", Exe: opts.Exe, Args: args, StdoutFile: path.Join(opts.Path, "stdout"), @@ -379,7 +379,7 @@ func (procs *ManagedProcesses) StartFuse(ll *lib.Logger, opts *FuseOpts) string TerminateOnExit: true, }) if opts.Wait { - ll.Info("waiting for eggsfuse") + ll.Info("waiting for ternfuse") <-signalChan signal.Stop(signalChan) } @@ -483,7 +483,7 @@ type GoExes struct { } func BuildGoExes(ll *lib.Logger, repoDir string, race bool) *GoExes { - args := []string{"eggsshuckle", "eggsblocks", "eggsfuse"} + args := []string{"ternshuckle", "ternblocks", "ternfuse"} if race { args = append(args, "--race") } @@ -496,10 +496,10 @@ func BuildGoExes(ll *lib.Logger, repoDir string, race bool) *GoExes { panic(fmt.Errorf("could not build shucke/blocks/fuse: %w", err)) } return &GoExes{ - ShuckleExe: path.Join(goDir(repoDir), "eggsshuckle", "eggsshuckle"), - BlocksExe: path.Join(goDir(repoDir), "eggsblocks", "eggsblocks"), - FuseExe: path.Join(goDir(repoDir), "eggsfuse", "eggsfuse"), - ShuckleProxyExe: path.Join(goDir(repoDir), "eggsshuckleproxy", "eggsshuckleproxy"), + ShuckleExe: path.Join(goDir(repoDir), "ternshuckle", "ternshuckle"), + BlocksExe: path.Join(goDir(repoDir), "ternblocks", "ternblocks"), + FuseExe: path.Join(goDir(repoDir), "ternfuse", "ternfuse"), + ShuckleProxyExe: path.Join(goDir(repoDir), "ternshuckleproxy", "ternshuckleproxy"), } } @@ -716,10 +716,10 @@ type CppExes struct { } func BuildCppExes(ll *lib.Logger, repoDir string, buildType string) *CppExes { - buildDir := buildCpp(ll, repoDir, buildType, []string{"shard/eggsshard", "cdc/eggscdc", "dbtools/eggsdbtools"}) + buildDir := buildCpp(ll, repoDir, buildType, []string{"shard/ternshard", "cdc/terncdc", "dbtools/terndbtools"}) return &CppExes{ - ShardExe: path.Join(buildDir, "shard/eggsshard"), - CDCExe: path.Join(buildDir, "cdc/eggscdc"), - DBToolsExe: path.Join(buildDir, "dbtools/eggsdbtools"), + ShardExe: path.Join(buildDir, "shard/ternshard"), + CDCExe: path.Join(buildDir, "cdc/terncdc"), + DBToolsExe: path.Join(buildDir, 
"dbtools/terndbtools"), } } diff --git a/go/msgs/msgs.go b/go/msgs/msgs.go index e2bd2650..4a894291 100644 --- a/go/msgs/msgs.go +++ b/go/msgs/msgs.go @@ -12,8 +12,8 @@ import ( "strconv" "strings" "time" - "xtx/eggsfs/bincode" - "xtx/eggsfs/rs" + "xtx/ternfs/bincode" + "xtx/ternfs/rs" ) //go:generate go run ../bincodegen @@ -31,7 +31,7 @@ type ShardId uint8 type ReplicaId uint8 type ShardReplicaId uint16 type StorageClass uint8 -type EggsTime uint64 +type TernTime uint64 type BlockId uint64 type BlockServiceId uint64 type BlockServiceFlags uint8 @@ -92,31 +92,31 @@ const LOG_RESP_PROTOCOL_VERSION uint32 = 0x1474f4c const ERROR_KIND uint8 = 0 const ( - EGGSFS_BLOCK_SERVICE_EMPTY BlockServiceFlags = 0x0 - EGGSFS_BLOCK_SERVICE_STALE BlockServiceFlags = 0x1 - EGGSFS_BLOCK_SERVICE_NO_READ BlockServiceFlags = 0x2 - EGGSFS_BLOCK_SERVICE_NO_WRITE BlockServiceFlags = 0x4 - EGGSFS_BLOCK_SERVICE_DECOMMISSIONED BlockServiceFlags = 0x8 - EGGSFS_BLOCK_SERVICE_MASK_ALL = 0xf + TERNFS_BLOCK_SERVICE_EMPTY BlockServiceFlags = 0x0 + TERNFS_BLOCK_SERVICE_STALE BlockServiceFlags = 0x1 + TERNFS_BLOCK_SERVICE_NO_READ BlockServiceFlags = 0x2 + TERNFS_BLOCK_SERVICE_NO_WRITE BlockServiceFlags = 0x4 + TERNFS_BLOCK_SERVICE_DECOMMISSIONED BlockServiceFlags = 0x8 + TERNFS_BLOCK_SERVICE_MASK_ALL = 0xf ) const ( - EGGS_PAGE_SIZE uint32 = 1 << 12 // 4KB - EGGS_PAGE_WITH_CRC_SIZE uint32 = EGGS_PAGE_SIZE + 4 + TERN_PAGE_SIZE uint32 = 1 << 12 // 4KB + TERN_PAGE_WITH_CRC_SIZE uint32 = TERN_PAGE_SIZE + 4 ) func BlockServiceFlagFromName(n string) (BlockServiceFlags, error) { switch n { case "0": - return EGGSFS_BLOCK_SERVICE_EMPTY, nil + return TERNFS_BLOCK_SERVICE_EMPTY, nil case "STALE": - return EGGSFS_BLOCK_SERVICE_STALE, nil + return TERNFS_BLOCK_SERVICE_STALE, nil case "NO_READ": - return EGGSFS_BLOCK_SERVICE_NO_READ, nil + return TERNFS_BLOCK_SERVICE_NO_READ, nil case "NO_WRITE": - return EGGSFS_BLOCK_SERVICE_NO_WRITE, nil + return TERNFS_BLOCK_SERVICE_NO_WRITE, nil case "DECOMMISSIONED": - return EGGSFS_BLOCK_SERVICE_DECOMMISSIONED, nil + return TERNFS_BLOCK_SERVICE_DECOMMISSIONED, nil default: panic(fmt.Errorf("unknown blockservice flag %q", n)) } @@ -143,16 +143,16 @@ func (flags BlockServiceFlags) String() string { return "0" } var ret []string - if flags&EGGSFS_BLOCK_SERVICE_STALE != 0 { + if flags&TERNFS_BLOCK_SERVICE_STALE != 0 { ret = append(ret, "STALE") } - if flags&EGGSFS_BLOCK_SERVICE_NO_READ != 0 { + if flags&TERNFS_BLOCK_SERVICE_NO_READ != 0 { ret = append(ret, "NO_READ") } - if flags&EGGSFS_BLOCK_SERVICE_NO_WRITE != 0 { + if flags&TERNFS_BLOCK_SERVICE_NO_WRITE != 0 { ret = append(ret, "NO_WRITE") } - if flags&EGGSFS_BLOCK_SERVICE_DECOMMISSIONED != 0 { + if flags&TERNFS_BLOCK_SERVICE_DECOMMISSIONED != 0 { ret = append(ret, "DECOMMISSIONED") } return strings.Join(ret, "|") @@ -163,27 +163,27 @@ func (flags BlockServiceFlags) ShortString() string { return "0" } var ret []string - if flags&EGGSFS_BLOCK_SERVICE_STALE != 0 { + if flags&TERNFS_BLOCK_SERVICE_STALE != 0 { ret = append(ret, "S") } - if flags&EGGSFS_BLOCK_SERVICE_NO_READ != 0 { + if flags&TERNFS_BLOCK_SERVICE_NO_READ != 0 { ret = append(ret, "NR") } - if flags&EGGSFS_BLOCK_SERVICE_NO_WRITE != 0 { + if flags&TERNFS_BLOCK_SERVICE_NO_WRITE != 0 { ret = append(ret, "NW") } - if flags&EGGSFS_BLOCK_SERVICE_DECOMMISSIONED != 0 { + if flags&TERNFS_BLOCK_SERVICE_DECOMMISSIONED != 0 { ret = append(ret, "D") } return strings.Join(ret, "|") } func (flags BlockServiceFlags) CanRead() bool { - return (flags & (EGGSFS_BLOCK_SERVICE_STALE | 
EGGSFS_BLOCK_SERVICE_NO_READ | EGGSFS_BLOCK_SERVICE_DECOMMISSIONED)) == 0 + return (flags & (TERNFS_BLOCK_SERVICE_STALE | TERNFS_BLOCK_SERVICE_NO_READ | TERNFS_BLOCK_SERVICE_DECOMMISSIONED)) == 0 } func (flags BlockServiceFlags) CanWrite() bool { - return (flags & (EGGSFS_BLOCK_SERVICE_STALE | EGGSFS_BLOCK_SERVICE_NO_WRITE | EGGSFS_BLOCK_SERVICE_DECOMMISSIONED)) == 0 + return (flags & (TERNFS_BLOCK_SERVICE_STALE | TERNFS_BLOCK_SERVICE_NO_WRITE | TERNFS_BLOCK_SERVICE_DECOMMISSIONED)) == 0 } const ( @@ -481,27 +481,27 @@ const ( ROOT_DIR_INODE_ID = InodeId(DIRECTORY) << 61 ) -func MakeEggsTime(t time.Time) EggsTime { - return EggsTime(uint64(t.UnixNano())) +func MakeTernTime(t time.Time) TernTime { + return TernTime(uint64(t.UnixNano())) } -func Now() EggsTime { - return MakeEggsTime(time.Now()) +func Now() TernTime { + return MakeTernTime(time.Now()) } -func (t EggsTime) Time() time.Time { +func (t TernTime) Time() time.Time { return time.Unix(0, int64(uint64(t))) } -func (t EggsTime) String() string { +func (t TernTime) String() string { return t.Time().Format(time.RFC3339Nano) } -func (t EggsTime) MarshalJSON() ([]byte, error) { +func (t TernTime) MarshalJSON() ([]byte, error) { return []byte(fmt.Sprintf("%q", t.String())), nil } -func (t *EggsTime) UnmarshalJSON(b []byte) error { +func (t *TernTime) UnmarshalJSON(b []byte) error { var ts string if err := json.Unmarshal(b, &ts); err != nil { return err @@ -510,7 +510,7 @@ func (t *EggsTime) UnmarshalJSON(b []byte) error { if err != nil { return err } - *t = EggsTime(tt.UnixNano()) + *t = TernTime(tt.UnixNano()) return nil } @@ -547,7 +547,7 @@ func (i *IpPort) UnmarshalJSON(b []byte) error { return nil } -type EggsError uint16 +type TernError uint16 type ShardMessageKind uint8 @@ -630,7 +630,7 @@ type LookupReq struct { type LookupResp struct { TargetId InodeId - CreationTime EggsTime + CreationTime TernTime } // Does not consider transient files. Might return snapshot files: @@ -641,8 +641,8 @@ type StatFileReq struct { } type StatFileResp struct { - Mtime EggsTime - Atime EggsTime + Mtime TernTime + Atime TernTime Size uint64 } @@ -651,7 +651,7 @@ type StatTransientFileReq struct { } type StatTransientFileResp struct { - Mtime EggsTime + Mtime TernTime Size uint64 Note string } @@ -679,7 +679,7 @@ type DirectoryInfo struct { } type StatDirectoryResp struct { - Mtime EggsTime + Mtime TernTime Owner InodeId // if NULL_INODE_ID, the directory is currently snapshot Info DirectoryInfo } @@ -698,7 +698,7 @@ type CurrentEdge struct { TargetId InodeId NameHash NameHash Name string - CreationTime EggsTime + CreationTime TernTime } // Names with the same hash will never straddle two `ReadDirResp`s, assuming the @@ -909,7 +909,7 @@ type LinkFileReq struct { } type LinkFileResp struct { - CreationTime EggsTime + CreationTime TernTime } // turns a current outgoing edge into a snapshot owning edge. @@ -919,11 +919,11 @@ type SoftUnlinkFileReq struct { Name string // See comment in `SameDirectoryRenameReq` for an idication of why // we have this here even if it's not strictly needed. 
- CreationTime EggsTime + CreationTime TernTime } type SoftUnlinkFileResp struct { - DeleteCreationTime EggsTime // the creation time of the newly created delete edge + DeleteCreationTime TernTime // the creation time of the newly created delete edge } // Starts from the first span with byte offset <= than the provided @@ -1059,12 +1059,12 @@ type SameDirectoryRenameReq struct { // don't strictly needed because current edges are uniquely // identified by name) so that the shard can implement heuristics // to let likely repeated calls through in the name of idempotency. - OldCreationTime EggsTime + OldCreationTime TernTime NewName string } type SameDirectoryRenameResp struct { - NewCreationTime EggsTime + NewCreationTime TernTime } // This is exactly like `SameDirectoryRenameReq`, but it expects @@ -1074,12 +1074,12 @@ type SameDirectoryRenameSnapshotReq struct { TargetId InodeId DirId InodeId OldName string - OldCreationTime EggsTime + OldCreationTime TernTime NewName string } type SameDirectoryRenameSnapshotResp struct { - NewCreationTime EggsTime + NewCreationTime TernTime } type VisitDirectoriesReq struct { @@ -1115,7 +1115,7 @@ type VisitTransientFilesReq struct { type TransientFile struct { Id InodeId Cookie Cookie - DeadlineTime EggsTime + DeadlineTime TernTime } // Shall this be unsafe/private? We can freely get the cookie. @@ -1147,7 +1147,7 @@ type FullReadDirReq struct { DirId InodeId Flags FullReadDirFlags StartName string - StartTime EggsTime + StartTime TernTime Limit uint16 Mtu uint16 } @@ -1163,7 +1163,7 @@ type Edge struct { TargetId InodeIdExtra NameHash NameHash Name string - CreationTime EggsTime + CreationTime TernTime } func (e *Edge) Owned() bool { @@ -1175,7 +1175,7 @@ type FullReadDirCursor struct { // remember in which section we are. Current bool StartName string - StartTime EggsTime + StartTime TernTime } type FullReadDirResp struct { @@ -1194,7 +1194,7 @@ type CreateDirectoryInodeReq struct { } type CreateDirectoryInodeResp struct { - Mtime EggsTime + Mtime TernTime } // This is needed to move directories -- but it can break the invariants @@ -1257,17 +1257,17 @@ type CreateLockedCurrentEdgeReq struct { DirId InodeId Name string TargetId InodeId - OldCreationTime EggsTime + OldCreationTime TernTime } type CreateLockedCurrentEdgeResp struct { - CreationTime EggsTime + CreationTime TernTime } type LockCurrentEdgeReq struct { DirId InodeId TargetId InodeId - CreationTime EggsTime + CreationTime TernTime Name string } @@ -1277,7 +1277,7 @@ type LockCurrentEdgeResp struct{} type UnlockCurrentEdgeReq struct { DirId InodeId Name string - CreationTime EggsTime + CreationTime TernTime TargetId InodeId // Turn the current edge into a snapshot edge, and create a deletion // edge with the same name. 
@@ -1292,7 +1292,7 @@ type RemoveNonOwnedEdgeReq struct { DirId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type RemoveNonOwnedEdgeResp struct{} @@ -1303,7 +1303,7 @@ type SameShardHardFileUnlinkReq struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type SameShardHardFileUnlinkResp struct{} @@ -1314,7 +1314,7 @@ type RemoveOwnedSnapshotFileEdgeReq struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type RemoveOwnedSnapshotFileEdgeResp struct{} @@ -1445,26 +1445,26 @@ type MakeDirectoryReq struct { type MakeDirectoryResp struct { Id InodeId - CreationTime EggsTime + CreationTime TernTime } type RenameFileReq struct { TargetId InodeId OldOwnerId InodeId OldName string - OldCreationTime EggsTime + OldCreationTime TernTime NewOwnerId InodeId NewName string } type RenameFileResp struct { - CreationTime EggsTime + CreationTime TernTime } type SoftUnlinkDirectoryReq struct { OwnerId InodeId TargetId InodeId - CreationTime EggsTime + CreationTime TernTime Name string } @@ -1474,13 +1474,13 @@ type RenameDirectoryReq struct { TargetId InodeId OldOwnerId InodeId OldName string - OldCreationTime EggsTime + OldCreationTime TernTime NewOwnerId InodeId NewName string } type RenameDirectoryResp struct { - CreationTime EggsTime + CreationTime TernTime } // This operation is safe for files: we can check that it has no spans, @@ -1505,7 +1505,7 @@ type CrossShardHardUnlinkFileReq struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type CrossShardHardUnlinkFileResp struct{} @@ -1753,7 +1753,7 @@ func (p *StripePolicy) Stripes(size uint32) uint8 { type ConstructFileEntry struct { Type InodeType - DeadlineTime EggsTime + DeadlineTime TernTime Note string } @@ -1767,7 +1767,7 @@ type SameDirectoryRenameEntry struct { DirId InodeId TargetId InodeId OldName string - OldCreationTime EggsTime + OldCreationTime TernTime NewName string } @@ -1775,7 +1775,7 @@ type SameDirectoryRenameSnapshotEntry struct { DirId InodeId TargetId InodeId OldName string - OldCreationTime EggsTime + OldCreationTime TernTime NewName string } @@ -1783,7 +1783,7 @@ type SoftUnlinkFileEntry struct { OwnerId InodeId FileId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type CreateDirectoryInodeEntry struct { @@ -1796,7 +1796,7 @@ type CreateLockedCurrentEdgeEntry struct { DirId InodeId Name string TargetId InodeId - OldCreationTime EggsTime + OldCreationTime TernTime } type UnlockCurrentEdgeEntry struct { @@ -1806,7 +1806,7 @@ type UnlockCurrentEdgeEntry struct { // locking mechanism + CDC synchronization anyway, which offer stronger guarantees // which means we never need heuristics for this. But we include it for consistency // and to better detect bugs. 
- CreationTime EggsTime + CreationTime TernTime TargetId InodeId WasMoved bool } @@ -1814,7 +1814,7 @@ type UnlockCurrentEdgeEntry struct { type LockCurrentEdgeEntry struct { DirId InodeId Name string - CreationTime EggsTime + CreationTime TernTime TargetId InodeId } @@ -1841,22 +1841,22 @@ type RemoveNonOwnedEdgeEntry struct { DirId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type SameShardHardFileUnlinkDEPRECATEDEntry struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type SameShardHardFileUnlinkEntry struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime - DeadlineTime EggsTime // Deadline for transient file + CreationTime TernTime + DeadlineTime TernTime // Deadline for transient file } type RemoveSpanInitiateEntry struct { @@ -1883,9 +1883,9 @@ type BlockServiceDeprecatedInfo struct { AvailableBytes uint64 Blocks uint64 // how many blocks we have Path string - LastSeen EggsTime + LastSeen TernTime HasFiles bool - FlagsLastChanged EggsTime + FlagsLastChanged TernTime } type EntryNewBlockInfo struct { @@ -1935,13 +1935,13 @@ type MakeFileTransientDEPRECATEDEntry struct { type MakeFileTransientEntry struct { Id InodeId - DeadlineTime EggsTime + DeadlineTime TernTime Note string } type ScrapTransientFileEntry struct { Id InodeId - DeadlineTime EggsTime + DeadlineTime TernTime } type RemoveSpanCertifyEntry struct { @@ -1954,7 +1954,7 @@ type RemoveOwnedSnapshotFileEdgeEntry struct { OwnerId InodeId TargetId InodeId Name string - CreationTime EggsTime + CreationTime TernTime } type SwapBlocksEntry struct { @@ -2060,21 +2060,21 @@ type AllBlockServicesDeprecatedResp struct { } type LocalChangedBlockServicesReq struct { - ChangedSince EggsTime + ChangedSince TernTime } type LocalChangedBlockServicesResp struct { - LastChange EggsTime + LastChange TernTime BlockServices []BlockService } type ChangedBlockServicesAtLocationReq struct { LocationId Location - ChangedSince EggsTime + ChangedSince TernTime } type ChangedBlockServicesAtLocationResp struct { - LastChange EggsTime + LastChange TernTime BlockServices []BlockService } @@ -2130,7 +2130,7 @@ type LocalShardsReq struct{} type ShardInfo struct { Addrs AddrsInfo - LastSeen EggsTime + LastSeen TernTime } type LocalShardsResp struct { @@ -2171,7 +2171,7 @@ type FullShardInfo struct { Id ShardReplicaId IsLeader bool Addrs AddrsInfo - LastSeen EggsTime + LastSeen TernTime LocationId Location } @@ -2206,7 +2206,7 @@ type LocalCdcReq struct{} type LocalCdcResp struct { Addrs AddrsInfo - LastSeen EggsTime + LastSeen TernTime } type CdcAtLocationReq struct { @@ -2215,7 +2215,7 @@ type CdcAtLocationReq struct { type CdcAtLocationResp struct { Addrs AddrsInfo - LastSeen EggsTime + LastSeen TernTime } type AllCdcReq struct{} @@ -2225,7 +2225,7 @@ type CdcInfo struct { LocationId Location IsLeader bool Addrs AddrsInfo - LastSeen EggsTime + LastSeen TernTime } type AllCdcResp struct { @@ -2382,7 +2382,7 @@ type LogWriteReq struct { } type LogWriteResp struct { - Result EggsError + Result TernError } type ReleaseReq struct { @@ -2391,7 +2391,7 @@ type ReleaseReq struct { } type ReleaseResp struct { - Result EggsError + Result TernError } type LogReadReq struct { @@ -2399,7 +2399,7 @@ type LogReadReq struct { } type LogReadResp struct { - Result EggsError + Result TernError Value bincode.Blob } @@ -2408,7 +2408,7 @@ type NewLeaderReq struct { } type NewLeaderResp struct { - Result EggsError + Result TernError LastReleased LogIdx } @@ 
-2418,7 +2418,7 @@ type NewLeaderConfirmReq struct { } type NewLeaderConfirmResp struct { - Result EggsError + Result TernError } type LogRecoveryReadReq struct { @@ -2427,7 +2427,7 @@ type LogRecoveryReadReq struct { } type LogRecoveryReadResp struct { - Result EggsError + Result TernError Value bincode.Blob } @@ -2438,5 +2438,5 @@ type LogRecoveryWriteReq struct { } type LogRecoveryWriteResp struct { - Result EggsError + Result TernError } diff --git a/go/msgs/msgs_bincode.go b/go/msgs/msgs_bincode.go index 42668de9..0777d000 100644 --- a/go/msgs/msgs_bincode.go +++ b/go/msgs/msgs_bincode.go @@ -5,7 +5,7 @@ package msgs import ( "fmt" "io" - "xtx/eggsfs/bincode" + "xtx/ternfs/bincode" ) // This file specifies @@ -54,20 +54,20 @@ func TagToDirInfoEntry(tag DirectoryInfoTag) IsDirectoryInfoEntry { } } -func (err EggsError) Error() string { +func (err TernError) Error() string { return err.String() } -func (err *EggsError) Pack(w io.Writer) error { +func (err *TernError) Pack(w io.Writer) error { return bincode.PackScalar(w, uint16(*err)) } -func (errCode *EggsError) Unpack(r io.Reader) error { +func (errCode *TernError) Unpack(r io.Reader) error { var c uint16 if err := bincode.UnpackScalar(r, &c); err != nil { return err } - *errCode = EggsError(c) + *errCode = TernError(c) return nil } @@ -157,101 +157,101 @@ func (fs *FetchedFullSpan) Unpack(r io.Reader) error { return nil } const ( - INTERNAL_ERROR EggsError = 10 - FATAL_ERROR EggsError = 11 - TIMEOUT EggsError = 12 - MALFORMED_REQUEST EggsError = 13 - MALFORMED_RESPONSE EggsError = 14 - NOT_AUTHORISED EggsError = 15 - UNRECOGNIZED_REQUEST EggsError = 16 - FILE_NOT_FOUND EggsError = 17 - DIRECTORY_NOT_FOUND EggsError = 18 - NAME_NOT_FOUND EggsError = 19 - EDGE_NOT_FOUND EggsError = 20 - EDGE_IS_LOCKED EggsError = 21 - TYPE_IS_DIRECTORY EggsError = 22 - TYPE_IS_NOT_DIRECTORY EggsError = 23 - BAD_COOKIE EggsError = 24 - INCONSISTENT_STORAGE_CLASS_PARITY EggsError = 25 - LAST_SPAN_STATE_NOT_CLEAN EggsError = 26 - COULD_NOT_PICK_BLOCK_SERVICES EggsError = 27 - BAD_SPAN_BODY EggsError = 28 - SPAN_NOT_FOUND EggsError = 29 - BLOCK_SERVICE_NOT_FOUND EggsError = 30 - CANNOT_CERTIFY_BLOCKLESS_SPAN EggsError = 31 - BAD_NUMBER_OF_BLOCKS_PROOFS EggsError = 32 - BAD_BLOCK_PROOF EggsError = 33 - CANNOT_OVERRIDE_NAME EggsError = 34 - NAME_IS_LOCKED EggsError = 35 - MTIME_IS_TOO_RECENT EggsError = 36 - MISMATCHING_TARGET EggsError = 37 - MISMATCHING_OWNER EggsError = 38 - MISMATCHING_CREATION_TIME EggsError = 39 - DIRECTORY_NOT_EMPTY EggsError = 40 - FILE_IS_TRANSIENT EggsError = 41 - OLD_DIRECTORY_NOT_FOUND EggsError = 42 - NEW_DIRECTORY_NOT_FOUND EggsError = 43 - LOOP_IN_DIRECTORY_RENAME EggsError = 44 - DIRECTORY_HAS_OWNER EggsError = 45 - FILE_IS_NOT_TRANSIENT EggsError = 46 - FILE_NOT_EMPTY EggsError = 47 - CANNOT_REMOVE_ROOT_DIRECTORY EggsError = 48 - FILE_EMPTY EggsError = 49 - CANNOT_REMOVE_DIRTY_SPAN EggsError = 50 - BAD_SHARD EggsError = 51 - BAD_NAME EggsError = 52 - MORE_RECENT_SNAPSHOT_EDGE EggsError = 53 - MORE_RECENT_CURRENT_EDGE EggsError = 54 - BAD_DIRECTORY_INFO EggsError = 55 - DEADLINE_NOT_PASSED EggsError = 56 - SAME_SOURCE_AND_DESTINATION EggsError = 57 - SAME_DIRECTORIES EggsError = 58 - SAME_SHARD EggsError = 59 - BAD_PROTOCOL_VERSION EggsError = 60 - BAD_CERTIFICATE EggsError = 61 - BLOCK_TOO_RECENT_FOR_DELETION EggsError = 62 - BLOCK_FETCH_OUT_OF_BOUNDS EggsError = 63 - BAD_BLOCK_CRC EggsError = 64 - BLOCK_TOO_BIG EggsError = 65 - BLOCK_NOT_FOUND EggsError = 66 - CANNOT_UNSET_DECOMMISSIONED EggsError = 67 - 
CANNOT_REGISTER_DECOMMISSIONED_OR_STALE EggsError = 68 - BLOCK_TOO_OLD_FOR_WRITE EggsError = 69 - BLOCK_IO_ERROR_DEVICE EggsError = 70 - BLOCK_IO_ERROR_FILE EggsError = 71 - INVALID_REPLICA EggsError = 72 - DIFFERENT_ADDRS_INFO EggsError = 73 - LEADER_PREEMPTED EggsError = 74 - LOG_ENTRY_MISSING EggsError = 75 - LOG_ENTRY_TRIMMED EggsError = 76 - LOG_ENTRY_UNRELEASED EggsError = 77 - LOG_ENTRY_RELEASED EggsError = 78 - AUTO_DECOMMISSION_FORBIDDEN EggsError = 79 - INCONSISTENT_BLOCK_SERVICE_REGISTRATION EggsError = 80 - SWAP_BLOCKS_INLINE_STORAGE EggsError = 81 - SWAP_BLOCKS_MISMATCHING_SIZE EggsError = 82 - SWAP_BLOCKS_MISMATCHING_STATE EggsError = 83 - SWAP_BLOCKS_MISMATCHING_CRC EggsError = 84 - SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE EggsError = 85 - SWAP_SPANS_INLINE_STORAGE EggsError = 86 - SWAP_SPANS_MISMATCHING_SIZE EggsError = 87 - SWAP_SPANS_NOT_CLEAN EggsError = 88 - SWAP_SPANS_MISMATCHING_CRC EggsError = 89 - SWAP_SPANS_MISMATCHING_BLOCKS EggsError = 90 - EDGE_NOT_OWNED EggsError = 91 - CANNOT_CREATE_DB_SNAPSHOT EggsError = 92 - BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE EggsError = 93 - SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN EggsError = 94 - TRANSIENT_LOCATION_COUNT EggsError = 95 - ADD_SPAN_LOCATION_INLINE_STORAGE EggsError = 96 - ADD_SPAN_LOCATION_MISMATCHING_SIZE EggsError = 97 - ADD_SPAN_LOCATION_NOT_CLEAN EggsError = 98 - ADD_SPAN_LOCATION_MISMATCHING_CRC EggsError = 99 - ADD_SPAN_LOCATION_EXISTS EggsError = 100 - SWAP_BLOCKS_MISMATCHING_LOCATION EggsError = 101 + INTERNAL_ERROR TernError = 10 + FATAL_ERROR TernError = 11 + TIMEOUT TernError = 12 + MALFORMED_REQUEST TernError = 13 + MALFORMED_RESPONSE TernError = 14 + NOT_AUTHORISED TernError = 15 + UNRECOGNIZED_REQUEST TernError = 16 + FILE_NOT_FOUND TernError = 17 + DIRECTORY_NOT_FOUND TernError = 18 + NAME_NOT_FOUND TernError = 19 + EDGE_NOT_FOUND TernError = 20 + EDGE_IS_LOCKED TernError = 21 + TYPE_IS_DIRECTORY TernError = 22 + TYPE_IS_NOT_DIRECTORY TernError = 23 + BAD_COOKIE TernError = 24 + INCONSISTENT_STORAGE_CLASS_PARITY TernError = 25 + LAST_SPAN_STATE_NOT_CLEAN TernError = 26 + COULD_NOT_PICK_BLOCK_SERVICES TernError = 27 + BAD_SPAN_BODY TernError = 28 + SPAN_NOT_FOUND TernError = 29 + BLOCK_SERVICE_NOT_FOUND TernError = 30 + CANNOT_CERTIFY_BLOCKLESS_SPAN TernError = 31 + BAD_NUMBER_OF_BLOCKS_PROOFS TernError = 32 + BAD_BLOCK_PROOF TernError = 33 + CANNOT_OVERRIDE_NAME TernError = 34 + NAME_IS_LOCKED TernError = 35 + MTIME_IS_TOO_RECENT TernError = 36 + MISMATCHING_TARGET TernError = 37 + MISMATCHING_OWNER TernError = 38 + MISMATCHING_CREATION_TIME TernError = 39 + DIRECTORY_NOT_EMPTY TernError = 40 + FILE_IS_TRANSIENT TernError = 41 + OLD_DIRECTORY_NOT_FOUND TernError = 42 + NEW_DIRECTORY_NOT_FOUND TernError = 43 + LOOP_IN_DIRECTORY_RENAME TernError = 44 + DIRECTORY_HAS_OWNER TernError = 45 + FILE_IS_NOT_TRANSIENT TernError = 46 + FILE_NOT_EMPTY TernError = 47 + CANNOT_REMOVE_ROOT_DIRECTORY TernError = 48 + FILE_EMPTY TernError = 49 + CANNOT_REMOVE_DIRTY_SPAN TernError = 50 + BAD_SHARD TernError = 51 + BAD_NAME TernError = 52 + MORE_RECENT_SNAPSHOT_EDGE TernError = 53 + MORE_RECENT_CURRENT_EDGE TernError = 54 + BAD_DIRECTORY_INFO TernError = 55 + DEADLINE_NOT_PASSED TernError = 56 + SAME_SOURCE_AND_DESTINATION TernError = 57 + SAME_DIRECTORIES TernError = 58 + SAME_SHARD TernError = 59 + BAD_PROTOCOL_VERSION TernError = 60 + BAD_CERTIFICATE TernError = 61 + BLOCK_TOO_RECENT_FOR_DELETION TernError = 62 + BLOCK_FETCH_OUT_OF_BOUNDS TernError = 63 + BAD_BLOCK_CRC TernError = 64 + BLOCK_TOO_BIG TernError = 65 + 
BLOCK_NOT_FOUND TernError = 66 + CANNOT_UNSET_DECOMMISSIONED TernError = 67 + CANNOT_REGISTER_DECOMMISSIONED_OR_STALE TernError = 68 + BLOCK_TOO_OLD_FOR_WRITE TernError = 69 + BLOCK_IO_ERROR_DEVICE TernError = 70 + BLOCK_IO_ERROR_FILE TernError = 71 + INVALID_REPLICA TernError = 72 + DIFFERENT_ADDRS_INFO TernError = 73 + LEADER_PREEMPTED TernError = 74 + LOG_ENTRY_MISSING TernError = 75 + LOG_ENTRY_TRIMMED TernError = 76 + LOG_ENTRY_UNRELEASED TernError = 77 + LOG_ENTRY_RELEASED TernError = 78 + AUTO_DECOMMISSION_FORBIDDEN TernError = 79 + INCONSISTENT_BLOCK_SERVICE_REGISTRATION TernError = 80 + SWAP_BLOCKS_INLINE_STORAGE TernError = 81 + SWAP_BLOCKS_MISMATCHING_SIZE TernError = 82 + SWAP_BLOCKS_MISMATCHING_STATE TernError = 83 + SWAP_BLOCKS_MISMATCHING_CRC TernError = 84 + SWAP_BLOCKS_DUPLICATE_BLOCK_SERVICE TernError = 85 + SWAP_SPANS_INLINE_STORAGE TernError = 86 + SWAP_SPANS_MISMATCHING_SIZE TernError = 87 + SWAP_SPANS_NOT_CLEAN TernError = 88 + SWAP_SPANS_MISMATCHING_CRC TernError = 89 + SWAP_SPANS_MISMATCHING_BLOCKS TernError = 90 + EDGE_NOT_OWNED TernError = 91 + CANNOT_CREATE_DB_SNAPSHOT TernError = 92 + BLOCK_SIZE_NOT_MULTIPLE_OF_PAGE_SIZE TernError = 93 + SWAP_BLOCKS_DUPLICATE_FAILURE_DOMAIN TernError = 94 + TRANSIENT_LOCATION_COUNT TernError = 95 + ADD_SPAN_LOCATION_INLINE_STORAGE TernError = 96 + ADD_SPAN_LOCATION_MISMATCHING_SIZE TernError = 97 + ADD_SPAN_LOCATION_NOT_CLEAN TernError = 98 + ADD_SPAN_LOCATION_MISMATCHING_CRC TernError = 99 + ADD_SPAN_LOCATION_EXISTS TernError = 100 + SWAP_BLOCKS_MISMATCHING_LOCATION TernError = 101 ) -func (err EggsError) String() string { +func (err TernError) String() string { switch err { case 10: return "INTERNAL_ERROR" @@ -438,7 +438,7 @@ func (err EggsError) String() string { case 101: return "SWAP_BLOCKS_MISMATCHING_LOCATION" default: - return fmt.Sprintf("EggsError(%d)", err) + return fmt.Sprintf("TernError(%d)", err) } } diff --git a/go/rs/rs_test.go b/go/rs/rs_test.go index 1fc39712..e3c256de 100644 --- a/go/rs/rs_test.go +++ b/go/rs/rs_test.go @@ -4,7 +4,7 @@ import ( "math/rand" "sort" "testing" - "xtx/eggsfs/assert" + "xtx/ternfs/assert" ) func TestGet(t *testing.T) { diff --git a/go/s3/s3.go b/go/s3/s3.go index 417acbeb..276bf451 100644 --- a/go/s3/s3.go +++ b/go/s3/s3.go @@ -14,9 +14,9 @@ import ( "strings" "sync" "time" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" "golang.org/x/sync/errgroup" ) @@ -459,7 +459,7 @@ func (s *S3Server) handleGetObject(ctx context.Context, w http.ResponseWriter, r return &S3Error{Code: "NoSuchKey", Message: "The specified key does not exist.", StatusCode: http.StatusNotFound} } - var lastModified msgs.EggsTime + var lastModified msgs.TernTime var size uint64 var bodyReader io.ReadSeeker @@ -535,7 +535,7 @@ func (s *S3Server) handleGetObjectAttributes(ctx context.Context, w http.Respons inode := dentry.TargetId var size uint64 - var lastModified msgs.EggsTime + var lastModified msgs.TernTime if inode.Type() == msgs.DIRECTORY { statResp := &msgs.StatDirectoryResp{} diff --git a/go/eggsblocks/eggsblocks.go b/go/ternblocks/ternblocks.go similarity index 96% rename from go/eggsblocks/eggsblocks.go rename to go/ternblocks/ternblocks.go index aa8c5fd9..c9360608 100644 --- a/go/eggsblocks/eggsblocks.go +++ b/go/ternblocks/ternblocks.go @@ -27,12 +27,12 @@ import ( "syscall" "time" "unsafe" - "xtx/eggsfs/certificate" - "xtx/eggsfs/client" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" - "xtx/eggsfs/wyhash" + 
"xtx/ternfs/certificate" + "xtx/ternfs/client" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/wyhash" "golang.org/x/sys/unix" ) @@ -370,7 +370,7 @@ func writeBlocksResponse(log *lib.Logger, w io.Writer, resp msgs.BlocksResponse) return nil } -func writeBlocksResponseError(log *lib.Logger, w io.Writer, err msgs.EggsError) error { +func writeBlocksResponseError(log *lib.Logger, w io.Writer, err msgs.TernError) error { log.Debug("writing blocks error %v", err) buf := bytes.NewBuffer([]byte{}) if err := binary.Write(buf, binary.LittleEndian, msgs.BLOCKS_RESP_PROTOCOL_VERSION); err != nil { @@ -414,8 +414,8 @@ func (c *newToOldReadConverter) Read(p []byte) (int, error) { if toCopy > len(p) { toCopy = len(p) } - offSetInPage := c.totalRead % int(msgs.EGGS_PAGE_WITH_CRC_SIZE) - availableInPage := int(msgs.EGGS_PAGE_SIZE) - offSetInPage + offSetInPage := c.totalRead % int(msgs.TERN_PAGE_WITH_CRC_SIZE) + availableInPage := int(msgs.TERN_PAGE_SIZE) - offSetInPage if toCopy > availableInPage { toCopy = availableInPage } @@ -426,7 +426,7 @@ func (c *newToOldReadConverter) Read(p []byte) (int, error) { p = p[toCopy:] offsetInBuffer += toCopy offSetInPage += toCopy - if offSetInPage == int(msgs.EGGS_PAGE_SIZE) { + if offSetInPage == int(msgs.TERN_PAGE_SIZE) { if c.bytesInBuffer-offsetInBuffer < 4 { break } @@ -442,22 +442,22 @@ func (c *newToOldReadConverter) Read(p []byte) (int, error) { } func sendFetchBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceId, basePath string, blockId msgs.BlockId, offset uint32, count uint32, conn *net.TCPConn, withCrc bool, fileId msgs.InodeId) error { - if offset%msgs.EGGS_PAGE_SIZE != 0 { + if offset%msgs.TERN_PAGE_SIZE != 0 { log.RaiseAlert("trying to read from offset other than page boundary") return msgs.BLOCK_FETCH_OUT_OF_BOUNDS } - if count%msgs.EGGS_PAGE_SIZE != 0 { + if count%msgs.TERN_PAGE_SIZE != 0 { log.RaiseAlert("trying to read count which is not a multiple of page size") return msgs.BLOCK_FETCH_OUT_OF_BOUNDS } - pageCount := count / msgs.EGGS_PAGE_SIZE - offsetPageCount := offset / msgs.EGGS_PAGE_SIZE + pageCount := count / msgs.TERN_PAGE_SIZE + offsetPageCount := offset / msgs.TERN_PAGE_SIZE blockPath := path.Join(basePath, blockId.Path()) log.Debug("fetching block id %v at path %v", blockId, blockPath) f, err := os.Open(blockPath) if errors.Is(err, syscall.ENODATA) { - // see + // see raiseAlertAndHardwareEvent(log, env.failureDomain, blockServiceId.String(), fmt.Sprintf("could not open block %v, got ENODATA, this probably means that the block/disk is gone", blockPath)) // return io error, downstream code will pick it up @@ -482,19 +482,19 @@ func sendFetchBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceI return err } preReadSize := fi.Size() - filePageCount := uint32(fi.Size()) / msgs.EGGS_PAGE_WITH_CRC_SIZE + filePageCount := uint32(fi.Size()) / msgs.TERN_PAGE_WITH_CRC_SIZE if offsetPageCount+pageCount > filePageCount { - log.RaiseAlert("malformed request for block %v. requested read at [%d - %d] but stored block size is %d", blockId, offset, offset+count, filePageCount*msgs.EGGS_PAGE_SIZE) + log.RaiseAlert("malformed request for block %v. 
requested read at [%d - %d] but stored block size is %d", blockId, offset, offset+count, filePageCount*msgs.TERN_PAGE_SIZE) return msgs.BLOCK_FETCH_OUT_OF_BOUNDS } if !env.readWholeFile { - preReadSize = int64(pageCount) * int64(msgs.EGGS_PAGE_WITH_CRC_SIZE) + preReadSize = int64(pageCount) * int64(msgs.TERN_PAGE_WITH_CRC_SIZE) } var reader io.ReadSeeker = f if withCrc { - offset = offsetPageCount * msgs.EGGS_PAGE_WITH_CRC_SIZE - count = pageCount * msgs.EGGS_PAGE_WITH_CRC_SIZE + offset = offsetPageCount * msgs.TERN_PAGE_WITH_CRC_SIZE + count = pageCount * msgs.TERN_PAGE_WITH_CRC_SIZE unix.Fadvise(int(f.Fd()), int64(offset), preReadSize, unix.FADV_SEQUENTIAL | unix.FADV_WILLNEED) if _, err := reader.Seek(int64(offset), 0); err != nil { @@ -523,7 +523,7 @@ func sendFetchBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceI } } else { // the only remaining case is that we have a file in new format and client wants old format - offset = offsetPageCount * msgs.EGGS_PAGE_WITH_CRC_SIZE + offset = offsetPageCount * msgs.TERN_PAGE_WITH_CRC_SIZE unix.Fadvise(int(f.Fd()), int64(offset), preReadSize, unix.FADV_SEQUENTIAL | unix.FADV_WILLNEED) if _, err := reader.Seek(int64(offset), 0); err != nil { return err @@ -578,7 +578,7 @@ func checkBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceId, b f, err := os.Open(blockPath) if errors.Is(err, syscall.ENODATA) { - // see + // see raiseAlertAndHardwareEvent(log, env.failureDomain, blockServiceId.String(), fmt.Sprintf("could not open block %v, got ENODATA, this probably means that the block/disk is gone", blockPath)) // return io error, downstream code will pick it up @@ -600,11 +600,11 @@ func checkBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceId, b atomic.AddUint64(&s.blocksChecked, 1) atomic.AddUint64(&s.bytesChecked, uint64(expectedSize)) - if uint32(fi.Size())%msgs.EGGS_PAGE_WITH_CRC_SIZE != 0 { - log.ErrorNoAlert("size %v for block %v, not multiple of EGGS_PAGE_WITH_CRC_SIZE", uint32(fi.Size()), blockPath) + if uint32(fi.Size())%msgs.TERN_PAGE_WITH_CRC_SIZE != 0 { + log.ErrorNoAlert("size %v for block %v, not multiple of TERN_PAGE_WITH_CRC_SIZE", uint32(fi.Size()), blockPath) return msgs.BAD_BLOCK_CRC } - actualDataSize := (uint32(fi.Size()) / msgs.EGGS_PAGE_WITH_CRC_SIZE) * msgs.EGGS_PAGE_SIZE + actualDataSize := (uint32(fi.Size()) / msgs.TERN_PAGE_WITH_CRC_SIZE) * msgs.TERN_PAGE_SIZE if actualDataSize != expectedSize { log.ErrorNoAlert("size %v for block %v, not equal to expected size %v", actualDataSize, blockPath, expectedSize) return msgs.BAD_BLOCK_CRC @@ -614,7 +614,7 @@ func checkBlock(log *lib.Logger, env *env, blockServiceId msgs.BlockServiceId, b err = verifyCrcReader(log, bufPtr.Bytes(), f, crc) if errors.Is(err, syscall.ENODATA) { - // see + // see raiseAlertAndHardwareEvent(log, env.failureDomain, blockServiceId.String(), fmt.Sprintf("could not open block %v, got ENODATA, this probably means that the block/disk is gone", blockPath)) // return io error, downstream code will pick it up @@ -648,7 +648,7 @@ func writeToBuf(log *lib.Logger, env *env, reader io.Reader, size int64) (*lib.B readBuffer := readBufPtr.Bytes() var err error - writeButPtr := env.bufPool.Get(int(size / int64(msgs.EGGS_PAGE_SIZE) * int64(msgs.EGGS_PAGE_WITH_CRC_SIZE))) + writeButPtr := env.bufPool.Get(int(size / int64(msgs.TERN_PAGE_SIZE) * int64(msgs.TERN_PAGE_WITH_CRC_SIZE))) writeBuffer := writeButPtr.Bytes() defer func() { if err != nil { @@ -674,19 +674,19 @@ func writeToBuf(log *lib.Logger, env *env, reader 
io.Reader, size int64) (*lib.B } } dataInReadBuffer += read - if dataInReadBuffer < int(msgs.EGGS_PAGE_SIZE) { + if dataInReadBuffer < int(msgs.TERN_PAGE_SIZE) { continue } - availablePages := dataInReadBuffer / int(msgs.EGGS_PAGE_SIZE) + availablePages := dataInReadBuffer / int(msgs.TERN_PAGE_SIZE) for i := 0; i < availablePages; i++ { - page := readBuffer[i*int(msgs.EGGS_PAGE_SIZE) : (i+1)*int(msgs.EGGS_PAGE_SIZE)] + page := readBuffer[i*int(msgs.TERN_PAGE_SIZE) : (i+1)*int(msgs.TERN_PAGE_SIZE)] dataInWriteBuffer += copy(writeBuffer[dataInWriteBuffer:], page) pageCRC := crc32c.Sum(0, page) binary.LittleEndian.PutUint32(writeBuffer[dataInWriteBuffer:dataInWriteBuffer+4], pageCRC) dataInWriteBuffer += 4 } - size -= int64(availablePages) * int64(msgs.EGGS_PAGE_SIZE) - dataInReadBuffer = copy(readBuffer[:], readBuffer[availablePages*int(msgs.EGGS_PAGE_SIZE):dataInReadBuffer]) + size -= int64(availablePages) * int64(msgs.TERN_PAGE_SIZE) + dataInReadBuffer = copy(readBuffer[:], readBuffer[availablePages*int(msgs.TERN_PAGE_SIZE):dataInReadBuffer]) } if !readerHasMoreData && (size-int64(dataInReadBuffer) > 0) { log.Debug("failed converting block, reached EOF in input stream, missing %d bytes", size-int64(dataInReadBuffer)) @@ -839,7 +839,7 @@ func handleRequestError( } atomic.AddUint64(&blockService.ioErrors, 1) log.ErrorNoAlert("got unxpected IO error %v from %v for req kind %v, block service %v, will return %v, previous error: %v", err, conn.RemoteAddr(), req, blockServiceId, err, *lastError) - writeBlocksResponseError(log, conn, err.(msgs.EggsError)) + writeBlocksResponseError(log, conn, err.(msgs.TernError)) return false } @@ -850,8 +850,8 @@ func handleRequestError( // the cached span structure is used in the kmod. if _, isDead := deadBlockServices[blockServiceId]; isDead && (req == msgs.CHECK_BLOCK || req == msgs.FETCH_BLOCK || req == msgs.FETCH_BLOCK_WITH_CRC) { log.Info("got fetch/check block request for dead block service %v", blockServiceId) - if eggsErr, isEggsErr := err.(msgs.EggsError); isEggsErr { - if err := writeBlocksResponseError(log, conn, eggsErr); err != nil { + if ternErr, isTernErr := err.(msgs.TernError); isTernErr { + if err := writeBlocksResponseError(log, conn, ternErr); err != nil { log.Info("could not write response error to %v, will terminate connection: %v", conn.RemoteAddr(), err) return false } @@ -864,8 +864,8 @@ func handleRequestError( log.RaiseAlertStack("", 1, "got unexpected error %v from %v for req kind %v, block service %v, previous error %v", err, conn.RemoteAddr(), req, blockServiceId, *lastError) } - if eggsErr, isEggsErr := err.(msgs.EggsError); isEggsErr { - if err := writeBlocksResponseError(log, conn, eggsErr); err != nil { + if ternErr, isTernErr := err.(msgs.TernError); isTernErr { + if err := writeBlocksResponseError(log, conn, ternErr); err != nil { log.Info("could not write response error to %v, will terminate connection: %v", conn.RemoteAddr(), err) return false } @@ -873,7 +873,7 @@ func handleRequestError( // that the stream is safe. Right now I just added one case which I know // is safe, we can add others conservatively in the future if we wish to. 
safeError := false - safeError = safeError || ((req == msgs.CHECK_BLOCK || req == msgs.FETCH_BLOCK || req == msgs.FETCH_BLOCK_WITH_CRC) && eggsErr == msgs.BLOCK_NOT_FOUND) + safeError = safeError || ((req == msgs.CHECK_BLOCK || req == msgs.FETCH_BLOCK || req == msgs.FETCH_BLOCK_WITH_CRC) && ternErr == msgs.BLOCK_NOT_FOUND) if safeError { log.Info("preserving connection from %v after err %v", conn.RemoteAddr(), err) return true @@ -986,7 +986,7 @@ func handleSingleRequest( if err := checkEraseCertificate(log, blockServiceId, blockService.cipher, whichReq); err != nil { return handleRequestError(log, blockServices, deadBlockServices, conn, lastError, blockServiceId, kind, err) } - cutoffTime := msgs.EggsTime(uint64(whichReq.BlockId)).Time().Add(futureCutoff) + cutoffTime := msgs.TernTime(uint64(whichReq.BlockId)).Time().Add(futureCutoff) now := time.Now() if now.Before(cutoffTime) { log.ErrorNoAlert("block %v is too recent to be deleted (now=%v, cutoffTime=%v)", whichReq.BlockId, now, cutoffTime) @@ -1014,8 +1014,8 @@ func handleSingleRequest( return handleRequestError(log, blockServices, deadBlockServices, conn, lastError, blockServiceId, kind, err) } case *msgs.WriteBlockReq: - pastCutoffTime := msgs.EggsTime(uint64(whichReq.BlockId)).Time().Add(-PAST_CUTOFF) - futureCutoffTime := msgs.EggsTime(uint64(whichReq.BlockId)).Time().Add(WRITE_FUTURE_CUTOFF) + pastCutoffTime := msgs.TernTime(uint64(whichReq.BlockId)).Time().Add(-PAST_CUTOFF) + futureCutoffTime := msgs.TernTime(uint64(whichReq.BlockId)).Time().Add(WRITE_FUTURE_CUTOFF) now := time.Now() if now.Before(pastCutoffTime) { panic(fmt.Errorf("block %v is in the future! (now=%v, pastCutoffTime=%v)", whichReq.BlockId, now, pastCutoffTime)) @@ -1564,7 +1564,7 @@ func main() { sameFailureDomain = pathParts[0] == env.pathPrefix } } - isDecommissioned := (bs.Flags & msgs.EGGSFS_BLOCK_SERVICE_DECOMMISSIONED) != 0 + isDecommissioned := (bs.Flags & msgs.TERNFS_BLOCK_SERVICE_DECOMMISSIONED) != 0 // No disagreement on failure domain with shuckle (otherwise we could end up with // a split brain scenario where two eggsblocks processes assume control of two dead // block services) @@ -1771,7 +1771,7 @@ func verifyCrcReader(log *lib.Logger, readBuffer []byte, r io.Reader, expectedCr cursor := uint32(0) remainingData := 0 actualCrc := uint32(0) - processChunkSize := int(msgs.EGGS_PAGE_WITH_CRC_SIZE) + processChunkSize := int(msgs.TERN_PAGE_WITH_CRC_SIZE) if len(readBuffer) < processChunkSize { readBuffer = make([]byte, processChunkSize) } @@ -1789,18 +1789,18 @@ func verifyCrcReader(log *lib.Logger, readBuffer []byte, r io.Reader, expectedCr } remainingData += read cursor += uint32(read) - if remainingData < int(msgs.EGGS_PAGE_WITH_CRC_SIZE) { + if remainingData < int(msgs.TERN_PAGE_WITH_CRC_SIZE) { continue } numAvailableChunks := remainingData / processChunkSize for i := 0; i < numAvailableChunks; i++ { - actualPageCrc := crc32c.Sum(0, readBuffer[i*processChunkSize:i*processChunkSize+int(msgs.EGGS_PAGE_SIZE)]) - storedPageCrc := binary.LittleEndian.Uint32(readBuffer[i*processChunkSize+int(msgs.EGGS_PAGE_SIZE) : i*processChunkSize+int(msgs.EGGS_PAGE_WITH_CRC_SIZE)]) + actualPageCrc := crc32c.Sum(0, readBuffer[i*processChunkSize:i*processChunkSize+int(msgs.TERN_PAGE_SIZE)]) + storedPageCrc := binary.LittleEndian.Uint32(readBuffer[i*processChunkSize+int(msgs.TERN_PAGE_SIZE) : i*processChunkSize+int(msgs.TERN_PAGE_WITH_CRC_SIZE)]) if storedPageCrc != actualPageCrc { log.Debug("failed checking crc. 
incorrect page crc at offset %d, expected %v, got %v", cursor-uint32(remainingData)+uint32(i*processChunkSize), msgs.Crc(storedPageCrc), msgs.Crc(actualPageCrc)) return msgs.BAD_BLOCK_CRC } - actualCrc = crc32c.Append(actualCrc, actualPageCrc, int(msgs.EGGS_PAGE_SIZE)) + actualCrc = crc32c.Append(actualCrc, actualPageCrc, int(msgs.TERN_PAGE_SIZE)) } copy(readBuffer[:], readBuffer[numAvailableChunks*processChunkSize:remainingData]) remainingData -= numAvailableChunks * processChunkSize diff --git a/go/eggscli/filesamples/filesamples.go b/go/terncli/filesamples/filesamples.go similarity index 95% rename from go/eggscli/filesamples/filesamples.go rename to go/terncli/filesamples/filesamples.go index b167c4ed..c257c1a4 100644 --- a/go/eggscli/filesamples/filesamples.go +++ b/go/terncli/filesamples/filesamples.go @@ -7,9 +7,9 @@ import ( "path" "strconv" "sync" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" ) type PathResolver interface { @@ -24,7 +24,7 @@ type PathResolver interface { // Returns a thread-safe PathResolver that can be used concurrently in multiple goroutines. func NewPathResolver(cl *client.Client, logger *lib.Logger) PathResolver { return &resolver{ - eggsClient: cl, + ternClient: cl, logger: logger, inodeToDir: make(map[msgs.InodeId]string), lock: sync.RWMutex{}, @@ -32,7 +32,7 @@ func NewPathResolver(cl *client.Client, logger *lib.Logger) PathResolver { } type resolver struct { - eggsClient *client.Client + ternClient *client.Client logger *lib.Logger // Mapping of inode ID to directory name. Used to avoid duplicate lookups for the same inode. inodeToDir map[msgs.InodeId]string @@ -49,7 +49,7 @@ func (r *resolver) Resolve(ownerInode msgs.InodeId, filename string) (string, er Id: currentDir, } statResp := msgs.StatDirectoryResp{} - if err := r.eggsClient.ShardRequest(r.logger, currentDir.Shard(), &statReq, &statResp); err != nil { + if err := r.ternClient.ShardRequest(r.logger, currentDir.Shard(), &statReq, &statResp); err != nil { return "", fmt.Errorf("StatDirectoryReq to shard %v for inode %v failed: %w", currentDir.Shard(), currentDir, err) } owner := statResp.Owner @@ -149,7 +149,7 @@ func (r *resolver) getNameFromShard(parentDir msgs.InodeId, target msgs.InodeId) } for { readDirResp := msgs.FullReadDirResp{} - if err := r.eggsClient.ShardRequest(r.logger, parentDir.Shard(), &readDirReq, &readDirResp); err != nil { + if err := r.ternClient.ShardRequest(r.logger, parentDir.Shard(), &readDirReq, &readDirResp); err != nil { return "", fmt.Errorf("FullReadDirReq to shard failed: %w", err) } for _, result := range readDirResp.Results { diff --git a/go/eggscli/kernelmetrics.go b/go/terncli/kernelmetrics.go similarity index 100% rename from go/eggscli/kernelmetrics.go rename to go/terncli/kernelmetrics.go diff --git a/go/eggscli/eggscli.go b/go/terncli/terncli.go similarity index 98% rename from go/eggscli/eggscli.go rename to go/terncli/terncli.go index 464c1815..85cf2dfe 100644 --- a/go/eggscli/eggscli.go +++ b/go/terncli/terncli.go @@ -21,17 +21,15 @@ import ( "sync" "sync/atomic" "time" - "xtx/eggsfs/certificate" - "xtx/eggsfs/cleanup" - "xtx/eggsfs/client" - "xtx/eggsfs/crc32c" - "xtx/eggsfs/eggscli/filesamples" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/certificate" + "xtx/ternfs/cleanup" + "xtx/ternfs/client" + "xtx/ternfs/crc32c" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/terncli/filesamples" ) -const PROD_SHUCKLE_ADDRESS = "REDACTED" - type commandSpec struct { flags 
*flag.FlagSet run func() @@ -67,7 +65,7 @@ func outputFullFileSizes(log *lib.Logger, c *client.Client) { WorkersPerShard: 1, }, "/", - func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error { + func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error { if id.Type() == msgs.DIRECTORY { if atomic.AddUint64(&examinedDirs, 1)%1000000 == 0 { log.Info("examined %v dirs, %v files", examinedDirs, examinedFiles) @@ -195,7 +193,6 @@ func main() { cdcOverallTimeout := flag.Duration("cdc-overall-timeout", -1, "") verbose := flag.Bool("verbose", false, "") trace := flag.Bool("trace", false, "") - prod := flag.Bool("prod", false, "Use production shuckle endpoint.") var log *lib.Logger var mbClient *client.Client @@ -582,7 +579,7 @@ func main() { cpIntoCmd := flag.NewFlagSet("cp-into", flag.ExitOnError) cpIntoInput := cpIntoCmd.String("i", "", "What to copy, if empty stdin.") - cpIntoOut := cpIntoCmd.String("o", "", "Where to write the file to in Eggs") + cpIntoOut := cpIntoCmd.String("o", "", "Where to write the file to in TernFS") cpIntoRun := func() { path := filepath.Clean("/" + *cpIntoOut) var input io.Reader @@ -608,7 +605,7 @@ func main() { } cpOutofCmd := flag.NewFlagSet("cp-outof", flag.ExitOnError) - cpOutofInput := cpOutofCmd.String("i", "", "What to copy from eggs.") + cpOutofInput := cpOutofCmd.String("i", "", "What to copy from TernFS.") cpOutofId := cpOutofCmd.Uint64("id", 0, "The ID of the file to copy.") // cpOutofOut := cpOutofCmd.String("o", "", "Where to write the file to. Stdout if empty.") cpOutofRun := func() { @@ -996,7 +993,7 @@ func main() { Snapshot: *duSnapshot, }, *duDir, - func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error { + func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error { if !owned { return nil } @@ -1157,7 +1154,7 @@ func main() { if *findName != "" { re = regexp.MustCompile(*findName) } - findBefore := msgs.EggsTime(^uint64(0)) + findBefore := msgs.TernTime(^uint64(0)) if *findBeforeSpec != "" { d, durErr := time.ParseDuration(*findBeforeSpec) if durErr != nil { @@ -1165,9 +1162,9 @@ func main() { if tErr != nil { panic(fmt.Errorf("could not parse %q as duration or time: %v, %v", *findBeforeSpec, durErr, tErr)) } - findBefore = msgs.MakeEggsTime(t) + findBefore = msgs.MakeTernTime(t) } else { - findBefore = msgs.MakeEggsTime(time.Now().Add(-d)) + findBefore = msgs.MakeTernTime(time.Now().Add(-d)) } } c := getClient() @@ -1179,7 +1176,7 @@ func main() { Snapshot: *findSnapshot, }, *findDir, - func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.EggsTime, id msgs.InodeId, current bool, owned bool) error { + func(parent msgs.InodeId, parentPath string, name string, creationTime msgs.TernTime, id msgs.InodeId, current bool, owned bool) error { if !owned && *findOnlyOwned { return nil } @@ -1420,13 +1417,13 @@ func main() { panic(err) } if id.Type() == msgs.DIRECTORY { - var startTime msgs.EggsTime + var startTime msgs.TernTime if *defragFileFrom != "" { t, err := time.Parse(time.RFC3339Nano, *defragFileFrom) if err != nil { panic(err) } - startTime = msgs.MakeEggsTime(t) + startTime = msgs.MakeTernTime(t) } options := cleanup.DefragOptions{ WorkersPerShard: 5, @@ -1584,10 +1581,6 @@ func main() { flag.Parse() - if *prod { - 
*shuckleAddress = PROD_SHUCKLE_ADDRESS - } - if *mtu != "" { if *mtu == "max" { client.SetMTU(msgs.MAX_UDP_MTU) diff --git a/go/eggsfuse/eggsfuse.go b/go/ternfuse/ternfuse.go similarity index 90% rename from go/eggsfuse/eggsfuse.go rename to go/ternfuse/ternfuse.go index f2d613e9..31f37f2a 100644 --- a/go/eggsfuse/eggsfuse.go +++ b/go/ternfuse/ternfuse.go @@ -13,9 +13,9 @@ import ( "sync" "syscall" "time" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" "github.com/hanwen/go-fuse/v2/fs" "github.com/hanwen/go-fuse/v2/fuse" @@ -26,7 +26,7 @@ var logger *lib.Logger var dirInfoCache *client.DirInfoCache var bufPool *lib.BufPool -func eggsErrToErrno(err error) syscall.Errno { +func ternErrToErrno(err error) syscall.Errno { switch err { case msgs.INTERNAL_ERROR: return syscall.EIO @@ -112,7 +112,7 @@ func inodeTypeToMode(typ msgs.InodeType) uint32 { func shardRequest(shid msgs.ShardId, req msgs.ShardRequest, resp msgs.ShardResponse) syscall.Errno { if err := c.ShardRequest(logger, shid, req, resp); err != nil { - return eggsErrToErrno(err) + return ternErrToErrno(err) } return 0 @@ -120,9 +120,9 @@ func shardRequest(shid msgs.ShardId, req msgs.ShardRequest, resp msgs.ShardRespo func cdcRequest(req msgs.CDCRequest, resp msgs.CDCResponse) syscall.Errno { if err := c.CDCRequest(logger, req, resp); err != nil { - switch eggsErr := err.(type) { - case msgs.EggsError: - return eggsErrToErrno(eggsErr) + switch ternErr := err.(type) { + case msgs.TernError: + return ternErrToErrno(ternErr) } panic(err) } @@ -130,12 +130,12 @@ func cdcRequest(req msgs.CDCRequest, resp msgs.CDCResponse) syscall.Errno { return 0 } -type eggsNode struct { +type ternNode struct { fs.Inode id msgs.InodeId } -func (n *eggsNode)getattr(allowTransient bool, out *fuse.Attr) syscall.Errno { +func (n *ternNode)getattr(allowTransient bool, out *fuse.Attr) syscall.Errno { logger.Debug("getattr inode=%v", n.id) out.Ino = uint64(n.id) @@ -155,11 +155,11 @@ func (n *eggsNode)getattr(allowTransient bool, out *fuse.Attr) syscall.Errno { err := c.ShardRequest(logger, n.id.Shard(), &msgs.StatFileReq{Id: n.id}, &resp) // if we tolerate transient files, try that - if eggsErr, ok := err.(msgs.EggsError); ok && eggsErr == msgs.FILE_NOT_FOUND && allowTransient { + if ternErr, ok := err.(msgs.TernError); ok && ternErr == msgs.FILE_NOT_FOUND && allowTransient { resp := msgs.StatTransientFileResp{} if newErr := c.ShardRequest(logger, n.id.Shard(), &msgs.StatTransientFileReq{Id: n.id}, &resp); newErr != nil { logger.Debug("ignoring transient stat error %v", newErr) - return eggsErrToErrno(err) // use original error + return ternErrToErrno(err) // use original error } out.Size = resp.Size mtime := uint64(resp.Mtime) @@ -173,7 +173,7 @@ func (n *eggsNode)getattr(allowTransient bool, out *fuse.Attr) syscall.Errno { } if err != nil { - return eggsErrToErrno(err) + return ternErrToErrno(err) } out.Size = resp.Size mtime := uint64(resp.Mtime) @@ -190,11 +190,11 @@ func (n *eggsNode)getattr(allowTransient bool, out *fuse.Attr) syscall.Errno { return 0 } -func (n *eggsNode) Getattr(ctx context.Context, f fs.FileHandle, out *fuse.AttrOut) syscall.Errno { +func (n *ternNode) Getattr(ctx context.Context, f fs.FileHandle, out *fuse.AttrOut) syscall.Errno { return n.getattr(true, &out.Attr) } -func (n *eggsNode) Lookup( +func (n *ternNode) Lookup( ctx context.Context, name string, out *fuse.EntryOut, ) (*fs.Inode, syscall.Errno) { logger.Debug("lookup dir=%v, name=%v", n.id, name) @@ -213,7 
+213,7 @@ func (n *eggsNode) Lookup( default: panic(fmt.Errorf("bad type %v", resp.TargetId.Type())) } - newNode := &eggsNode{id: resp.TargetId} + newNode := &ternNode{id: resp.TargetId} if err := newNode.getattr(false, &out.Attr); err != 0 { return nil, err } @@ -275,7 +275,7 @@ func (ds *dirStream) Next() (fuse.DirEntry, syscall.Errno) { func (ds *dirStream) Close() {} -func (n *eggsNode) Readdir(ctx context.Context) (fs.DirStream, syscall.Errno) { +func (n *ternNode) Readdir(ctx context.Context) (fs.DirStream, syscall.Errno) { logger.Debug("readdir dir=%v", n.id) ds := dirStream{dirId: n.id} if err := ds.refresh(); err != 0 { @@ -296,7 +296,7 @@ type transientFile struct { writeErr syscall.Errno } -func (n *eggsNode) createInternal(name string, flags uint32, mode uint32) (tf *transientFile, errno syscall.Errno) { +func (n *ternNode) createInternal(name string, flags uint32, mode uint32) (tf *transientFile, errno syscall.Errno) { req := msgs.ConstructFileReq{Note: name} resp := msgs.ConstructFileResp{} if (mode & syscall.S_IFMT) == syscall.S_IFREG { @@ -320,7 +320,7 @@ func (n *eggsNode) createInternal(name string, flags uint32, mode uint32) (tf *t return &transient, 0 } -func (n *eggsNode) Create( +func (n *ternNode) Create( ctx context.Context, name string, flags uint32, mode uint32, out *fuse.EntryOut, ) (node *fs.Inode, fh fs.FileHandle, fuseFlags uint32, errno syscall.Errno) { logger.Debug("create id=%v, name=%v, flags=0x%08x, mode=0x%08x", n.id, name, flags, mode) @@ -329,7 +329,7 @@ func (n *eggsNode) Create( if err != 0 { return nil, nil, 0, err } - fileNode := eggsNode{ + fileNode := ternNode{ id: tf.id, } @@ -338,7 +338,7 @@ func (n *eggsNode) Create( return n.NewInode(ctx, &fileNode, fs.StableAttr{Ino: uint64(tf.id), Mode: mode}), tf, 0, 0 } -func (n *eggsNode) Mkdir( +func (n *ternNode) Mkdir( ctx context.Context, name string, mode uint32, out *fuse.EntryOut, ) (*fs.Inode, syscall.Errno) { logger.Debug("mkdir dir=%v, name=%v, mode=0x%08x", n.id, name, mode) @@ -353,7 +353,7 @@ func (n *eggsNode) Mkdir( if err := cdcRequest(&req, &resp); err != 0 { return nil, err } - return n.NewInode(ctx, &eggsNode{id: resp.Id}, fs.StableAttr{Ino: uint64(resp.Id), Mode: syscall.S_IFDIR}), 0 + return n.NewInode(ctx, &ternNode{id: resp.Id}, fs.StableAttr{Ino: uint64(resp.Id), Mode: syscall.S_IFDIR}), 0 } func (f *transientFile) Write(ctx context.Context, data []byte, off int64) (written uint32, errno syscall.Errno) { @@ -410,7 +410,7 @@ func (f *transientFile) Flush(ctx context.Context) syscall.Errno { }() if err := c.WriteFile(logger, bufPool, dirInfoCache, f.dir, f.id, f.cookie, bytes.NewReader(f.data.Bytes())); err != nil { - f.writeErr = eggsErrToErrno(err) + f.writeErr = ternErrToErrno(err) return f.writeErr } @@ -428,7 +428,7 @@ func (f *transientFile) Flush(ctx context.Context) syscall.Errno { return 0 } -func (n *eggsNode) Setattr(ctx context.Context, f fs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) syscall.Errno { +func (n *ternNode) Setattr(ctx context.Context, f fs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) syscall.Errno { logger.Debug("setattr inode=%v, in=%+v", n.id, in) if n.id.Type() == msgs.DIRECTORY { @@ -462,14 +462,14 @@ func (n *eggsNode) Setattr(ctx context.Context, f fs.FileHandle, in *fuse.SetAtt return 0 } -func (n *eggsNode) Rename(ctx context.Context, oldName string, newParent0 fs.InodeEmbedder, newName string, flags uint32) syscall.Errno { +func (n *ternNode) Rename(ctx context.Context, oldName string, newParent0 fs.InodeEmbedder, newName string, flags 
uint32) syscall.Errno { oldParent := n.id - newParent := newParent0.(*eggsNode).id + newParent := newParent0.(*ternNode).id logger.Debug("rename dir=%v oldParent=%v, oldName=%v, newParent=%v, newName=%v", n, oldParent, oldName, newParent, newName) var targetId msgs.InodeId - var oldCreationTime msgs.EggsTime + var oldCreationTime msgs.TernTime { req := msgs.LookupReq{DirId: oldParent, Name: oldName} resp := msgs.LookupResp{} @@ -527,7 +527,7 @@ type openFile struct { } -func (n *eggsNode) Open(ctx context.Context, flags uint32) (fh fs.FileHandle, fuseFlags uint32, errno syscall.Errno) { +func (n *ternNode) Open(ctx context.Context, flags uint32) (fh fs.FileHandle, fuseFlags uint32, errno syscall.Errno) { logger.Debug("open file=%v flags=%08x", n.id, flags) of := openFile{} @@ -576,7 +576,7 @@ func (of *openFile) Read(ctx context.Context, dest []byte, off int64) (fuse.Read return fuse.ReadResultData(dest[:r]), 0 } -func (n *eggsNode) Unlink(ctx context.Context, name string) syscall.Errno { +func (n *ternNode) Unlink(ctx context.Context, name string) syscall.Errno { logger.Debug("unlink dir=%v, name=%v", n.id, name) lookupResp := msgs.LookupResp{} @@ -592,7 +592,7 @@ func (n *eggsNode) Unlink(ctx context.Context, name string) syscall.Errno { return shardRequest(n.id.Shard(), &unlinkReq, &msgs.SoftUnlinkFileResp{}) } -func (n *eggsNode) Rmdir(ctx context.Context, name string) syscall.Errno { +func (n *ternNode) Rmdir(ctx context.Context, name string) syscall.Errno { logger.Debug("rmdir dir=%v, name=%v", n.id, name) lookupResp := msgs.LookupResp{} if err := shardRequest(n.id.Shard(), &msgs.LookupReq{DirId: n.id, Name: name}, &lookupResp); err != 0 { @@ -607,7 +607,7 @@ func (n *eggsNode) Rmdir(ctx context.Context, name string) syscall.Errno { return cdcRequest(&unlinkReq, &msgs.SoftUnlinkDirectoryResp{}) } -func (n *eggsNode) Symlink(ctx context.Context, target, name string, out *fuse.EntryOut) (node *fs.Inode, errno syscall.Errno) { +func (n *ternNode) Symlink(ctx context.Context, target, name string, out *fuse.EntryOut) (node *fs.Inode, errno syscall.Errno) { logger.Debug("symlink dir=%v, target=%v, name=%v", n.id, target, name) tf, err := n.createInternal(name, 0, syscall.S_IFLNK) if err != 0 { @@ -619,10 +619,10 @@ func (n *eggsNode) Symlink(ctx context.Context, target, name string, out *fuse.E if err := tf.Flush(ctx); err != 0 { return nil, err } - return n.NewInode(ctx, &eggsNode{id: tf.id}, fs.StableAttr{Ino: uint64(tf.id), Mode: syscall.S_IFLNK}), 0 + return n.NewInode(ctx, &ternNode{id: tf.id}, fs.StableAttr{Ino: uint64(tf.id), Mode: syscall.S_IFLNK}), 0 } -func (n *eggsNode) Readlink(ctx context.Context) ([]byte, syscall.Errno) { +func (n *ternNode) Readlink(ctx context.Context) ([]byte, syscall.Errno) { logger.Debug("readlink file=%v", n.id) if n.id.Type() != msgs.SYMLINK { @@ -631,28 +631,28 @@ func (n *eggsNode) Readlink(ctx context.Context) ([]byte, syscall.Errno) { fileReader, err := c.ReadFile(logger, bufPool, n.id) if err != nil { - return nil, eggsErrToErrno(err) + return nil, ternErrToErrno(err) } bs, err := io.ReadAll(fileReader) if err != nil { - return nil, eggsErrToErrno(err) + return nil, ternErrToErrno(err) } return bs, 0 } -var _ = (fs.InodeEmbedder)((*eggsNode)(nil)) -var _ = (fs.NodeLookuper)((*eggsNode)(nil)) -var _ = (fs.NodeReaddirer)((*eggsNode)(nil)) -var _ = (fs.NodeMkdirer)((*eggsNode)(nil)) -var _ = (fs.NodeGetattrer)((*eggsNode)(nil)) -var _ = (fs.NodeCreater)((*eggsNode)(nil)) -var _ = (fs.NodeSetattrer)((*eggsNode)(nil)) -var _ = 
(fs.NodeRenamer)((*eggsNode)(nil)) -var _ = (fs.NodeOpener)((*eggsNode)(nil)) -var _ = (fs.NodeUnlinker)((*eggsNode)(nil)) -var _ = (fs.NodeRmdirer)((*eggsNode)(nil)) -var _ = (fs.NodeSymlinker)((*eggsNode)(nil)) -var _ = (fs.NodeReadlinker)((*eggsNode)(nil)) +var _ = (fs.InodeEmbedder)((*ternNode)(nil)) +var _ = (fs.NodeLookuper)((*ternNode)(nil)) +var _ = (fs.NodeReaddirer)((*ternNode)(nil)) +var _ = (fs.NodeMkdirer)((*ternNode)(nil)) +var _ = (fs.NodeGetattrer)((*ternNode)(nil)) +var _ = (fs.NodeCreater)((*ternNode)(nil)) +var _ = (fs.NodeSetattrer)((*ternNode)(nil)) +var _ = (fs.NodeRenamer)((*ternNode)(nil)) +var _ = (fs.NodeOpener)((*ternNode)(nil)) +var _ = (fs.NodeUnlinker)((*ternNode)(nil)) +var _ = (fs.NodeRmdirer)((*ternNode)(nil)) +var _ = (fs.NodeSymlinker)((*ternNode)(nil)) +var _ = (fs.NodeReadlinker)((*ternNode)(nil)) var _ = (fs.FileWriter)((*transientFile)(nil)) var _ = (fs.FileFlusher)((*transientFile)(nil)) @@ -786,7 +786,7 @@ func main() { bufPool = lib.NewBufPool() - root := eggsNode{ + root := ternNode{ id: msgs.ROOT_DIR_INODE_ID, } fuseOptions := &fs.Options{ @@ -794,8 +794,8 @@ func main() { AttrTimeout: fileAttrCacheTimeFlag, EntryTimeout: dirAttrCacheTimeFlag, MountOptions: fuse.MountOptions{ - FsName: "eggsfs", - Name: "eggsfuse" + mountPoint, + FsName: "ternfs", + Name: "ternfuse" + mountPoint, MaxWrite: 1<<20, MaxReadAhead: 1<<20, DisableXAttrs: true, diff --git a/go/eggsgc/eggsgc.go b/go/terngc/terngc.go similarity index 99% rename from go/eggsgc/eggsgc.go rename to go/terngc/terngc.go index 5a4257d8..70302a3a 100644 --- a/go/eggsgc/eggsgc.go +++ b/go/terngc/terngc.go @@ -11,11 +11,11 @@ import ( "path" "sync/atomic" "time" - "xtx/eggsfs/cleanup" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" - "xtx/eggsfs/wyhash" + "xtx/ternfs/cleanup" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/wyhash" "net/http" _ "net/http/pprof" diff --git a/go/eggsrun/eggsrun.go b/go/ternrun/ternrun.go similarity index 94% rename from go/eggsrun/eggsrun.go rename to go/ternrun/ternrun.go index c55a0468..a3f612c1 100644 --- a/go/eggsrun/eggsrun.go +++ b/go/ternrun/ternrun.go @@ -1,4 +1,4 @@ -// Utility to quickly bring up a full eggsfs, including all its components, +// Utility to quickly bring up a full ternfs, including all its components, // while hopefully not leaking processes left and right when this process dies. package main @@ -9,10 +9,10 @@ import ( "path" "runtime" "time" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/managedprocess" - "xtx/eggsfs/msgs" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/managedprocess" + "xtx/ternfs/msgs" ) func noRunawayArgs() { @@ -26,7 +26,7 @@ func main() { buildType := flag.String("build-type", "release", "C++ build type") verbose := flag.Bool("verbose", false, "") trace := flag.Bool("trace", false, "") - dataDir := flag.String("data-dir", "", "Directory where to store the EggsFS data. If not present a temporary directory will be used.") + dataDir := flag.String("data-dir", "", "Directory where to store the TernFS data. 
If not present a temporary directory will be used.") failureDomains := flag.Uint("failure-domains", 16, "Number of failure domains.") hddBlockServices := flag.Uint("hdd-block-services", 2, "Number of HDD block services per failure domain.") flashBlockServices := flag.Uint("flash-block-services", 2, "Number of HDD block services per failure domain.") @@ -67,7 +67,7 @@ func main() { } if *dataDir == "" { - dir, err := os.MkdirTemp("", "eggsrun.") + dir, err := os.MkdirTemp("", "ternrun.") if err != nil { panic(fmt.Errorf("could not create tmp data dir: %w", err)) } @@ -103,14 +103,14 @@ func main() { var goExes *managedprocess.GoExes if *binariesDir != "" { cppExes = &managedprocess.CppExes{ - ShardExe: path.Join(*binariesDir, "eggsshard"), - CDCExe: path.Join(*binariesDir, "eggscdc"), + ShardExe: path.Join(*binariesDir, "ternshard"), + CDCExe: path.Join(*binariesDir, "terncdc"), } goExes = &managedprocess.GoExes{ - ShuckleExe: path.Join(*binariesDir, "eggsshuckle"), - BlocksExe: path.Join(*binariesDir, "eggsblocks"), - FuseExe: path.Join(*binariesDir, "eggsfuse"), - ShuckleProxyExe: path.Join(*binariesDir, "eggsshuckleproxy"), + ShuckleExe: path.Join(*binariesDir, "ternshuckle"), + BlocksExe: path.Join(*binariesDir, "ternblocks"), + FuseExe: path.Join(*binariesDir, "ternfuse"), + ShuckleProxyExe: path.Join(*binariesDir, "ternshuckleproxy"), } } else { fmt.Printf("building shard/cdc/blockservice/shuckle\n") diff --git a/go/eggss3/eggss3.go b/go/terns3/terns3.go similarity index 85% rename from go/eggss3/eggss3.go rename to go/terns3/terns3.go index 865fea07..46be65d5 100644 --- a/go/eggss3/eggss3.go +++ b/go/terns3/terns3.go @@ -6,10 +6,10 @@ import ( "net/http" "os" "strings" - "xtx/eggsfs/client" - "xtx/eggsfs/lib" - "xtx/eggsfs/msgs" - "xtx/eggsfs/s3" + "xtx/ternfs/client" + "xtx/ternfs/lib" + "xtx/ternfs/msgs" + "xtx/ternfs/s3" ) // bucketFlag is a custom flag type to handle multiple "-bucket" arguments. @@ -29,7 +29,7 @@ func main() { flag.Var(&buckets, "bucket", "Bucket mapping in format :. Can be repeated.") virtualHost := flag.String("virtual", "", "Domain for virtual host-style requests, e.g., 's3.example.com'") addr := flag.String("addr", ":8080", "Address and port to listen on") - eggsfsAddr := flag.String("eggsfs", "localhost:10001", "Address of the eggsfs metaserver") + ternfsAddr := flag.String("ternfs", "localhost:10001", "Address of the TernFS metaserver") verbose := flag.Bool("verbose", false, "") trace := flag.Bool("trace", false, "") flag.Parse() @@ -57,11 +57,11 @@ func main() { c, err := client.NewClient( log, nil, - *eggsfsAddr, + *ternfsAddr, msgs.AddrsInfo{}, ) if err != nil { - fmt.Fprintf(os.Stderr, "Failed to create eggsfs client: %v", err) + fmt.Fprintf(os.Stderr, "Failed to create TernFS client: %v", err) os.Exit(1) } @@ -69,7 +69,7 @@ func main() { for _, b := range buckets { parts := strings.SplitN(b, ":", 2) if len(parts) != 2 || parts[0] == "" || parts[1] == "" { - fmt.Fprint(os.Stderr, "Invalid bucket format %q. Expected :", b) + fmt.Fprintf(os.Stderr, "Invalid bucket format %q. 
Expected :", b) os.Exit(2) } bucketName, rootPath := parts[0], parts[1] diff --git a/go/eggsshuckle/base.html b/go/ternshuckle/base.html similarity index 87% rename from go/eggsshuckle/base.html rename to go/ternshuckle/base.html index 6cf2e0af..2ba61426 100644 --- a/go/eggsshuckle/base.html +++ b/go/ternshuckle/base.html @@ -9,7 +9,7 @@ - EggsFS — {{.Title}} + TernFS — {{.Title}} diff --git a/go/eggsshuckle/bootstrap.5.0.2.min.css b/go/ternshuckle/bootstrap.5.0.2.min.css similarity index 100% rename from go/eggsshuckle/bootstrap.5.0.2.min.css rename to go/ternshuckle/bootstrap.5.0.2.min.css diff --git a/go/eggsshuckle/directory.html b/go/ternshuckle/directory.html similarity index 100% rename from go/eggsshuckle/directory.html rename to go/ternshuckle/directory.html diff --git a/go/eggsshuckle/error.html b/go/ternshuckle/error.html similarity index 100% rename from go/eggsshuckle/error.html rename to go/ternshuckle/error.html diff --git a/go/eggsshuckle/file.html b/go/ternshuckle/file.html similarity index 100% rename from go/eggsshuckle/file.html rename to go/ternshuckle/file.html diff --git a/go/eggsshuckle/index.html b/go/ternshuckle/index.html similarity index 90% rename from go/eggsshuckle/index.html rename to go/ternshuckle/index.html index 617a1481..83185248 100644 --- a/go/eggsshuckle/index.html +++ b/go/ternshuckle/index.html @@ -14,7 +14,7 @@

- The most common occasion where a human sets block services flags is disk failure, see the docs for more info. + The most common occasion where a human sets block services flags is disk failure, see the docs for more info.

Loading...
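
Since the hunks above are almost entirely the mechanical `Eggs` → `Tern` identifier rename, a short illustrative sketch (not part of the patch) may help when reviewing how the renamed pieces fit together on the client side. Only the `xtx/ternfs/msgs` import path, `msgs.MakeTernTime`, `TernTime.Time`, `msgs.TernError` with its `Error`/`String` methods, and the `msgs.FILE_NOT_FOUND` constant are taken from the diff; the `main` wrapper and the printed strings are hypothetical.

```go
// Illustrative only, not part of the patch. Assumes the xtx/ternfs module
// layout introduced by this rename.
package main

import (
	"fmt"
	"time"

	"xtx/ternfs/msgs"
)

func main() {
	// TernTime (formerly EggsTime) is a uint64 timestamp; the patch shows
	// MakeTernTime/Time round-trips to and from time.Time.
	now := msgs.MakeTernTime(time.Now())
	fmt.Println("as time.Time:", now.Time())

	// TernError (formerly EggsError) is a uint16 error code that satisfies
	// the error interface, so callers type-assert and compare against the
	// named constants, as the fuse and blocks code above does.
	var err error = msgs.FILE_NOT_FOUND
	if ternErr, ok := err.(msgs.TernError); ok && ternErr == msgs.FILE_NOT_FOUND {
		fmt.Println("shard said:", ternErr.String()) // prints FILE_NOT_FOUND
	}
}
```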