This became a problem because the block service update log
entry does not fit in a UDP packet (it's roughly 100KB). I think this
approach makes more sense anyway. See the comment for `getCache()` for
gotchas.
Just check whether we're also unable to count the blocks for the
disk: if we can still count them, assume it's a single-file error;
if we can't, treat the whole disk as bad.
Of course there will be a window (a few minutes at most) in which we
will not yet have detected the bad disk through block counting, but
that's OK -- the scrubber will scrub blocks during that window, and
then stop.
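Roughly, the heuristic looks like this (a minimal sketch; `BlockService`,
`DiskId`, and `countBlocks` are hypothetical stand-ins, not the real API):

    #include <cstdint>

    // Hypothetical stand-ins for the real block service types.
    struct DiskId { uint64_t id; };
    struct BlockService {
        // Returns false if the blocks on the disk cannot be counted at all.
        bool countBlocks(DiskId disk, uint64_t& count);
    };

    // On a block IO failure, decide between "single bad file" and "bad disk".
    bool isSingleFileError(BlockService& svc, DiskId disk) {
        uint64_t count;
        // If we can still count the blocks on the disk, assume the failure
        // is confined to one file; otherwise the whole disk is suspect.
        return svc.countBlocks(disk, count);
    }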
Once <internal-repo/issues/65#issuecomment-24747>
is done, we should use whatever error detection we use for migration
to also distinguish between these errors.
This is in preparation for #44, but more immediately, it lets us do a
better job of not writing to full block services.
The previous strategy of setting a flag was flawed, since once
the flag was set it stayed set -- i.e. we would not clear it once
files were deleted. This consideration should instead be integrated
into how we distribute blocks across block services.
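For illustration, fullness can be folded into the distribution itself,
e.g. by weighting each block service by its free space when picking where
blocks go (a sketch with assumed names, not the actual picker):

    #include <cstdint>
    #include <random>
    #include <vector>

    // Hypothetical view of a block service's capacity.
    struct BlockServiceInfo {
        uint64_t bytesFree;
    };

    // Pick a block service weighted by free space. A full service naturally
    // gets weight 0, and regains weight as files are deleted -- there is no
    // sticky flag that we could forget to clear.
    size_t pickBlockService(const std::vector<BlockServiceInfo>& svcs,
                            std::mt19937_64& rng) {
        std::vector<uint64_t> weights;
        weights.reserve(svcs.size());
        for (const auto& s : svcs) { weights.push_back(s.bytesFree); }
        // Assumes at least one service has free space.
        std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
        return dist(rng);
    }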
I had copied the LIFO pattern from the ETD codebase, but it's not
needed here: the loop terminates gracefully, so we can coordinate
explicitly if needed.
See <https://mazzo.li/posts/stopping-linux-threads.html> for tradeoffs
regarding how to terminate threads gracefully.
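The shape of the loop here is just the usual stop-flag one (a sketch,
not the actual code):

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> stop{false};

    void loop() {
        while (!stop.load(std::memory_order_relaxed)) {
            // ... one step of work; it must not block forever, or the stop
            // flag will never be observed (the linked post discusses the
            // ways around this).
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
        // The loop falls out cleanly here, so the joining thread can
        // coordinate explicitly after join() if it needs to.
    }

    int main() {
        std::thread t(loop);
        stop.store(true);
        t.join();
    }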
The goal of this work was to get valgrind working correctly, in order
to investigate #141. It looks like I have succeeded:
==2715080== Warning: unimplemented fcntl command: 1036
==2715080== 20,052 bytes in 5,013 blocks are definitely lost in loss record 133 of 135
==2715080== at 0x483F013: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2715080== by 0x3B708E: allocate (new_allocator.h:121)
==2715080== by 0x3B708E: allocate (allocator.h:173)
==2715080== by 0x3B708E: allocate (alloc_traits.h:460)
==2715080== by 0x3B708E: _M_allocate (stl_vector.h:346)
==2715080== by 0x3B708E: std::vector<Crc, std::allocator<Crc> >::_M_default_append(unsigned long) (vector.tcc:635)
==2715080== by 0x42BF1C: resize (stl_vector.h:940)
==2715080== by 0x42BF1C: ShardDBImpl::_fileSpans(rocksdb::ReadOptions&, FileSpansReq const&, FileSpansResp&) (shard/ShardDB.cpp:921)
==2715080== by 0x420867: ShardDBImpl::read(ShardReqContainer const&, ShardRespContainer&) (shard/ShardDB.cpp:1034)
==2715080== by 0x3CB3EE: ShardServer::_handleRequest(int, sockaddr_in*, char*, unsigned long) (shard/Shard.cpp:347)
==2715080== by 0x3C8A39: ShardServer::step() (shard/Shard.cpp:405)
==2715080== by 0x40B1E8: run (core/Loop.cpp:67)
==2715080== by 0x40B1E8: startLoop(void*) (core/Loop.cpp:37)
==2715080== by 0x4BEA258: start_thread (in /usr/lib/libpthread-2.33.so)
==2715080== by 0x4D005E2: clone (in /usr/lib/libc-2.33.so)
==2715080==
==2715080==
==2715080== Exit program on first error (--exit-on-first-error=yes)
The idea is to drain the socket and do a single RocksDB WAL
write/fsync for all the write requests we have found.
Read requests are executed immediately. The reasoning is that
write requests are currently _a lot_ slower than read requests,
because fsyncing takes ~500us on fsf1. This might change in the
future.
While we're at it, we also use batch UDP syscalls in the CDC.
Fixes #119.
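A sketch of the idea, with the protocol parsing elided (`Request`,
`parse`, and `execRead` are hypothetical names; `recvmmsg` and the
RocksDB calls are real):

    #include <rocksdb/db.h>
    #include <rocksdb/write_batch.h>
    #include <sys/socket.h>

    // Hypothetical request representation -- the real parsing is elided.
    struct Request {
        bool write;
        rocksdb::Slice key, value;
    };
    Request parse(const char* buf, size_t len);         // hypothetical
    void execRead(rocksdb::DB* db, const Request& req); // hypothetical

    void step(int sock, rocksdb::DB* db) {
        constexpr unsigned kMaxMsgs = 64;
        char bufs[kMaxMsgs][4096];
        iovec iovs[kMaxMsgs];
        mmsghdr msgs[kMaxMsgs] = {};
        for (unsigned i = 0; i < kMaxMsgs; i++) {
            iovs[i] = {bufs[i], sizeof(bufs[i])};
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        // Drain everything queued on the socket in one syscall.
        int n = recvmmsg(sock, msgs, kMaxMsgs, MSG_DONTWAIT, nullptr);
        if (n <= 0) return;

        rocksdb::WriteBatch batch;
        for (int i = 0; i < n; i++) {
            Request req = parse(bufs[i], msgs[i].msg_len);
            if (req.write) batch.Put(req.key, req.value); // accumulate writes
            else execRead(db, req);                       // answer reads now
        }
        // One WAL write + fsync covers all the writes we drained.
        rocksdb::WriteOptions opts;
        opts.sync = true;
        db->Write(opts, &batch);
    }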
The goal here is to avoid constant wakeups due to timeouts. We do
not attempt to clean things up nicely before termination -- we just
terminate. We can set up a proper termination system in the future;
first I want to see if this makes a difference.
Also, change xmon to use pipes for communication, so that it can
wait without timers as well.
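The pipe trick is the standard one: the waiter blocks in poll() with no
timeout, and whoever has something for it writes a byte (sketch):

    #include <poll.h>
    #include <unistd.h>

    // Block until someone writes to the pipe -- no timeout, so no
    // periodic wakeups.
    void waitForWork(int pipeReadFd) {
        pollfd pfd = {pipeReadFd, POLLIN, 0};
        poll(&pfd, 1, -1); // -1 = wait forever
        char buf[64];
        read(pipeReadFd, buf, sizeof(buf)); // drain the wakeup bytes
    }

    // Wake up the waiter.
    void notify(int pipeWriteFd) {
        char b = 1;
        write(pipeWriteFd, &b, 1);
    }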
Also, use `write` directly for logging, so that we know the logs will
have made it to the file by the time the logging call returns (since
we no longer get a chance to flush them afterwards).
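i.e. the logging path becomes roughly (sketch):

    #include <cstring>
    #include <unistd.h>

    // write(2) directly to the log fd: once this returns, the bytes are in
    // the kernel and will show up in the file even if we die right after --
    // there is no user-space stdio buffer left to flush.
    void logLine(int logFd, const char* line) {
        size_t len = strlen(line);
        size_t off = 0;
        while (off < len) {
            ssize_t n = write(logFd, line + off, len - off);
            if (n <= 0) break; // sketch: real code would retry on EINTR
            off += (size_t)n;
        }
    }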
This is to save on a ton of writes as jobs stat tons of files.
It would maybe be a bit cleaner to do it in the kmod, but this is
much quicker.
Thanks to @sgrusny for the good idea.
The `tuple` was from when I thought it'd be useful to leave slots
for each request, but we don't need that anymore, and leading up
to #66 I want to be able to keep vectors of reqs/resps.
Fixes #32. This also involves some reworking of the block request machinery
to make it more robust and faster. The scrubbing is done assuming that
the overwhelming majority of block checks will go through.