we now have handling for them, no need to keep seeing them as stacktraces.
also: in the EAGER setup, raising means the transaction is rolled back,
and nothing is stored in the DB at all.
if we ever want to 'get more info', something like capture_or_log_exception
would be more apt
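To make the difference concrete, a minimal sketch (hypothetical names, not Bugsink's actual API) of raising vs. logging inside an atomic block:

    # sketch only: raising inside the atomic block rolls everything back,
    # logging (a capture_or_log_exception-style helper) keeps what was stored
    import logging

    from django.db import transaction

    logger = logging.getLogger(__name__)

    def digest(event, problem):
        with transaction.atomic():
            event.save()

            # raise problem  # -> whole block rolled back, nothing in the DB at all
            logger.error("handled during digest", exc_info=problem)  # -> event stays stored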
As per the parent commit: the "small check" is not bullet-proof (see #99),
and in Docker/systemd environments it's better to leave the thing that's
actually in charge of lifecycles in charge, rather than reproduce that behavior.
You can’t fail the check if you deliberately skipped it.
Fix #99
As implied by this comment:
> this implementation is not supposed to be bullet-proof for race conditions (nor is it cross-platform)... it's
> just a small check to prevent the regularly occurring cases:
> * starting a second runsnappea in development
> * running 2 separate instances of bugsink on a single machine without properly distinguishing them
but this "small check" gets in the way sometimes, so it's better to be able to turn it off.
See #99
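The general shape of the opt-out, as a sketch (the setting name and the pid-file mechanics here are made up, not necessarily what Bugsink actually does):

    # sketch only: gate the not-bullet-proof "second instance" check behind a
    # setting, so Docker/systemd setups can leave lifecycle management to the
    # process manager instead
    import os

    from django.conf import settings

    def check_for_second_instance(pid_file="/tmp/runsnappea.pid"):
        if not getattr(settings, "CHECK_FOR_SECOND_INSTANCE", True):  # hypothetical setting
            return

        if os.path.exists(pid_file):
            raise RuntimeError("another instance appears to be running (see %s)" % pid_file)

        with open(pid_file, "w") as f:
            f.write(str(os.getpid()))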
on the subject of this being the 3rd time (or more) that I'm fixing this:
> even a donkey typically doesn't bump into the same stone twice (Dutch proverb)
Q: but what animal put the stone there in the first place?
A: Python's language "designers"
in the correct timezone, with smaller millis
According to the spec, this should work because:
> The timestamp of the breadcrumb. Recommended. A timestamp representing when
> the breadcrumb occurred. The format is either a string as defined in [RFC
> 3339](https://tools.ietf.org/html/rfc3339) or a numeric (integer or float)
> value representing the number of seconds that have elapsed since the [Unix
> epoch](https://en.wikipedia.org/wiki/Unix_time). Breadcrumbs are most useful
> when they include a timestamp, as it creates a timeline leading up to an
> event.
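A minimal sketch of what 'accept both' looks like, plus rendering in the right timezone with millisecond precision (helper names made up here):

    # sketch only: a breadcrumb timestamp is either seconds since the Unix epoch
    # (int/float) or an RFC 3339 string; render it in a given timezone with millis
    from datetime import datetime, timezone

    def parse_breadcrumb_timestamp(value):
        if isinstance(value, (int, float)):
            return datetime.fromtimestamp(value, tz=timezone.utc)
        return datetime.fromisoformat(value.replace("Z", "+00:00"))  # pre-3.11 friendly

    def format_breadcrumb_timestamp(value, tz):
        dt = parse_breadcrumb_timestamp(value).astimezone(tz)
        return dt.strftime("%H:%M:%S.%f")[:-3]  # microseconds truncated to millis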
the store API is deprecated and, because it doesn't support the
ingest/digest split, can be quite confusing.
this is similar to 2b8efc9452 (for the stress_test command the 'store'
API option was removed entirely)
Given that I rarely use this in practice, the potential advantages do not
outweigh the actual disadvantages (breakage today, as well as in March,
see 38d49f5000)
Fix #168
'philosophically' I prefer to keep my dev-deps in flux ('bleeding edge')
but since I barely use the DJDT I'd rather just pin it at a known-working
version.
Also: 6.0 introduces DB models (for a debug tool), which I'm not a fan of.
Probably removing DJDT right after this, which would make this commit a
good point to revert to if we ever want to reintroduce it
See #168
for each bundle upload both the chunks and the zipped bundle
were kept (even though they are only needed on upload, i.e.
after extracting we deal with the extracted files exclusively)
This is an important step in 'keeping sourcemaps-related data-usage limited',
see #129
chunk_upload is and has always been working 'for real'. The only sense in
which the comment has been 'vaguely in the direction of truth' was that
with a chunkSize and maxRequestSize of 32MiB, sourcemap uploads will often
have been single-chunk in practice.
See #147
i.e. update the comments to reflect what I just learned doing some actual
experiments.
See #147
b.t.w. the now-removed comment was somewhat misleading: "single-chunk"
was (and is) being forced as in "single chunk per request" but not as
in "single chunk per file", and it was only forced by chunksPerRequest=1,
not by concurrency=1.
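For reference, these are knobs from the chunk-upload options that sentry-cli reads; a sketch with the values mentioned above (the surrounding shape is illustrative, not Bugsink's literal response):

    # sketch only: values as discussed above, other fields omitted
    CHUNK_UPLOAD_OPTIONS = {
        "chunkSize": 32 * 1024 * 1024,       # 32MiB; most sourcemap uploads fit in one chunk
        "maxRequestSize": 32 * 1024 * 1024,  # 32MiB
        "chunksPerRequest": 1,               # this is what forces "single chunk per request"
        "concurrency": 1,                    # this does not force anything of the sort
    }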
As per the comment.
Since we haven't actually gone multi-chunk, this is just preparation
The now-removed comment should be read as 'it could be assumed that
unzipping introduces a factor 5 increase between chunk size and file
size' but that's a whole bunch of assumptions that I'd rather get
rid of (mental overhead, with little gain).
See #147