Implemented using a batch-wise dependency-scanner in delayed
(snappea) style.
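Roughly the shape this takes, as a sketch only (model list, module paths, batch size and the assumption that each model has a project_id FK are illustrative, not the actual implementation): dependents are deleted deps-first in fixed-size batches, and the snappea task re-enqueues itself as long as work remains, after which the Project itself goes.
```
# Sketch only; names/paths are assumptions inferred from the table names below.
from events.models import Event
from issues.models import Grouping, Issue
from projects.models import Project
from tags.models import EventTag, IssueTag

def delete_project_deps_batch(project_id, batch_size=500):
    """Delete one batch of Project-dependents; return True if more work remains.

    The caller (a snappea task) re-enqueues itself as long as this returns True.
    """
    for model in (EventTag, IssueTag, Event, Grouping, Issue):  # deps-first order
        pks = list(model.objects
                   .filter(project_id=project_id)
                   .values_list("pk", flat=True)[:batch_size])
        if pks:
            model.objects.filter(pk__in=pks).delete()
            return True  # more of this model (or later ones) may remain
    Project.objects.filter(pk=project_id).delete()  # nothing depends on it anymore
    return False
```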
* no real point-of-entry in the (regular, non-admin) UI yet.
* no hiding of Projects which are delete-in-progress from the UI
* lack of DRY
* some unnecessary work (needed in the Issue-context, but not here) is still being done.
See #50
I originally thought `SET_NULL` would be a good way to "do stuff later", but
that's only true to the degree that [1] updates are cheaper than deletes and [2]
2nd-order effects (further deletes in the dep-tree) are avoided.
Now that we have explicit Issue-deletion (deps-first, delayed, properly batched)
the SET_NULL behavior is always a no-op (but with cost in queries).
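Concretely, in sketch form (the field comes from the query log below; what SET_NULL actually gets replaced with in the codebase is an assumption here):
```
from django.db import models

class Grouping(models.Model):
    # was on_delete=models.SET_NULL: a guaranteed no-op now that Issues are only
    # deleted after their Groupings, yet it still costs a SELECT plus an UPDATE
    # on every Issue-deletion. DO_NOTHING expresses "explicit deletion handles it".
    issue = models.ForeignKey("issues.Issue", null=True, on_delete=models.DO_NOTHING)
```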
As a result, in the test for issue deletion (which has deletes for many
of the altered models), the following 8 queries are no longer done:
```
SELECT "issues_grouping"."id", [..many fields..] FROM "issues_grouping" WHERE "issues_grouping"."id" IN (1)
UPDATE "events_event" SET "grouping_id" = NULL WHERE "events_event"."grouping_id" IN (1)
[.. a few moments later..]
SELECT "issues_issue"."id", [..many fields..] FROM "issues_issue" WHERE "issues_issue"."id" = 'uuid'
UPDATE "issues_grouping" SET "issue_id" = NULL WHERE "issues_grouping"."issue_id" IN ('uuid')
UPDATE "issues_turningpoint" SET "issue_id" = NULL WHERE "issues_turningpoint"."issue_id" IN ('uuid')
UPDATE "events_event" SET "issue_id" = NULL WHERE "events_event"."issue_id" IN ('uuid')
UPDATE "tags_eventtag" SET "issue_id" = NULL WHERE "tags_eventtag"."issue_id" IN ('uuid')
UPDATE "tags_issuetag" SET "issue_id" = NULL WHERE "tags_issuetag"."issue_id" IN ('uuid')
```
(breaks the tests b/c of constraints and because they don't always use factories; will fix next)
CASCADE was defined for keys & values, but in practice those are never directly
deleted except in the very case in which it has been established that they are
'orphaned', i.e. no longer being referred to. That's exactly the case in which
CASCADE is superfluous.
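Put differently: by the time a TagValue is deleted, it has already been established that nothing refers to it, so CASCADE on the referring FKs never has anything to cascade to. A sketch of that prune step (model names inferred from the table names; the actual prune implementation is an assumption):
```
from tags.models import EventTag, IssueTag, TagValue  # assumed module path

def prune_orphaned_tagvalues():
    # Only values that no EventTag/IssueTag references anymore are deleted,
    # which is exactly the case in which CASCADE has nothing left to do.
    TagValue.objects.exclude(
        id__in=EventTag.objects.values("value_id"),
    ).exclude(
        id__in=IssueTag.objects.values("value_id"),
    ).delete()
```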
As a result, in the test for issue deletion (which contains a prune of
tagvalue), the following 3 queries are no longer done:
```
SELECT "tags_tagvalue"."id", "tags_tagvalue"."project_id", "tags_tagvalue"."key_id", "tags_tagvalue"."value" FROM "tags_tagvalue" WHERE "tags_tagvalue"."id" IN (1)
DELETE FROM "tags_eventtag" WHERE "tags_eventtag"."value_id" IN (1)
DELETE FROM "tags_issuetag" WHERE "tags_issuetag"."value_id" IN (1)
```
An event always has a single (automatically calculated) Grouping associated with it.
We add this info to the Event model (we'll soon display it in the UI, and as per the
now-removed comment it's simply the consistent thing to do)
* As per the spec 'takes precedence'
* Also fixes the reported bug on Laravel, which apparently doesn't send event_id
as part of the event payload.
* Fixes the envelope tests (they had been doing nothing since I moved the
data samples around recently)
* Adds a 'no event_id in data, but yes in envelope' case to those tests.
* Adds handling to send_json such that we can send envelopes when the event_id
is missing from the event data.
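A minimal sketch of what that amounts to (the helper is hypothetical; the only thing taken from the spec is that the envelope header's event_id takes precedence over the one in the payload):
```
import json
import uuid

def build_envelope(event_data):
    # Hypothetical helper: generate an event_id for the envelope header when the
    # payload (e.g. from the Laravel SDK) does not carry one itself.
    event_id = event_data.get("event_id") or uuid.uuid4().hex
    payload = json.dumps(event_data)
    return "\n".join([
        json.dumps({"event_id": event_id}),                     # envelope header
        json.dumps({"type": "event", "length": len(payload)}),  # item header
        payload,                                                # the event itself
    ])
```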
This reverts course on 4201fbd778, and restores event.schema.json from that
commit. In that commit we said: 'this is not used'. Not true: it's used in a
test, though that test used the validity check to silently skip invalid samples.
In this commit:
1. Do _not_ just silently skip invalid samples. Since we have a way of properly
validating, let's use that so that we know how useful the samples that we have
actually are.
2. Deal with "_meta", a field that we sometimes see in the "private samples" (data
that ultimately comes from running a somewhat recent python-sdk against my
actual codebase). The need for this was exposed by [1]
3. Add a test for the up-to-date-ness of event.schema.json
4. Remove special-cased attribute-checks in `is_valid`; `send_json` was, at the
time, an opportunistic way to just get my hands on some sample data. The
approach to validation reflected that: I just did some tests on the existence
of certain attributes to determine which json files were even events. But in
the end I did a full validation using an API schema, which made the individual
attribute checks useless. This commit cleans them up.
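For reference, a sketch of what schema-based validation (without the special-cased attribute checks) boils down to; the draft version, schema path, and the decision to strip "_meta" rather than add it to the schema are assumptions:
```
import json
from jsonschema import Draft7Validator  # assumed available

with open("event.schema.json") as f:
    validator = Draft7Validator(json.load(f))

def is_valid(event_data):
    # "_meta" shows up in some of the private samples; stripping it here is one
    # way of dealing with it (whether that matches the commit is an assumption).
    stripped = {k: v for k, v in event_data.items() if k != "_meta"}
    return next(validator.iter_errors(stripped), None) is None
```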
Recently introduced (e59fd3a225) caching made it so that repeated calls to
create_release_if_needed no longer work if releases have been created in the
meantime.
In actual usage create_release_if_needed is always called on a single project,
so I _think_ just calling `fresh` in the tests is a good way to get to green
tests again.
Now that we have eviction, events may disappear. Deal with it:
* event-specific 404 that still allows for navigation
* first/last buttons
* navigation to prev/next when prev/next is not just 1 step away
* don't use HttpRedirect for "lookup based" views
  * in principle, the thing you were looking for might go missing in-between
  * drawback: these URLs are not "stable"
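A sketch of the "lookup based" shape (view name, ordering and templates are illustrative): resolve the lookup and render directly, because a redirect's target might already be evicted by the time the redirect is followed; when even the looked-up event is gone, fall back to an event-specific 404 that still offers navigation.
```
from django.shortcuts import render

from events.models import Event  # assumed module path

def event_next(request, issue_pk, event_pk):
    # prev/next may be more than 1 step away (evicted in-between), so look up
    # "the next surviving event" rather than pk + 1.
    event = (Event.objects
             .filter(issue_id=issue_pk, pk__gt=event_pk)
             .order_by("pk")
             .first())
    if event is None:
        # event-specific 404: still renders navigation (issue, first/last buttons)
        return render(request, "events/event_not_found.html",
                      {"issue_pk": issue_pk}, status=404)
    return render(request, "events/event_detail.html", {"event": event})
```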
Replacing the event-listener approach with passing the thresholds on each call to `inc`.
The event-based approach was broken in a multi-process setup (such as having a separate
gunicorn and snappea), because the unmute events would be registered GUI-side
(gunicorn), and the single process where the counting happened had a different PC
instance.
The solution is to get rid of the event-listener approach, and just make an inventory of
the threshold-checks that need to be done right before each call to `inc`. Because the
calls to `inc` happen in a single process (we [will] enforce this elsewhere) this fixes
the problem.
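In sketch form (class and parameter names are assumptions, not the actual API): the caller inventories the applicable threshold-checks right before calling `inc`, and they are evaluated inline in the one process that actually counts, so nothing has to be registered on a counter instance living in another process.
```
class PeriodCounter:
    """Sketch of a counter whose thresholds travel with each call to inc()."""

    def __init__(self):
        self.count = 0

    def inc(self, thresholds=()):
        # thresholds: (limit, callback) pairs, gathered by the caller right
        # before this call; checked here rather than via registered listeners.
        self.count += 1
        for limit, callback in thresholds:
            if self.count >= limit:
                callback()


# usage: the checks are passed along, so it does not matter that the web process
# and the counting process hold different counter instances.
counter = PeriodCounter()
counter.inc(thresholds=[(3, lambda: print("threshold reached: unmute"))])
```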
During refactoring it became clear that this is probably a good idea anyway: many
comments about corner-cases could be removed.
Other things I found:
* The now-removed `_digest_event_python_postprocessing` did more than Python-only
postprocessing (it also touched the DB for unmutes), so that was probably a
separate bug (now fixed).
* In the event-listener-based code, I foresaw the need for `on_become_false` (but did
not use it yet). The idea was probably that this could be useful in the quota setting
(a quota can become unmet after a while) but in fact it isn't useful, because when a
quota becomes unmet you'd still need to check all quota and OR them.
Tests have not been truly refactored (the new architecture probably points to a new
desired set of tests) but rather have been made to run in the simplest way possible.
and make the type 'Log Message'.
This may just be a matter of personal taste, but I've always found these
messages-as-type super-confusing. I'd rather have some made up (but clear,
and thanks to the whitespace unlikely-to-otherwise-exist) type "Log Message".
This is also in line with what I've already done in the UI.