Like e45c61d6f0, but for `.project`.
I originally thought `SET_NULL` would be a good way to "do stuff later", but
that's only true to the degree that [1] updates are cheaper than deletes and [2]
2nd-order effects (further deletes in the dep-tree) are avoided.
Now that we have explicit Project-deletion (deps-first, delayed, properly batched)
the `SET_NULL` behavior is always a no-op (but it still costs queries).
As a result, in the test for project deletion (which has deletes for many
of the altered models), the following 12 queries are no longer done:
```
SELECT "projects_project"."id", [..many fields..] FROM "projects_project" WHERE "projects_project"."id" = 1
DELETE FROM "projects_projectmembership" WHERE "projects_projectmembership"."project_id" IN (1)
DELETE FROM "alerts_messagingserviceconfig" WHERE "alerts_messagingserviceconfig"."project_id" IN (1)
UPDATE "releases_release" SET "project_id" = NULL WHERE "releases_release"."project_id" IN (1)
UPDATE "issues_issue" SET "project_id" = NULL WHERE "issues_issue"."project_id" IN (1)
UPDATE "issues_grouping" SET "project_id" = NULL WHERE "issues_grouping"."project_id" IN (1)
UPDATE "events_event" SET "project_id" = NULL WHERE "events_event"."project_id" IN (1)
UPDATE "tags_tagkey" SET "project_id" = NULL WHERE "tags_tagkey"."project_id" IN (1)
UPDATE "tags_tagvalue" SET "project_id" = NULL WHERE "tags_tagvalue"."project_id" IN (1)
UPDATE "tags_eventtag" SET "project_id" = NULL WHERE "tags_eventtag"."project_id" IN (1)
UPDATE "tags_issuetag" SET "project_id" = NULL WHERE "tags_issuetag"."project_id" IN (1)
```
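For concreteness, a minimal sketch of the kind of model change this implies,
assuming (the actual diff may differ) that the `project` FK moves from
`on_delete=SET_NULL` to `on_delete=DO_NOTHING` now that deletion is handled
explicitly:
```
# Hedged sketch, not the actual diff: with explicit deps-first Project-deletion,
# the no-op SET_NULL cleanup (and its UPDATE queries) can be dropped.
from django.db import models


class Issue(models.Model):  # same pattern for Release, Grouping, Event, TagKey, ...
    # before: models.ForeignKey("projects.Project", null=True, on_delete=models.SET_NULL)
    project = models.ForeignKey(
        "projects.Project", null=True, blank=True, on_delete=models.DO_NOTHING)
```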
I originally thought `SET_NULL` would be a good way to "do stuff later", but
that's only true to the degree that [1] updates are cheaper than deletes and [2]
2nd-order effects (further deletes in the dep-tree) are avoided.
Now that we have explicit Issue-deletion (deps-first, delayed, properly batched)
the `SET_NULL` behavior is always a no-op (but it still costs queries).
As a result, in the test for issue deletion (which has deletes for many
of the altered models), the following 8 queries are no longer done:
```
SELECT "issues_grouping"."id", [..many fields..] FROM "issues_grouping" WHERE "issues_grouping"."id" IN (1)
UPDATE "events_event" SET "grouping_id" = NULL WHERE "events_event"."grouping_id" IN (1)
[.. a few moments later..]
SELECT "issues_issue"."id", [..many fields..] FROM "issues_issue" WHERE "issues_issue"."id" = 'uuid'
UPDATE "issues_grouping" SET "issue_id" = NULL WHERE "issues_grouping"."issue_id" IN ('uuid')
UPDATE "issues_turningpoint" SET "issue_id" = NULL WHERE "issues_turningpoint"."issue_id" IN ('uuid')
UPDATE "events_event" SET "issue_id" = NULL WHERE "events_event"."issue_id" IN ('uuid')
UPDATE "tags_eventtag" SET "issue_id" = NULL WHERE "tags_eventtag"."issue_id" IN ('uuid')
UPDATE "tags_issuetag" SET "issue_id" = NULL WHERE "tags_issuetag"."issue_id" IN ('uuid')
```
(this breaks the tests, because of constraints and because factories aren't always used; will fix next)
Problem: on mysql `make_consistent` cannot always clean up `Event`s, because
`EventTag` objects still point to them, leading to an `IntegrityError`.
The problem does not happen for `sqlite`, because sqlite does its FK-checks
on-commit, and the offending `EventTag` objects are "eventually cleaned up" (in
the same transaction, in `make_consistent`).
This is the "mostly works" solution, for the scenario we've encountered.
Namely: remove EventTags which have no issue before removing Events. This works
in practice because of the way Events-to-cleanup were created in the UI in
practice, namely by removal of some Issue in the admin, triggering a `SET_NULL`
on the `issue_id`. Removal of issue implies an analagous `SET_NULL` on the
`EventTag`'s `issue_id`, and by removing those `EventTag`s before proceeding
with the `Event`s, you avoid the FK constraint triggering.
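Sketched in ORM terms (import paths and the exact filters are assumptions; the
real code lives in `make_consistent`):
```
# Hedged sketch of the ordering described above, not the literal implementation.
from django.db import transaction

from events.models import Event  # assumed import paths
from tags.models import EventTag


def cleanup_orphan_events():
    with transaction.atomic():
        # First remove EventTags whose Issue was SET_NULL'd away; these are the
        # rows that still point at the Events we are about to delete.
        EventTag.objects.filter(issue__isnull=True).delete()

        # Only then delete the orphaned Events themselves, so mysql's immediate
        # FK checks no longer trip over dangling EventTag references.
        Event.objects.filter(issue__isnull=True).delete()
```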
We don't want to fully reimplement `CASCADE` (as in Django) here, the values
of `on_delete` are "Design Decision Needed" and non-homogeneous anyway, and we
might soon implement proper deletions (see #50), so the "mostly works" solution
will have to do for now.
Fixes #132
in the absence of a clean reproducer, just doing proper lookups on a
normalized thing seems the more stable thing to do.
Fix #105 (in the absence of a test: I think)
this error has shown up for one of our users; I can't reproduce it yet, but I
can make it better:
* log-don't-crash: not worth failing for this (failing drops the event, and also
  rolls back the transaction, so nothing is achieved regarding eviction anyway);
  see the sketch after this list
* provide more info on error (various counts)
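A minimal sketch of the log-don't-crash idea (the logger name and the passed-in
counts are illustrative assumptions, not the actual eviction code):
```
# Hedged sketch: swallow the error and log diagnostics instead of failing.
import logging

logger = logging.getLogger("bugsink.eviction")  # assumed logger name


def evict_or_log(evict, **counts):
    try:
        evict()
    except Exception:
        # Failing here would drop the event and roll back the transaction, so
        # nothing would be gained; log the error plus some counts instead.
        logger.exception("eviction failed; counts: %s", counts)
```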
NB: I've also changed the `<` into a `<=`, and combined it with a check on "loop
not done". I _think_ the two are functionally equivalent, and that the new
version is simply clearer as well as slightly more efficient.
In my understanding: the old version simply looped one more time before giving
up (because of the `<` it needed one more iteration, and because there was no
explicit check on 'loop done', that inefficiency was needed in the old
formulation). I say "I think" because I don't have a test specific to this edge case.
this command to store all events on the local filesystem was useful
while 'scaffolding', i.e. getting my hands on some initial event-data in
the early days of Bugsink, but it was never meant as a permanent tool.