Makes the announce processing job code easier to deal with. Querying the peer count in real time on the torrent and torrents pages incurs a negligible performance loss (<5%).
When a user
1) completes a torrent (downloaded = x),
2) immediately stops it (downloaded = x),
3) resumes it (downloaded = 0) within the next 30 seconds,
then, during (1), the downloaded delta will be x. Announce (2) is considered to overlap (1), so it is delayed 30 seconds, during which (3) gets processed first with a delta of 0. Once the 30 seconds are up, (2) is processed, but because the most recently processed announce (3) reported a downloaded of 0, (2)'s delta is x again, causing the download to be counted twice.
The same happens with upload.
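The sequence above can be modeled in a few lines. This is an illustrative Python sketch, not the PR's code: announces carry a cumulative `downloaded` counter, and the tracker credits the delta against the last announce it *processed*, which is not necessarily the last one *sent*.

```python
def credit(announces_in_processing_order):
    """Credit downloaded bytes from cumulative counters, in the order
    the jobs happen to be processed (illustrative model of the bug)."""
    credited = 0
    last_downloaded = 0
    for downloaded in announces_in_processing_order:
        delta = downloaded - last_downloaded
        if delta > 0:          # a client restart resets the counter to 0,
            credited += delta  # so negative deltas are ignored
        last_downloaded = downloaded
    return credited

# Send order: completed (x=1000), stopped (1000), resumed (0).
# The stopped event overlaps the completed one, is delayed 30 s,
# and is processed *after* the resumed event:
print(credit([1000, 0, 1000]))  # -> 2000: the download is double-counted
print(credit([1000, 1000, 0]))  # -> 1000: correct when processed in send order
```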
The peer must be selected while we hold the `WithoutOverlapping()` middleware lock. Previously, the peer was selected beforehand, so stale data (from single-digit seconds earlier) could be used to calculate the delta when a user sent a stopped event immediately after a completed event. Selecting the peer inside the lock guarantees the row is both read and updated before any other job can compute its own delta.
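The essential change is moving the read inside the critical section so the read-compute-write cycle is atomic. A minimal Python analogue (a `threading.Lock` standing in for the `WithoutOverlapping()` job lock, an in-memory dict standing in for the peers table):

```python
import threading

_lock = threading.Lock()  # stands in for the WithoutOverlapping() lock
_peers = {}               # peer_id -> last reported cumulative downloaded

def process_announce(peer_id, downloaded):
    """Read the previous peer row and write the new one inside the SAME
    critical section, so no other job can compute a delta from stale data
    in between (illustrative sketch, not the PR's actual code)."""
    with _lock:
        previous = _peers.get(peer_id, 0)  # SELECT happens under the lock
        delta = max(downloaded - previous, 0)
        _peers[peer_id] = downloaded       # UPDATE before the lock is released
    return delta
```

Selecting the peer before acquiring the lock would reintroduce the race: two jobs could both read the same "previous" value and both credit a full delta.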
Sometimes, for whatever reason, a user might have more peers than their slot limit. This can happen because we don't force every user to complete every download they start: a user can stop a torrent (clearing the slot), start a new one, and then resume the first torrent using peers cached by the client. Previously, the stopped peer was no longer shown by the tracker and its stats weren't affected. When the user then completed that torrent, they got an error saying they couldn't send a completed event without first sending a started event. This could only be resolved by restarting the client or pausing and resuming the torrent, which resets the stats for that torrent session. This PR accounts for this case and allows peer updates to continue, but the over-limit user will no longer receive peers in the peer list, and other users won't see their peer in the list.
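The new policy separates stat tracking from peer-list visibility. A rough sketch of the idea in Python (function and field names are illustrative, not from the PR):

```python
def record_announce(user_peers, peer_id, slot_limit, stats):
    """An announce beyond the user's slot limit still updates stats,
    but the peer is flagged invisible so it never enters peer lists."""
    is_new = peer_id not in user_peers
    over_limit = is_new and len(user_peers) >= slot_limit
    user_peers[peer_id] = {"visible": not over_limit, **stats}

def peer_list(user_peers):
    # only visible peers are handed out to other clients
    return [pid for pid, p in user_peers.items() if p["visible"]]

peers = {}
record_announce(peers, "p1", slot_limit=1, stats={"downloaded": 10})
record_announce(peers, "p2", slot_limit=1, stats={"downloaded": 0})
print(peer_list(peers))  # -> ['p1']: p2's stats are tracked but hidden
```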
The previous system had minor edge cases where pausing a torrent immediately after completing it caused data to be duplicated. We now use DTOs to transfer the data into the job queue instead of models, avoiding the extra database queries caused by passing models into a job queue.
Updating all of a user's peers to be either connectable or unconnectable causes deadlocks when it runs at the same time as a bulk peer upsert. We need to combine the peer connectability updates with the regular bulk peer upserts.
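The deadlock arises because two concurrent statements (a per-user `UPDATE ... SET connectable = ...` and a bulk upsert) can lock overlapping peer rows in different orders. Folding the flag into the upsert rows leaves a single write path. A minimal sketch of that merge, with an assumed `connectable` column and illustrative names:

```python
def merge_connectability(upsert_rows, connectable_by_user):
    """Fold the per-user connectability flag into the bulk upsert rows,
    replacing the separate UPDATE statement that raced the upsert
    (illustrative sketch; row shape and names are assumptions)."""
    return [
        {**row, "connectable": connectable_by_user[row["user_id"]]}
        for row in upsert_rows
    ]

rows = merge_connectability(
    [{"user_id": 1, "downloaded": 5}],
    {1: True},
)
# rows now carry the flag, so one upsert writes everything at once
```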
By default, Laravel serializes models in a custom way when they are pushed onto a job queue and fetches the model from the database again when the job is run. We'd rather avoid these extra queries, so we can't pass in models, or data that is already serialized (Laravel would try to deserialize our serialization too and fail). Instead we opted for arrays, filling the properties into a new model once the job is handled.
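In plain-Python terms the pattern looks like this (the `PeerDTO` name and fields are illustrative, not from the PR): the queue payload is a plain array of scalar attributes, and the handler fills a fresh in-memory object from it with no extra database round trip.

```python
from dataclasses import dataclass, asdict

@dataclass
class PeerDTO:
    torrent_id: int
    user_id: int
    downloaded: int

# dispatch side: serialize only the raw attributes, not a model,
# so the queue stores a plain JSON-able structure
payload = asdict(PeerDTO(torrent_id=7, user_id=42, downloaded=1000))

# handle side: re-fill a new object from the array -- no SELECT needed,
# unlike Laravel's default model (re)fetch on job execution
restored = PeerDTO(**payload)
```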