Now that we upsert history records without first selecting them, we can no longer rely on storing a peer's last uploaded/downloaded values in the history record to determine the user's uploaded/downloaded delta since their last announce. If a user has internet issues for a brief period but their client keeps running, the change in upload/download between the two announces still needs to be tracked. This is usually tracked in the peer record, but if the peer is deleted after 2 hours without announcing, its last uploaded/downloaded data is deleted with it. We previously stored this data in the history table to handle such cases, but that became erroneous when a user had multiple peers on a torrent. The new solution keeps peers in the database for 2 days before concluding the peer isn't coming back and deleting it permanently. At that point, a new peer record is created and we assume they uploaded/downloaded 0 data during their downtime.
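As a rough sketch of that retention rule (the command name and the use of `updated_at` as the last-announce timestamp are assumptions for illustration, not the project's actual code), a scheduled Laravel command could flush peers that have been silent for 2 days:

```php
<?php

namespace App\Console\Commands;

use App\Models\Peer;
use Illuminate\Console\Command;

class FlushStalePeers extends Command
{
    // Hypothetical signature; the real project's command may differ.
    protected $signature = 'peers:flush-stale';

    protected $description = 'Delete peers that have not announced in 2 days';

    public function handle(): void
    {
        // A peer that stops announcing is kept for 2 days so its
        // uploaded/downloaded totals survive a short client outage.
        // Past that window we assume the client is gone for good.
        Peer::where('updated_at', '<', now()->subDays(2))->delete();
    }
}
```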
By default, Laravel serializes models in a custom way when they are pushed onto a job queue and fetches the model from the database again when the job is run. We would rather avoid these extra queries, so we can't pass in models, or data that is already serialized (Laravel would try to deserialize our serialization too and fail). Instead we opted for plain arrays, filling the properties back into a new model once the job is handled.
The `SerializesModels` trait fetches a fresh copy of the record from the database, adding 4 queries we didn't realize we were running. This change reduces the query time in the ProcessAnnounce job by 55%.
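A minimal sketch of the pattern (the real ProcessAnnounce job carries more state than this; the names here only illustrate the shape): dispatch with a plain array, omit `SerializesModels`, and rebuild the model in memory inside `handle()`:

```php
<?php

namespace App\Jobs;

use App\Models\Peer;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ProcessAnnounce implements ShouldQueue
{
    // Deliberately no SerializesModels: the worker should not
    // re-fetch the record from the database when the job runs.
    use Dispatchable, InteractsWithQueue, Queueable;

    /**
     * @param array $peer raw attribute array, e.g. from $peer->toArray()
     */
    public function __construct(protected array $peer)
    {
    }

    public function handle(): void
    {
        // Rebuild an in-memory model from the plain array; no SELECT is issued.
        $peer = (new Peer())->forceFill($this->peer);

        // ... announce processing continues with $peer ...
    }
}
```

Dispatching then looks like `ProcessAnnounce::dispatch($peer->toArray());`. Using `forceFill()` rather than `fill()` means guarded attributes survive the round trip without a mass-assignment exception.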
- GitHub Action updated with the new ruleset in pint.json (a sample config is sketched after this list)
- codebase linted with new ruleset
- contributors can now run `./vendor/bin/pint`
- action workflow will auto-correct any lint issues upon commit/opened pull request
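For reference, a pint.json is a preset plus rule overrides; the rules below are illustrative, not the ruleset this change actually ships:

```json
{
    "preset": "laravel",
    "rules": {
        "declare_strict_types": true,
        "ordered_imports": {
            "sort_algorithm": "alpha"
        }
    }
}
```

`./vendor/bin/pint --test` reports violations without modifying files, while a bare `./vendor/bin/pint` fixes them in place.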