Only loop through the peers when absolutely needed. Stop iterating once the limit is reached instead of looping over the remaining peers, and do logic that doesn't depend on the individual peer before the loop instead of inside it.
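A minimal sketch of the pattern, with illustrative names only ($peers, $limit, $includeSeeders and buildPeerList are stand-ins, not the actual announce code):

```php
<?php

// Illustrative sketch: hoist checks out of the loop and stop as soon as the
// limit is reached, instead of filtering inside every iteration.
function buildPeerList(array $peers, int $limit, bool $includeSeeders): array
{
    $selected = [];

    // Decide once, before looping, instead of re-checking per peer.
    if ($limit <= 0) {
        return $selected;
    }

    foreach ($peers as $peer) {
        // Don't loop through more peers after we reach the limit.
        if (count($selected) >= $limit) {
            break;
        }

        if (!$includeSeeders && $peer['left'] === 0) {
            continue;
        }

        $selected[] = $peer;
    }

    return $selected;
}
```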
Cache busting seems to be working well enough that we probably don't need this fail-safe, but we can bump it up a bit at least. The key and value (<50 bytes for non-existent values) are small enough that we don't have to worry about running out of memory under the current rate limit, even with a few malicious users.
Co-authored-by: HDVinnie <hdinnovations@protonmail.com>
When the queue system was used, Laravel required all job payload variables to be JSON-encodable, which meant encoding binary strings as either hex or base64. Now that we have inlined the queue, we can use the binary data directly without encoding.
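Roughly the difference, sketched with stand-in values (the real binary strings come from the announce request):

```php
<?php

// Stand-ins for the raw 20-byte binary strings from the announce request.
$infoHash = random_bytes(20); // not valid UTF-8, so not JSON-encodable as-is
$peerId   = random_bytes(20);

// Before: queued payloads are JSON-encoded, so binary had to be wrapped.
$queuedPayload = [
    'info_hash' => bin2hex($infoHash),
    'peer_id'   => bin2hex($peerId),
];
$decodedInfoHash = hex2bin($queuedPayload['info_hash']); // extra round trip inside the job

// After (inlined): no JSON boundary, so the raw binary is used as-is.
$inlinePayload = [
    'info_hash' => $infoHash,
    'peer_id'   => $peerId,
];
```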
Now that peers are only updated every 5 seconds, we need to cache their last announce timestamp in Redis instead of relying on the updated_at timestamp in the database.
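Something along these lines, where the key format, TTL, and stand-in variables are illustrative rather than the project's actual values:

```php
<?php

use Illuminate\Support\Facades\Redis;

// Stand-ins for values taken from the announce request.
$peerId    = random_bytes(20);
$torrentId = 1;

// Illustrative key format.
$key = 'peer:last-announce:' . bin2hex($peerId) . ':' . $torrentId;

// On each announce, record when this peer was last seen (TTL is illustrative).
Redis::setex($key, 60 * 60 * 24, now()->timestamp);

// Later, read the cached timestamp instead of trusting updated_at, which can
// now lag behind by up to the 5-second batching interval.
$lastAnnouncedAt = (int) Redis::get($key);
```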
Now that we upsert history records without first selecting them, we can't rely on storing a peer's last uploaded/downloaded values in the history record to determine the user's uploaded/downloaded delta since the last announce. If a user has internet issues for a brief period while their client keeps running, the change in upload/download between the two announces still needs to be tracked. This is normally tracked in the peer record, but if the peer is deleted after 2 hours without announcing, its last uploaded/downloaded data is deleted with it. We previously stored this data in the history table to handle such cases, but that became erroneous if the user had multiple peers on a torrent. The new solution keeps peers in the database for 2 days before concluding that the peer isn't coming back and deleting it permanently. After that point, a new peer is created and we assume they uploaded/downloaded 0 data during their downtime.
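A hedged sketch of the resulting delta logic (uploadedDelta and its parameters are hypothetical names, not the actual implementation):

```php
<?php

// $announcedUploaded comes from the current announce; $peer is the stored
// peer row, or null if it was purged after 2 days of silence.
function uploadedDelta(?object $peer, int $announcedUploaded): int
{
    if ($peer === null) {
        // Peer was deleted after 2 days without announcing: assume 0 data was
        // transferred during the downtime and start counting from this announce.
        return 0;
    }

    // Otherwise the delta is what the client reports now minus what the peer
    // row last recorded. max() guards against a client restart resetting its
    // session counters below the stored value (an added safety assumption).
    return max(0, $announcedUploaded - $peer->uploaded);
}
```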
By default, Laravel automatically serializes models in a custom way when they are pushed onto a job queue and fetches the model from the database again when the job is run. We would rather not have these extra queries, so we can't pass in models, nor data that is already serialized (as Laravel would try to deserialize our serialization too and fail). We opted for arrays instead, filling the properties into a new model once the job is handled.
The `SerializesModels` trait fetches a fresh copy of the record from the database, causing 4 more queries than we thought we were running. This change reduces the query time in the ProcessAnnounce job by 55%.
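Roughly the shape being described, as a hedged sketch rather than the actual ProcessAnnounce class (class and property names below are illustrative, and App\Models\Peer is assumed):

```php
<?php

namespace App\Jobs;

use App\Models\Peer;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Sketch of a job that takes plain arrays and fills a model in handle(),
// instead of using SerializesModels and paying for the extra SELECTs.
class ProcessAnnounceExample implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable; // note: no SerializesModels

    public function __construct(
        private array $peerAttributes, // scalar, JSON-safe attributes only
    ) {
    }

    public function handle(): void
    {
        // Rebuild the model from the array instead of letting SerializesModels
        // re-fetch it from the database.
        $peer = new Peer();
        $peer->forceFill($this->peerAttributes);

        // ... work with $peer without any extra queries ...
    }
}
```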