* allow us to configure different repos
* make the struct contents public
* pass in config values to new log repo
* rename functions - possibly breaking changes, so let's discuss
* make the logging backend configurable
* fix tests
* don't allow calls to WithAdditionalConfig
* cleanup
* replace sc with server
Co-authored-by: abelanger5 <belanger@sas.upenn.edu>
* rename sc to server
* add a LRU cache for the step run lookup
* let's not use an expirable cache and just use the regular one - we cannot close the goroutine in the expirable one
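A minimal sketch of the plain (non-expirable) LRU described above: stdlib-only, no background expiry goroutine to close on shutdown. The names here are illustrative, not the actual Hatchet types.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is a minimal fixed-size LRU: a plain cache with no TTL
// goroutine, so there is nothing to close on shutdown.
type lruCache struct {
	cap   int
	ll    *list.List               // front = most recently used
	items map[string]*list.Element // key -> list element
}

type entry struct {
	key string
	val any
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (any, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return nil, false
}

func (c *lruCache) Add(key string, val any) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	el := c.ll.PushFront(&entry{key, val})
	c.items[key] = el
	if c.ll.Len() > c.cap {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := newLRU(2)
	c.Add("step-run-1", "payload-1")
	c.Add("step-run-2", "payload-2")
	c.Get("step-run-1")              // touch, making step-run-2 the eviction candidate
	c.Add("step-run-3", "payload-3") // evicts step-run-2
	_, ok := c.Get("step-run-2")
	fmt.Println(ok) // false: evicted
}
```

In practice a library like hashicorp/golang-lru provides the same behavior; the point of the commit is that the plain variant has no goroutine lifecycle to manage.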
---------
Co-authored-by: abelanger5 <belanger@sas.upenn.edu>
* add a bunch of default headers
* add a check on the emails so we don't resend if we have a valid invite in the future
* allow inviting people to a new role
* add in some logging so we have more visibility on what is happening here
* Add a limit to the number of pending invites a user can have. Add comments for the various headers
* adding a /version endpoint for the engine and a /api/v1/version endpoint for the API
* make the security optional so we don't get redirected for having auth
* lint
* upgrade protoc to the latest available version on brew
* use useQuery and clean up the HTML
* add a dynamic strategy for flushing where the flush trigger is a function of the concurrency depth
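A sketch of what a depth-driven flush trigger could look like: the batch size that triggers a flush scales with the current concurrency depth, clamped to bounds. The constants and scaling factor are assumptions for illustration, not Hatchet's actual tuning.

```go
package main

import "fmt"

// flushTrigger computes how many buffered items should trigger a flush,
// as a function of the current concurrency depth: a busier system flushes
// in larger batches, clamped between a floor and a ceiling.
func flushTrigger(concurrencyDepth int) int {
	const minTrigger, maxTrigger = 10, 1000
	t := concurrencyDepth / 2 // flush once the buffer holds half the in-flight depth
	if t < minTrigger {
		return minTrigger
	}
	if t > maxTrigger {
		return maxTrigger
	}
	return t
}

func main() {
	for _, depth := range []int{0, 100, 5000} {
		fmt.Printf("depth=%d trigger=%d\n", depth, flushTrigger(depth))
	}
}
```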
* default value for tests and New for FlushStrategy
* clean up the currently flushing locking and add deadlock.Mutex
* don't wait as long for the buffer
* let's see if this 2ms thing is what is causing things to break
* let's error for this to see if we are actually hitting these limits
* put a really short deadline on the lock timeout to see if GitHub Actions will blow up
* let's use RW mutexes so we don't block as much
* let's extend this out to 100ms
* let's just do fewer locks
* add a lock to prevent a queue behind the semaphore
* deal with potential data races
* a simpler fib loop and no locks
* let's get rid of the wait for flush
* remove the deadlock stuff
* mod tidy
---------
Co-authored-by: Sean Reilly <sean@hatchet.run>
* add multiple rate limiter in grpc using a token bucket
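The token bucket mentioned above can be sketched in a few lines: tokens refill at a fixed rate up to a burst cap, and a request is admitted only if a token is available. A real gRPC interceptor would wrap this per-method and return `ResourceExhausted` on rejection; the names and numbers here are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket is a minimal token-bucket limiter of the kind you might put
// behind a gRPC interceptor. Not goroutine-safe; a real interceptor would
// add a mutex around Allow.
type tokenBucket struct {
	tokens     float64
	capacity   float64
	refillRate float64 // tokens per second
	last       time.Time
}

func newTokenBucket(capacity, refillRate float64) *tokenBucket {
	return &tokenBucket{tokens: capacity, capacity: capacity, refillRate: refillRate, last: time.Now()}
}

// Allow refills based on elapsed time, then spends one token if possible.
func (b *tokenBucket) Allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newTokenBucket(3, 1) // burst of 3, refill 1 token/sec
	allowed := 0
	for i := 0; i < 5; i++ {
		if b.Allow() {
			allowed++
		}
	}
	fmt.Println(allowed) // 3: the burst is spent, then requests are rejected
}
```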
* PR feedback
* add in client retry for go client
* update test files
* remove log line only retry on ResourceExhausted and Unavailable
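The retry policy above (retry only on ResourceExhausted and Unavailable) can be sketched without pulling in gRPC itself; in the real client this would use `google.golang.org/grpc/codes` and `status.FromError`, so the `code` type and `rpcError` here are stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// code mimics a gRPC status code (stand-in for google.golang.org/grpc/codes).
type code int

const (
	codeOK                code = 0
	codeInvalidArgument   code = 3
	codeResourceExhausted code = 8
	codeUnavailable       code = 14
)

type rpcError struct{ c code }

func (e *rpcError) Error() string { return fmt.Sprintf("rpc error: code %d", e.c) }

// retryable mirrors the policy above: retry only transient conditions
// (ResourceExhausted, Unavailable); everything else fails immediately.
func retryable(err error) bool {
	var re *rpcError
	if !errors.As(err, &re) {
		return false
	}
	return re.c == codeResourceExhausted || re.c == codeUnavailable
}

// callWithRetry retries fn up to maxAttempts while the error is retryable.
func callWithRetry(maxAttempts int, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil || !retryable(err) {
			return err
		}
	}
	return err
}

func main() {
	attempts := 0
	err := callWithRetry(5, func() error {
		attempts++
		if attempts < 3 {
			return &rpcError{codeUnavailable} // transient: retried
		}
		return nil
	})
	fmt.Println(err == nil, attempts) // true 3
}
```

A production version would also add backoff between attempts, as grpc_retry-style interceptors do.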
* add some concurrency limits so we don't swamp ourselves
* add some logging for when we are getting backed up
* let's not queue up when we are too full to prevent OOM problems
* fix spelling
* add config options for maximum concurrency and how long to wait for flush; let the wait-for-flush setting be used as back pressure and a signal to writers that we are slowing down
* lots of changes to buffering
* fix data race
* add some comments explaining how this works, change errors to ResourceExhausted now that we have client retry, and limit how many goroutines we can create on cleanup, waiting for them to finish before we exit
* hooking up the config values so they go to the right place
* Update config.go to default to 1 ms waitForFlush
* disable grpc_retry for client streams
* explicitly set the limit if it is 0
* fix weirdness caused by using an older version of the lib
---------
Co-authored-by: Sean Reilly <sean@hatchet.run>
Co-authored-by: Alexander Belanger <alexander@hatchet.run>
* add some concurrency limits so we don't swamp ourselves
* let's not queue up when we are too full to prevent OOM problems
* add config options for maximum concurrency and how long to wait for flush; let the wait-for-flush setting be used as back pressure and a signal to writers that we are slowing down
---------
Co-authored-by: Sean Reilly <sean@hatchet.run>
* add a serial write for step run events
* update other problematic queries
* tmp: don't upsert queue
* add SerialBuffer to the config
* revert the change to config
* fix: add back queue upsert
* add statement timeout to upsert queue
---------
Co-authored-by: Sean Reilly <sean@hatchet.run>
Co-authored-by: Alexander Belanger <alexander@hatchet.run>
- Simplifies architecture for splitting engine services into different components. The three supported services are now `grpc-api`, `scheduler`, and `controllers`. The `grpc-api` service is the only one which needs to be exposed for workers. The other two can run as unexposed services.
- Fixes a set of bugs and race conditions in the `v2` scheduler
- Adds a `lastActive` time to the `Queue` table and includes a migration which sets this `lastActive` time for the most recent 24 hours of queues. Effectively this means that the max scheduling time in a queue is 24 hours.
- Rewrites the `ListWorkflowsForEvent` query to improve performance and select far fewer rows.
* feat(throughput): single process per queue
* fix data race
* fix: golint and data race on load test
* wrap up initial v2 scheduler
* fix: more debug logs and tighten channel logic/blocking sends
* improved casing on dispatcher and lease manager
* fix: data race on min id
* increase wait on load test, fix data race
* fix: trylock -> lock
* clean up queue when no longer in set
* fix: clean up cache on exit
* ensure cleanup is only called once
* address review comments
* (wip) handle step run updates without deferred updates
* refactor: buffered writes of step run statuses
* fix: add more safety on tenant pools
* add configurable flush period, remove wait for started
* flush immediately if last flush time plus flush period is in the past
* feat: add configurable flush interval/max items
Refactors the queueing logic to be fairly balanced between actions, with each action backed by a separate FIFO queue. Also adds support for priority queueing and custom queues, though those aren't exposed on the API layer yet. Improves throughput to > 5000 tasks/second on a single queue.
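The fairness described above can be modeled as round-robin over per-action FIFO queues, with priority ordering inside each queue. This is an illustrative model, not the actual v2 scheduler.

```go
package main

import "fmt"

// task is a unit of work; higher priority dequeues first within an action's queue.
type task struct {
	action   string
	priority int
	id       int
}

// scheduler round-robins across actions so one busy action cannot starve
// the others; within an action's queue, higher priority goes first and
// equal priorities stay FIFO.
type scheduler struct {
	order  []string          // round-robin order of actions
	queues map[string][]task // per-action queue
	next   int               // next action index to serve
}

func newScheduler() *scheduler { return &scheduler{queues: map[string][]task{}} }

func (s *scheduler) Enqueue(t task) {
	if _, ok := s.queues[t.action]; !ok {
		s.order = append(s.order, t.action)
	}
	q := s.queues[t.action]
	// insert before the first lower-priority task (stable: FIFO among equals)
	i := len(q)
	for j, existing := range q {
		if existing.priority < t.priority {
			i = j
			break
		}
	}
	q = append(q[:i], append([]task{t}, q[i:]...)...)
	s.queues[t.action] = q
}

// Dequeue serves actions round-robin, skipping empty queues.
func (s *scheduler) Dequeue() (task, bool) {
	for range s.order {
		action := s.order[s.next%len(s.order)]
		s.next++
		if q := s.queues[action]; len(q) > 0 {
			t := q[0]
			s.queues[action] = q[1:]
			return t, true
		}
	}
	return task{}, false
}

func main() {
	s := newScheduler()
	s.Enqueue(task{action: "send-email", id: 1})
	s.Enqueue(task{action: "send-email", id: 2})
	s.Enqueue(task{action: "resize-image", id: 3})
	for {
		t, ok := s.Dequeue()
		if !ok {
			break
		}
		fmt.Println(t.action, t.id) // send-email and resize-image alternate
	}
}
```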
---------
Co-authored-by: Alexander Belanger <alexander@hatchet.run>
* feat: allow extending the api server
* chore: remove internal packages to pkg
* chore: update db_gen.go
* fix: expose auth
* fix: move logger to pkg
* fix: don't generate gitignore for prisma client
* fix: allow extensions to register their own api spec
* feat: expose pool on server config
* fix: nil pointer exception on empty opts
* fix: run.go file