## Introduction
Hatchet is a self-hostable platform that lets you define and scale workflows as code.
You run your workers, Hatchet manages the rest.
Hatchet is an orchestrator, which means it manages the execution of your workflows. The individual steps of each workflow are executed by your own workers (don't worry, each SDK comes with a worker implementation). This means you can run your workers in your own infrastructure, and Hatchet will manage the scheduling, retries, and monitoring of your workflows. Hatchet then provides a full observability layer and dashboard for debugging and retrying failed executions, along with an API for programmatically managing workflows.
## Use-Cases
While Hatchet is generalized and ideal for many low-latency workflow tasks, it is particularly useful in the following cases:
### Background Task Management and Scheduling
Instead of developers interfacing directly with a task queue, Hatchet provides a simple API built into each SDK for managing background tasks. It comes with the following features:
- **Retries, timeouts and error handling**: built into each Hatchet SDK.
- **Cron schedules and scheduled workflows**: schedule workflows using a crontab syntax, like `*/15 * * * *` (every 15 minutes). You can set multiple crons per workflow, or schedule one-off workflows in the future.
- **Task observability**: with Hatchet, you get complete access to the inputs and outputs of each step run, which is useful for debugging and observability.
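Without an orchestrator, you would typically hand-roll retry and timeout handling around every background task. The following is a minimal sketch of the kind of logic the Hatchet SDKs take off your plate; all names here are illustrative, not real Hatchet APIs:

```python
import time

def run_with_retries(step, *, max_retries=3, timeout_s=30.0, backoff_s=1.0):
    """Illustrative sketch of retry/timeout handling that the Hatchet
    SDKs provide out of the box (not a real Hatchet API)."""
    for attempt in range(max_retries + 1):
        start = time.monotonic()
        try:
            result = step()
            # Treat a slow step as a failure so it gets retried.
            if time.monotonic() - start > timeout_s:
                raise TimeoutError(f"step exceeded {timeout_s}s")
            return result
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential backoff between attempts.
            time.sleep(backoff_s * (2 ** attempt))
```

With Hatchet, this policy is declared on the workflow definition instead of being reimplemented around every task, and failed attempts show up in the dashboard.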
### Prompt Engineering Platform
Hatchet lets you expose the existing methods you've built in your LLM-enabled applications on a UI for better observability and prompt iteration. It looks something like this:
https://github.com/hatchet-dev/hatchet/assets/25448214/e4522c16-3599-4fad-b4ce-ff8ae614b074
We've built the following features to improve the prompt engineering experience:
- **UI-based iteration of LLM workflows**: you get full flexibility to choose which variables to expose on the playground. We do this by providing a method in our SDK called `playground`, which then exposes the variable in the Hatchet UI.
- **Full observability into customer interactions**: with Hatchet, you automatically get a full history of the inputs and outputs of each step in your workflow, which is particularly useful when debugging bad customer interactions with your LLMs.

https://github.com/hatchet-dev/hatchet/assets/25448214/924510d9-3056-4ddf-a36a-3c2c719451df

- **Deploy changes to GitHub**: useful for non-technical founders and product managers to quickly request changes to your codebase without waiting for an engineer.

https://github.com/hatchet-dev/hatchet/assets/25448214/93e6f358-ac83-474f-8a0b-4c1e26f4f825
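Conceptually, the `playground` call registers a variable with a default value that the UI can override at run time. The following in-memory sketch shows that idea; the `PlaygroundContext` class and `generate_reply` function are hypothetical stand-ins, not the actual SDK implementation:

```python
class PlaygroundContext:
    """Simplified stand-in for a step context (illustrative only)."""

    def __init__(self, overrides=None):
        # Overrides would normally come from values set in the Hatchet UI.
        self._overrides = overrides or {}
        self.exposed = {}

    def playground(self, name, default):
        # Record the variable so a UI could display and edit it,
        # then return the override if one was set, else the default.
        value = self._overrides.get(name, default)
        self.exposed[name] = value
        return value

def generate_reply(ctx):
    # Expose the prompt and temperature for iteration in the UI.
    prompt = ctx.playground("prompt", "You are a helpful assistant.")
    temperature = ctx.playground("temperature", 0.7)
    return {"prompt": prompt, "temperature": temperature}
```

Because the variables are declared inside the step itself, the same code path serves both production runs (defaults) and playground runs (UI overrides).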
### Event-Driven Architectures
Because Hatchet is designed for low-latency and stores the history of every step execution, it's ideal for event-driven architectures with events triggering across multiple workers and services. It includes the following:
- **Event-triggered workflows**: workflows can be triggered from any event within your system via user-defined event keys.
- **Durable event log**: get a full history of the events within your system that triggered workflows, with an Events API for pushing and pulling events.
- **Logically organize your services**: each worker can run its own set of workflows, so you can organize your worker pools to only pick up certain types of tasks.
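At its core, event-keyed triggering is a mapping from user-defined event keys to the workflows subscribed to them, plus a persisted log of every pushed event. A toy sketch of that shape (the `EventBus` class and event keys are illustrative, not Hatchet's implementation):

```python
from collections import defaultdict

class EventBus:
    """Toy event router illustrating user-defined event keys
    (not Hatchet's actual implementation)."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self.log = []  # Hatchet persists this as a durable event log.

    def on_event(self, key, workflow):
        # Subscribe a workflow to an event key.
        self._subscribers[key].append(workflow)

    def push(self, key, payload):
        # Record the event, then trigger every subscribed workflow.
        self.log.append((key, payload))
        return [workflow(payload) for workflow in self._subscribers[key]]

bus = EventBus()
# Two services can react to the same event independently.
bus.on_event("user:created", lambda p: f"welcome-email:{p['id']}")
bus.on_event("user:created", lambda p: f"provision-account:{p['id']}")
```

In Hatchet, the log is durable and queryable after the fact, which is what makes replaying and debugging event-triggered workflows possible.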
## Getting Started
To get started, see the Hatchet documentation here, or check out our quickstart repos:
## Issues
Please submit any bugs that you encounter via GitHub issues. However, please reach out on Discord before submitting a feature request - as the project is very early, we'd like to build a solid foundation before adding more complex features.
## I'd Like to Contribute
See the contributing docs here, and please let us know what you're interested in working on in the #contributing channel on Discord. This will help us shape the direction of the project and will make collaboration much easier!