
Introduction

Hatchet is a self-hostable platform that lets you define and scale workflows as code.

You run your workers, Hatchet manages the rest.

Hatchet is an orchestrator, which means it manages the execution of your workflows. The individual steps of each workflow are executed by your own workers (don't worry, each SDK comes with a worker implementation). This means you can run your workers in your own infrastructure, and Hatchet will manage the scheduling, retries, and monitoring of your workflows. Hatchet then provides a full observability layer and dashboard for debugging and retrying failed executions, along with an API for programmatically managing workflows.
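
To make this concrete, here's a minimal sketch of a worker using the Python SDK. The decorator-based API shown is an assumption based on the SDK's general shape and may differ between versions:

```python
from hatchet_sdk import Hatchet

# Reads connection settings (e.g. HATCHET_CLIENT_TOKEN) from the environment.
hatchet = Hatchet()

# The workflow lives in your codebase; Hatchet only decides when
# and where its steps run.
@hatchet.workflow(on_events=["user:created"])
class WelcomeWorkflow:
    @hatchet.step()
    def send_welcome(self, context):
        # Step inputs and outputs are recorded by Hatchet for
        # observability and debugging.
        return {"status": "sent"}

# The worker runs in your own infrastructure and executes the step
# runs assigned to it by the Hatchet engine.
worker = hatchet.worker("welcome-worker")
worker.register_workflow(WelcomeWorkflow())
worker.start()
```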

Use-Cases

While Hatchet is a general-purpose platform suited to many low-latency workflow tasks, it is particularly useful in the following cases:

Background Task Management and Scheduling

Instead of developers interfacing directly with a task queue, Hatchet provides a simple API built into each SDK for managing background tasks. It comes with the following features:

  • Retries, timeouts and error handling are built into each Hatchet SDK.

  • Cron schedules and scheduled workflows - trigger workflows on a crontab schedule, like */15 * * * * (every 15 minutes). You can set multiple crons per workflow, or schedule one-off workflows to run in the future (see the sketch after this list).

  • Task observability - you get complete access to the inputs and outputs of each step run, which is useful for debugging.
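
As an example of the cron syntax above, a workflow with a cron trigger and a per-step timeout might look like the following sketch in the Python SDK. The on_crons and timeout parameter names are assumptions and may differ by SDK version:

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

# */15 * * * * runs every 15 minutes; multiple cron expressions
# can be attached to the same workflow.
@hatchet.workflow(on_crons=["*/15 * * * *"])
class CleanupWorkflow:
    # A per-step timeout: the step is failed (and retried, if
    # configured) when it runs longer than this.
    @hatchet.step(timeout="60s")
    def prune_stale_sessions(self, context):
        return {"pruned": True}
```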

Prompt Engineering Platform

Hatchet lets you expose the existing methods you've built in your LLM-enabled applications through a UI for better observability and prompt iteration. It looks something like this:

Demo video: https://github.com/hatchet-dev/hatchet/assets/25448214/e4522c16-3599-4fad-b4ce-ff8ae614b074

We've built several features specifically to improve the prompt engineering experience.

Event-Driven Architectures

Because Hatchet is designed for low latency and stores the history of every step execution, it's well suited to event-driven architectures where events trigger work across multiple workers and services. It includes the following:

  • Event-triggered workflows - workflows can be triggered from any event within your system via user-defined event keys.

  • Durable event log - get a full history of the events within your system that triggered workflows, with an Events API for pushing and pulling events (see the sketch after this list).

  • Logically organize your services - each worker can run its own set of workflows, so you can organize your worker pools to only pick up certain types of tasks.
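
For instance, pushing an event that fans out to any workflows subscribed to its key might look like this sketch with the Python SDK. The client.event.push call is an assumption about the Events API's shape:

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

# Push an event onto the durable event log. Any workflow declaring
# on_events=["order:created"] is triggered, regardless of which
# worker pool it runs in.
hatchet.client.event.push(
    "order:created",
    {"order_id": "1234", "total": 49.99},
)
```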

Getting Started

To get started, see the Hatchet documentation here, or check out our quickstart repos.

Issues

Please submit any bugs you encounter via GitHub issues. However, please reach out on Discord before submitting a feature request - as the project is very early, we'd like to build a solid foundation before adding more complex features.

I'd Like to Contribute

See the contributing docs here, and please let us know what you're interested in working on in the #contributing channel on Discord. This will help us shape the direction of the project and will make collaboration much easier!
