Josh Duff 78e866c78f Feat: New "create organization" and "create tenant" interfaces (#3068)
* Tell eslint to ignore the generated code so that the npm run commands are usable

* Cleaning something I noticed while reading around

* An editorconfig so my editor knows when to use tabs and when to use spaces

* make it so you don't have to pass in unnecessary empty objects

* Two "new organization" pages – one for onboarding, one for after, both displaying the same form

* Fix a bug with asChild in the button component + make the "Create Organization" button a proper link

* the same form + save logic used in a "new organization" onboarding screen and a regular "new organization" screen

* not tested, but this tenant saver form looks like a reasonable starting place

* a "new tenant" modal that seems to work

* move minimumReleaseAge to a pnpm-specific config so npm doesn't complain at me when I reflexively type `npm run`

* automated unit tests for the frontend

* Rework the tenant+organization onboarding redirects, and use union types to try to make working with the organization in the app context more painless

* When onboarding, default the organization to the user's name

* isSaving doesn't need to be optional

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835696077

* wrap callback in useCallback

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835697163

* empty string is not the correct default

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835702156

* only set up the event listener once!

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835712324

* use tiny-invariant for assertions

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835707451

* [jedi hand move]

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2835700130

* try a mutation

* I thiiiiink this is about what I want

* Finish getting the user-universe provider up to snuff, implement it in a few places.

Also fix a bug caused by the fact that some pages that assumed you were authenticated were not underneath the authenticated route

* change appContext to pull from user universe rather than querying tenants + organizations itself

also, rename the organization/tenant hooks isLoading->isUserUniverseLoaded

* We want resetQueries, not invalidateQueries

invalidate sets the query to stale, but leaves the data around and leaves isSuccess = true

* Make the user universe query dependent on isCloudEnabled + get the query client from context

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2854606886
https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2854611980
https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2854614616

* get rid of an unpleasant `as`

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2854620958

* fix a name that was too generic for what it was being used for

* Make the NewOrganizationSaverForm assertion message more useful

https://github.com/hatchet-dev/hatchet/pull/3068#discussion_r2854629289

* some embarrassing errors in the non-cloud flow

* fix: unwind pnpm changes

* Revert "fix: unwind pnpm changes"

This reverts commit 5df6a4b5d3.

* fix: add `packages` hack to pnpm-workspace files

* fix: shrink spacing

* fix: start improving styling of orgs page with tabs

* fix: start cleaning up UI

* fix: remove mobile views, nobody uses them

* fix: spacing, border

* fix: org edit button

* fix: remove invites tab

* fix: factor out columns

* login screen needs to clear any current user query errors before redirecting

otherwise it keeps trying to redirect you back to the login screen until the latest metadata request succeeds

* Fix some issues during login when switching between users

- we need to invalidate all the previous user's information when they log out so that it's not floating around in memory
- we need to validate the tenant id stored in localstorage before passing it along to the rest of the world
- to be safe, invalidate the user universe and start refetching it as soon as they log in, before we redirect to the authenticated route

* One more bit of explicit fetching that needs to happen

* After accepting a tenant invite, need to invalidate the user universe and re-fetch

Also, kill the listTenantMemberships query to force people to use the query that's wrapped up by the user universe

* Change the tenant-create e2e test to account for the UI changes

* It seems like Cypress was trying to navigate to the onboarding page too quickly for the test to pass on my machine

* Slightly better copy for the tenant label

* fix lint issues

* quote the glob paths?

* Disable running the tests in CI for now

* use the width settings that are most common across dialogs right now

* node 20 -> 24

* Need to manually specify node version apparently?

---------

Co-authored-by: mrkaye97 <mrkaye97@gmail.com>
2026-02-27 13:14:00 -05:00


Run Background Tasks at Scale


Hatchet Cloud · Documentation · Website · Issues

What is Hatchet?

Hatchet is a platform for running background tasks and durable workflows, built on top of Postgres. It bundles a durable task queue, observability, alerting, a dashboard, and a CLI into a single platform.

Get started quickly

The fastest way to get started with a running Hatchet instance is to install the Hatchet CLI (on macOS, Linux, or WSL); note that this requires Docker to be installed locally:

curl -fsSL https://install.hatchet.run/install.sh | bash
hatchet --version
hatchet server start

You can also sign up on Hatchet Cloud to try it out! We recommend this even if you plan on self-hosting, so you can have a look at what a fully-deployed Hatchet platform looks like.

To view the full documentation for self-hosting and for using Hatchet Cloud, have a look at the docs.

When should I use Hatchet?

Background tasks are critical for offloading work from your main web application. Usually background tasks are sent through a FIFO (first-in-first-out) queue, which helps guard against traffic spikes (queues can absorb a lot of load) and ensures that tasks are retried when your task handlers error out. Most stacks begin with a library-based queue backed by Redis or RabbitMQ (like Celery or BullMQ). But as your tasks become more complex, these queues become difficult to debug and monitor, and they start to fail in unexpected ways.

This is where Hatchet comes in. Hatchet is a full-featured background task management platform, with built-in support for chaining complex tasks together into workflows, alerting on failures, making tasks more durable, and viewing tasks in a real-time web dashboard.
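To make the queue-with-retries model above concrete, here is a minimal, in-memory sketch (the name `run_queue` is invented for this illustration; a real queue like Hatchet's is durable, distributed, and survives crashes):

```python
from collections import deque

def run_queue(tasks, handler, max_retries=3):
    """Drain a FIFO queue, re-enqueueing failed tasks up to max_retries times."""
    queue = deque((task, 0) for task in tasks)
    results, dead_letter = [], []
    while queue:
        task, attempts = queue.popleft()  # FIFO: oldest task first
        try:
            results.append(handler(task))
        except Exception:
            if attempts + 1 < max_retries:
                queue.append((task, attempts + 1))  # re-enqueue for a retry
            else:
                dead_letter.append(task)  # give up after max_retries attempts
    return results, dead_letter
```

Note that a transient failure pushes the task to the back of the queue rather than losing it, which is exactly the guard against handler errors described above.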

Features

📥 Queues

Hatchet is built on a durable task queue that enqueues your tasks and sends them to your workers at a rate that your workers can handle. Hatchet will track the progress of your task and ensure that the work gets completed (or you get alerted), even if your application crashes.

This is particularly useful for:

  • Ensuring that you never drop a user request
  • Flattening large spikes in your application
  • Breaking large, complex logic into smaller, reusable tasks

Read more ➶

  • Python
    # 1. Define your task input
    class SimpleInput(BaseModel):
        message: str
    
    # 2. Define your task using hatchet.task
    @hatchet.task(name="SimpleWorkflow", input_validator=SimpleInput)
    def simple(input: SimpleInput, ctx: Context) -> dict[str, str]:
        return {
          "transformed_message": input.message.lower(),
        }
    
    # 3. Register your task on your worker
    worker = hatchet.worker("test-worker", workflows=[simple])
    worker.start()
    
    # 4. Invoke tasks from your application
    simple.run(SimpleInput(message="Hello World!"))
    
  • Typescript
    // 1. Define your task input
    export type SimpleInput = {
      Message: string;
    };
    
    // 2. Define your task using hatchet.task
    export const simple = hatchet.task({
      name: "simple",
      fn: (input: SimpleInput) => {
        return {
          TransformedMessage: input.Message.toLowerCase(),
        };
      },
    });
    
    // 3. Register your task on your worker
    const worker = await hatchet.worker("simple-worker", {
      workflows: [simple],
    });
    
    await worker.start();
    
    // 4. Invoke tasks from your application
    await simple.run({
      Message: "Hello World!",
    });
    
  • Go
    // 1. Define your task input
    type SimpleInput struct {
      Message string `json:"message"`
    }
    
    // 2. Define your task using factory.NewTask
    simple := factory.NewTask(
      create.StandaloneTask{
        Name: "simple-task",
      }, func(ctx worker.HatchetContext, input SimpleInput) (*SimpleResult, error) {
        return &SimpleResult{
          TransformedMessage: strings.ToLower(input.Message),
        }, nil
      },
      hatchet,
    )
    
    // 3. Register your task on your worker
    worker, err := hatchet.Worker(v1worker.WorkerOpts{
      Name: "simple-worker",
      Workflows: []workflow.WorkflowBase{
        simple,
      },
    })
    
    worker.StartBlocking()
    
    // 4. Invoke tasks from your application
    simple.Run(context.Background(), SimpleInput{Message: "Hello, World!"})
    
🎻 Task Orchestration

Hatchet allows you to build complex workflows that can be composed of multiple tasks. For example, if you'd like to break a workload into smaller tasks, you can use Hatchet to create a fanout workflow that spawns multiple tasks in parallel.

Hatchet supports the following mechanisms for task orchestration:

  • DAGs (directed acyclic graphs) — pre-define the shape of your work, automatically routing the outputs of a parent task to the input of a child task. Read more ➶

  • Durable tasks — these tasks are responsible for orchestrating other tasks. They store a full history of all spawned tasks, allowing you to cache intermediate results. Read more ➶

  • Python
    # 1. Define a workflow (a workflow is a collection of tasks)
    simple = hatchet.workflow(name="SimpleWorkflow")
    
    # 2. Attach the first task to the workflow
    @simple.task()
    def task_1(input: EmptyModel, ctx: Context) -> dict[str, str]:
        print("executed task_1")
        return {"result": "task_1"}
    
    # 3. Attach the second task to the workflow, which executes after task_1
    @simple.task(parents=[task_1])
    def task_2(input: EmptyModel, ctx: Context) -> None:
        first_result = ctx.task_output(task_1)
        print(first_result)
    
    # 4. Invoke workflows from your application
    result = simple.run(input_data)
    
  • Typescript
    // 1. Define a workflow (a workflow is a collection of tasks)
    const simple = hatchet.workflow<DagInput, DagOutput>({
      name: "simple",
    });
    
    // 2. Attach the first task to the workflow
    const task1 = simple.task({
      name: "task-1",
      fn: (input) => {
        return {
          result: "task-1",
        };
      },
    });
    
    // 3. Attach the second task to the workflow, which executes after task-1
    const task2 = simple.task({
      name: "task-2",
      parents: [task1],
      fn: (input, ctx) => {
        const firstResult = ctx.getParentOutput(task1);
        console.log(firstResult);
      },
    });
    
    // 4. Invoke workflows from your application
    await simple.run({ Message: "Hello World" });
    
  • Go
    // 1. Define a workflow (a workflow is a collection of tasks)
    simple := v1.WorkflowFactory[DagInput, DagOutput](
        workflow.CreateOpts[DagInput]{
            Name: "simple-workflow",
        },
        hatchet,
    )
    
    // 2. Attach the first task to the workflow
    task1 := simple.Task(
        task.CreateOpts[DagInput]{
            Name: "task-1",
            Fn: func(ctx worker.HatchetContext, _ DagInput) (*SimpleOutput, error) {
                return &SimpleOutput{
                    Result: "task-1",
                }, nil
            },
        },
    )
    
    // 3. Attach the second task to the workflow, which executes after task-1
    task2 := simple.Task(
        task.CreateOpts[DagInput]{
            Name: "task-2",
            Parents: []task.NamedTask{
                task1,
            },
            Fn: func(ctx worker.HatchetContext, _ DagInput) (*SimpleOutput, error) {
                return &SimpleOutput{
                    Result: "task-2",
                }, nil
            },
        },
    )
    
    // 4. Invoke workflows from your application
    simple.Run(ctx, DagInput{})
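
Conceptually, a DAG runner like the workflows above executes tasks in dependency order and routes each parent's output into its children. A toy sketch of that scheduling loop (illustrative only, not the Hatchet engine; `run_dag` and its task-table shape are invented for this example):

```python
def run_dag(tasks):
    """tasks: {name: (parents, fn)} where fn(parent_outputs) -> output.
    Runs each task once all of its parents have finished, passing the
    parents' outputs in as a dict keyed by parent name."""
    outputs = {}
    remaining = dict(tasks)
    while remaining:
        # A task is ready when every parent has already produced an output
        ready = [n for n, (parents, _) in remaining.items()
                 if all(p in outputs for p in parents)]
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        for name in ready:
            parents, fn = remaining.pop(name)
            outputs[name] = fn({p: outputs[p] for p in parents})
    return outputs
```

The cycle check is what makes the "acyclic" in DAG load-bearing: if no task is ready but some remain, the dependency graph must contain a loop.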
    
🚦 Flow Control

Don't let busy users crash your application. With Hatchet, you can throttle execution on a per-user, per-tenant and per-queue basis, increasing system stability and limiting the impact of busy users on the rest of your system.

Hatchet supports the following flow control primitives:

  • Concurrency — set a concurrency limit based on a dynamic concurrency key (e.g., each user can only run 10 batch jobs at a given time). Read more ➶

  • Rate limiting — create both global and dynamic rate limits. Read more ➶

  • Python
    # limit concurrency on a per-user basis
    flow_control_workflow = hatchet.workflow(
      name="FlowControlWorkflow",
      concurrency=ConcurrencyExpression(
        expression="input.user_id",
        max_runs=5,
        limit_strategy=ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
      ),
      input_validator=FlowControlInput,
    )
    
    # rate limit a task per user to 10 tasks per minute, with each task consuming 1 unit
    @flow_control_workflow.task(
        rate_limits=[
            RateLimit(
                dynamic_key="input.user_id",
                units=1,
                limit=10,
                duration=RateLimitDuration.MINUTE,
            )
        ]
    )
    def rate_limit_task(input: FlowControlInput, ctx: Context) -> None:
        print("executed rate_limit_task")
    
  • Typescript
    // limit concurrency on a per-user basis
    const flowControlWorkflow = hatchet.workflow<SimpleInput, SimpleOutput>({
      name: "ConcurrencyLimitWorkflow",
      concurrency: {
        expression: "input.userId",
        maxRuns: 5,
        limitStrategy: ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
      },
    });
    
    // rate limit a task per user to 10 tasks per minute, with each task consuming 1 unit
    flowControlWorkflow.task({
      name: "rate-limit-task",
      rateLimits: [
        {
          dynamicKey: "input.userId",
          units: 1,
          limit: 10,
          duration: RateLimitDuration.MINUTE,
        },
      ],
      fn: async (input) => {
        return {
          Completed: true,
        };
      },
    });
    
  • Go
    // limit concurrency on a per-user basis
    flowControlWorkflow := factory.NewWorkflow[DagInput, DagResult](
      create.WorkflowCreateOpts[DagInput]{
        Name: "simple-dag",
        Concurrency: []*types.Concurrency{
          {
            Expression:    "input.userId",
            MaxRuns:       1,
            LimitStrategy: types.GroupRoundRobin,
          },
        },
      },
      hatchet,
    )
    
    // rate limit a task per user to 10 tasks per minute, with each task consuming 1 unit
    flowControlWorkflow.Task(
      create.WorkflowTask[FlowControlInput, FlowControlOutput]{
        Name: "rate-limit-task",
        RateLimits: []*types.RateLimit{
          {
            Key:            "user-rate-limit",
            KeyExpr:        "input.userId",
            Units:          1,
            LimitValueExpr: 10,
            Duration:       types.Minute,
          },
        },
      }, func(ctx worker.HatchetContext, input FlowControlInput) (interface{}, error) {
        return &SimpleOutput{
          Step: 1,
        }, nil
      },
    )
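
The GROUP_ROUND_ROBIN strategy used above is easiest to see in miniature: bucket queued tasks by their concurrency key, then take one task from each bucket per round so that a busy user can't starve everyone else. A hedged sketch (the function and its shape are invented for illustration):

```python
from collections import OrderedDict, deque

def group_round_robin(tasks, key):
    """Interleave queued tasks fairly across concurrency keys (e.g. user IDs),
    preserving FIFO order within each key's group."""
    groups = OrderedDict()
    for task in tasks:  # bucket tasks per key, in arrival order
        groups.setdefault(key(task), deque()).append(task)
    order = []
    while groups:
        for k in list(groups):  # take one task from each group per round
            order.append(groups[k].popleft())
            if not groups[k]:
                del groups[k]  # drop exhausted groups
    return order
```

With three queued tasks for one user and one for another, the second user's task is dispatched second rather than last.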
    
📅 Scheduling

Hatchet has full support for scheduling features, including cron, one-time scheduling, and pausing execution for a time duration. This is particularly useful for:

  • Cron schedules — run data pipelines, batch processes, or notification systems on a recurring schedule. Read more ➶

  • One-time tasks — schedule a workflow for a specific time in the future. Read more ➶

  • Durable sleep — pause execution of a task for a specific duration. Read more ➶

  • Python
    tomorrow = datetime.today() + timedelta(days=1)
    
    # schedule a task to run tomorrow
    scheduled = simple.schedule(
      tomorrow,
      SimpleInput(message="Hello, World!")
    )
    
    # schedule a task to run every day at midnight
    cron = simple.cron(
      "every-day",
      "0 0 * * *",
      SimpleInput(message="Hello, World!")
    )
    
  • Typescript
    const tomorrow = new Date(Date.now() + 1000 * 60 * 60 * 24);
    // schedule a task to run tomorrow
    const scheduled = simple.schedule(tomorrow, {
      Message: "Hello, World!",
    });
    
    // schedule a task to run every day at midnight
    const cron = simple.cron("every-day", "0 0 * * *", {
      Message: "Hello, World!",
    });
    
  • Go
    tomorrow := time.Now().Add(24 * time.Hour)
    
    // schedule a task to run tomorrow
    simple.Schedule(ctx, tomorrow, ScheduleInput{
      Message: "Hello, World!",
    })
    
    // schedule a task to run every day at midnight
    simple.Cron(ctx, "every-day", "0 0 * * *", CronInput{
      Message: "Hello, World!",
    })
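
Under the hood, one-time scheduling boils down to a priority queue keyed on run time: pop whatever is due, leave the rest. A tiny illustrative version (names invented; Hatchet persists its schedules rather than holding them in memory):

```python
import heapq

class Scheduler:
    """Toy one-shot scheduler: tasks fire once their run_at time has passed."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so tasks with equal times stay FIFO

    def schedule(self, run_at, task):
        heapq.heappush(self._heap, (run_at, self._counter, task))
        self._counter += 1

    def due(self, now):
        """Pop and return every task whose run_at <= now, soonest first."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

A cron trigger is the same idea with one twist: after a cron task fires, it is re-pushed with the next matching timestamp.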
    
🚏 Task Routing

While the default Hatchet behavior is to implement a FIFO queue, it also supports additional scheduling mechanisms to route your tasks to the ideal worker.

  • Sticky assignment — allows spawned tasks to prefer or require execution on the same worker. Read more ➶

  • Worker affinity — ranks workers to discover which is best suited to handle a given task. Read more ➶

  • Python
    # create a workflow which prefers to run on the same worker, but can be
    # scheduled on any worker if the original worker is busy
    hatchet.workflow(
      name="StickyWorkflow",
      sticky=StickyStrategy.SOFT,
    )
    
    # create a workflow which must run on the same worker
    hatchet.workflow(
      name="StickyWorkflow",
      sticky=StickyStrategy.HARD,
    )
    
  • Typescript
    // create a workflow which prefers to run on the same worker, but can be
    // scheduled on any worker if the original worker is busy
    hatchet.workflow({
      name: "StickyWorkflow",
      sticky: StickyStrategy.SOFT,
    });
    
    // create a workflow which must run on the same worker
    hatchet.workflow({
      name: "StickyWorkflow",
      sticky: StickyStrategy.HARD,
    });
    
  • Go
    // create a workflow which prefers to run on the same worker, but can be
    // scheduled on any worker if the original worker is busy
    factory.NewWorkflow[StickyInput, StickyOutput](
      create.WorkflowCreateOpts[StickyInput]{
        Name: "sticky-dag",
        StickyStrategy: types.StickyStrategy_SOFT,
      },
      hatchet,
    )
    
    // create a workflow which must run on the same worker
    factory.NewWorkflow[StickyInput, StickyOutput](
      create.WorkflowCreateOpts[StickyInput]{
        Name: "sticky-dag",
        StickyStrategy: types.StickyStrategy_HARD,
      },
      hatchet,
    )
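
The difference between the SOFT and HARD strategies above comes down to one branch in worker selection. A hedged sketch (the `pick_worker` helper is invented for illustration; Hatchet's scheduler considers slots, health, and more):

```python
def pick_worker(preferred, available, strategy):
    """Sticky assignment in miniature: SOFT prefers the original worker but
    falls back to any available one; HARD refuses to run anywhere else."""
    if preferred in available:
        return preferred
    if strategy == "SOFT":
        return next(iter(available), None)  # any other free worker
    if strategy == "HARD":
        return None  # no assignment until the original worker frees up
    raise ValueError(f"unknown strategy: {strategy}")
```

So under HARD stickiness a spawned task can wait indefinitely for its worker, which is why SOFT is the safer default when the preference is only an optimization.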
    
Event Triggers and Listeners

Hatchet supports event-based architectures where tasks and workflows can pause execution while waiting for a specific external event. It supports the following features:

  • Event listening — tasks can be paused until a specific event is triggered. Read more ➶

  • Event triggering — events can trigger new workflows or steps in a workflow. Read more ➶

  • Python
    # Create a task which waits for an external user event or sleeps for 10 seconds
    @dag_with_conditions.task(
      parents=[first_task],
      wait_for=[
        or_(
          SleepCondition(timedelta(seconds=10)),
          UserEventCondition(event_key="user:event"),
        )
      ]
    )
    def second_task(input: EmptyModel, ctx: Context) -> dict[str, str]:
        return {"completed": "true"}
    
  • Typescript
    // Create a task which waits for an external user event or sleeps for 10 seconds
    dagWithConditions.task({
      name: "secondTask",
      parents: [firstTask],
      waitFor: Or({ eventKey: "user:event" }, { sleepFor: "10s" }),
      fn: async (_, ctx) => {
        return {
          Completed: true,
        };
      },
    });
    
  • Go
    // Create a task which waits for an external user event or sleeps for 10 seconds
    simple.Task(
      conditionOpts{
        Name: "Step2",
        Parents: []create.NamedTask{
          step1,
        },
        WaitFor: condition.Conditions(
          condition.UserEventCondition("user:event", "'true'"),
          condition.SleepCondition(10 * time.Second),
        ),
      }, func(ctx worker.HatchetContext, input DagWithConditionsInput) (interface{}, error) {
        // ...
      },
    )
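
The "sleep or user event, whichever fires first" condition above maps neatly onto a wait-with-timeout. A self-contained sketch using Python's standard library (illustrative only; Hatchet's conditions are durable and survive process restarts, unlike an in-memory event):

```python
import threading

def wait_for_event_or_sleep(event, sleep_seconds):
    """Block until `event` fires or `sleep_seconds` elapses, whichever comes
    first. Event.wait returns True if the event was set, False on timeout."""
    fired = event.wait(timeout=sleep_seconds)
    return "event" if fired else "sleep"
```

Triggering the event from another thread resolves the wait early, just as an external `user:event` would resume the paused task.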
    
🖥️ Real-time Web UI

Hatchet comes bundled with a number of features to help you monitor your tasks, workflows, and queues.

Real-time dashboards and metrics

Monitor your tasks, workflows, and queues with live updates to quickly detect issues. Alerting is built in so you can respond to problems as soon as they occur.

https://github.com/user-attachments/assets/b1797540-c9da-4057-b50f-4780f52a2cb9

Logging

Hatchet supports logging from your tasks, allowing you to easily correlate task failures with logs in your system. No more digging through your logging service to figure out why your tasks failed.

https://github.com/user-attachments/assets/427c15cd-8842-4b54-ab2e-3b1cabc01c7b

Alerting

Hatchet supports Slack and email-based alerting for when your tasks fail. Alerts are real-time with adjustable alerting windows.

Documentation

The most up-to-date documentation can be found at https://docs.hatchet.run.

Community & Support

  • Discord - best for getting in touch with the maintainers and hanging with the community
  • GitHub Issues - used for filing bug reports
  • GitHub Discussions - used for starting in-depth technical discussions that are suited for asynchronous communication
  • Email - best for getting Hatchet Cloud support and for help with billing, data deletion, etc.

Hatchet vs...

Hatchet vs Temporal

Hatchet is designed to be a general-purpose task orchestration platform -- it can be used as a queue, a DAG-based orchestrator, a durable execution engine, or all three. As a result, Hatchet covers a wider array of use-cases, like multiple queueing strategies, rate limiting, DAG features, conditional triggering, streaming features, and much more.

Temporal is narrowly focused on durable execution, and supports a wider range of database backends and result stores, like Apache Cassandra, MySQL, PostgreSQL, and SQLite.

When to use Hatchet: when you'd like to get more control over the underlying queue logic, run DAG-based workflows, or want to simplify self-hosting by only running the Hatchet engine and Postgres.

When to use Temporal: when you'd like to use a non-Postgres result store, or your only workload is best suited for durable execution.

Hatchet vs Task Queues (BullMQ, Celery)

Hatchet is a durable task queue, meaning it persists the history of all executions (up to a retention period), which allows for easy monitoring and debugging and powers many of the durability features above. This isn't the standard behavior of Celery or BullMQ, where you need to rely on third-party UI tools with very limited functionality, like Celery Flower.

When to use Hatchet: when you'd like results to be persisted and observable in a UI

When to use a task queue library like BullMQ/Celery: when you need very high throughput (>10k/s) without retention, or when you'd like to use a single library (instead of a standalone service like Hatchet) to interact with your queue.

Hatchet vs DAG-based platforms (Airflow, Prefect, Dagster)

These tools are usually built with data engineers in mind and aren't designed to run as part of a high-volume application. They're usually higher latency and higher cost, with their primary selling point being integrations with common datastores and connectors.

When to use Hatchet: when you'd like to use a DAG-based framework, write your own integrations and functions, and require higher throughput (>100/s)

When to use other DAG-based platforms: when you'd like to use other data stores and connectors that work out of the box

Hatchet vs AI Frameworks

Most AI frameworks are built to run in-memory, with horizontal scaling and durability as an afterthought. While you can use an AI framework in conjunction with Hatchet, most of our users discard their AI framework and use Hatchet's primitives to build their applications.

When to use Hatchet: when you'd like full control over your underlying functions and LLM calls, or you require high availability and durability for your functions.

When to use an AI framework: when you'd like to get started quickly with simple abstractions.

Issues

Please submit any bugs that you encounter via GitHub issues.

I'd Like to Contribute

Please let us know what you're interested in working on in the #contributing channel on Discord. This will help us shape the direction of the project and will make collaboration much easier!
