docs: typescript sdk docs overhaul (#621)

* docs: typescript sdk docs overhaul

* docs(typescript): fix imports (#626)

* docs: finish with updated typescript api

* docs: better abort controller doc

* docs: type parameters

---------

Co-authored-by: Luca Steeb <contact@luca-steeb.com>
This commit is contained in:
abelanger5
2024-06-25 11:47:20 -04:00
committed by GitHub
parent b6dcb4e7e9
commit ff939149de
79 changed files with 747 additions and 393 deletions

View File

@@ -9,7 +9,7 @@ Hatchet automatically catches and handles uncaught errors that occur during work
Here's an example of how an uncaught error thrown in a step's `run` function is handled:
```typescript
import { Step, Context } from "@hatchet/types";
import { Step, Context } from "@hatchet-dev/typescript-sdk";
const myStep: Step<any, any> = async (context: Context<any>) => {
// Step logic that may throw an error
@@ -34,7 +34,7 @@ In addition to automatic error handling for uncaught errors, Hatchet provides a
Here's an example of how to use `context.log()` to log information during step execution:
```typescript
import { Step, Context } from "@hatchet/types";
import { Step, Context } from "@hatchet-dev/typescript-sdk";
const myStep: Step<any, any> = async (context: Context<any>) => {
// Log information at various points in the step

View File

@@ -27,7 +27,7 @@ To modify the input data for a step, simply edit the JSON representation of the
This feature provides a powerful way to handle complex retry scenarios and to test different input variations without needing to modify your workflow code or redeploy your application. For example, it is common for LLM applications to experiment with prompts or model configuration using this feature.
```typescript
import { Step, Context } from "@hatchet/types";
import { Step, Context } from "@hatchet-dev/typescript-sdk";
interface MyStepInput {
message: string;

View File

@@ -20,7 +20,7 @@ This simple retry mechanism can help to mitigate transient failures, such as net
To enable retries for a step in your workflow, simply add the `retries` property to the step object in your workflow definition:
```typescript
import { CreateStepSchema } from "@hatchet/types";
import { CreateStepSchema } from "@hatchet-dev/typescript-sdk";
const myStep: z.infer<typeof CreateStepSchema> = {
name: "my-step",

View File

@@ -49,7 +49,7 @@ Note that the `result` method is a coroutine that must be awaited. It returns a
## Streaming Results
It is also possible to stream the results of a workflow run as each step is executed. This can be done via the `stream` method on the `workflow_run_ref` object:
It is also possible to stream the results of a workflow run as each step is executed. This can be done via the `stream` method on the `WorkflowRunRef` object:
```py filename="stream_workflow_run.py" copy
from hatchet_sdk import Hatchet, ClientConfig

View File

@@ -1,4 +1,4 @@
## Cron Schedules
# Running Cron Workflows
You can declare a cron schedule by passing `on_crons` to the `hatchet.workflow` decorator. For example, to trigger a workflow every 5 minutes, you can do the following:

View File

@@ -1,9 +1,9 @@
# Worker Configuration
Workers can be created via the `hatchet.worker()` method, after [instantiating a `hatchet` instance](./client). The `hatchet.worker()` method takes the following optional arguments:
Workers can be created via the `hatchet.worker()` method, after [instantiating a `hatchet` instance](./client). The `hatchet.worker()` method takes the following arguments:
- `name` (**required**): The name of the worker. This is used to identify the worker in the Hatchet UI.
- `max_runs`: The maximum number of concurrent runs that the worker can run. If not set, it defaults to `100`. Note that this value is different from the number of concurrent runs per workflow.
- `max_runs`: The maximum number of concurrent step runs that the worker can run. If not set, it defaults to `100`. Note that this value is different from the number of concurrent runs per workflow.
## Registering Workflows
@@ -34,7 +34,7 @@ worker.start()
## Starting a Worker
Workers can be started by calling either `worker.start` or `worker.async_start`. We recommend that `worker.start` is the last call made when starting a worker.
Workers can be started by calling either `worker.start` or `worker.async_start`. We recommend that `worker.start` is the last call made when running a worker.
The `worker.start` method is blocking, while `worker.async_start` can be awaited or started via `asyncio.create_task`.

View File

@@ -31,7 +31,7 @@ You can define the following automatic triggers for workflows:
- `on_events`: Trigger the workflow when a specific event is sent to the Hatchet API. See the documentation for [running workflows via events](./run-workflow-events) for more information.
- `on_crons`: Trigger the workflow on a cron schedule. See the documentation for [running workflows via cron schedules](./run-workflow-cron) for more information.
## Getting Access to the Input Data
## Retrieving Workflow Input Data
You can get access to the workflow's input data, such as the event data or other specified input data, by using the `context.workflow_input()` method on the `context`. For example, given the following event:

View File

@@ -9,8 +9,31 @@
"href": "https://github.com/hatchet-dev/hatchet-typescript-quickstart",
"newWindow": true
},
"creating-a-workflow": "Creating a Workflow",
"creating-a-worker": "Creating a Worker",
"pushing-events": "Pushing Events",
"api": "API"
"--- Configuration": {
"type": "separator",
"title": "Configuration"
},
"client": "Client",
"worker": "Worker",
"workflow": "Workflow",
"--- Running Workflows": {
"type": "separator",
"title": "Running Workflows"
},
"run-workflow-api": "API-Triggered Workflows",
"run-workflow-child": "Child Workflows",
"run-workflow-events": "Event-Triggered Workflows",
"run-workflow-cron": "Cron Workflows",
"run-workflow-schedule": "Scheduled Workflows",
"--- Getting Workflow Results": {
"type": "separator",
"title": "Getting Workflow Results"
},
"get-workflow-results": "Getting Workflow Run Results",
"--- Advanced": {
"type": "separator",
"title": "Advanced"
},
"fairness": "Concurrency and Fairness",
"logging": "Logging"
}

View File

@@ -0,0 +1,30 @@
# Client Configuration
A Hatchet client is initialized via:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
```
You can configure the Hatchet client by setting environment variables or by setting the config directly when initializing the client. The following environment variables are the most commonly configured and are considered stable:
| Variable | Description | Default |
| ----------------------------- | -------------------------------------------------------------------- | ------- |
| `HATCHET_CLIENT_TOKEN` | The tenant-scoped API token to use. | N/A |
| `HATCHET_CLIENT_TLS_STRATEGY` | The TLS strategy to use. Valid values are `none`, `tls`, and `mtls`. | `tls` |
| `HATCHET_CLIENT_NAMESPACE` | The [namespace](/home/basics/environments) to use. | N/A |
You can also configure the client by overriding the `ClientConfig` argument on the `Hatchet` initializer. For example:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init({
token: "my-token",
namespace: "my-namespace",
});
```
This is most commonly used to set the namespace or pass in a token.

View File

@@ -1,36 +0,0 @@
# Creating a Worker
Workers can be created via the `hatchet.worker()` method, after instantiating a `hatchet` instance.
It will automatically read in any `HATCHET_CLIENT` environment variables, which can be set in the process by using something like `dotenv`.
For example:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
// workflow code...
async function main() {
const worker = await hatchet.worker("example-worker");
await worker.registerWorkflow(workflow);
worker.start();
}
main();
```
## Options
### Name
The `hatchet.worker()` method takes a simple name parameter which can be used to identify the worker on the Hatchet dashboard.
### Max Runs
The `maxRuns` option can be used to limit the number of runs a worker will process concurrently. This is particularly useful for resource-intensive workers. For example, to limit the worker to only executing 1 run at a time, you can use the following code:
```ts
hatchet.worker("example-worker", 1);
```

View File

@@ -1,327 +0,0 @@
# Creating a Workflow
To create a workflow, simply create a new `Workflow` object.
For example, a simple 2-step workflow would look like:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "example",
description: "test",
on: {
event: "user:create",
},
steps: [
{
name: "step1",
run: (ctx) => {
console.log("executed step1!");
return { step1: "step1" };
},
},
{
name: "step2",
parents: ["step1"],
run: (ctx) => {
console.log("executed step2!");
return { step2: "step2" };
},
},
],
};
```
You'll notice that the workflow defines a workflow trigger (in this case, the `user:create` event) and the workflow definition. The workflow definition includes a series of steps, which is simply an array of `Step` objects.
Each step has a `run` prop, which is a function that takes a `context` argument. The `context` argument is a `Context` object, which contains information about the workflow, such as the input data and the output data of previous steps.
To create multi-step workflows, you can use `parents` to define the steps which the current step depends on. In the example, `step2` will not invoke until after `step1` completes.
## Getting Access to the Workflow Input Data
You can get access to the workflow's input data simply by calling `ctx.workflowInput()`.
Here's an example `Step` which accesses the workflow input:
```ts
const stepPrintsInput: Step = {
name: "step2",
parents: ["step1"],
run: (ctx) => {
console.log("executed step2!", ctx.workflowInput("name"));
},
};
```
Given the following event:
```json
{
"name": "John"
}
```
The console will log:
```
executed step2! John
```
## Step Outputs
Step outputs should be of type `Record<string, any>`, should be `JSON` serializable, and are optional. For example:
```ts
const stepReturnsData: Step = {
name: "step2",
run: (ctx) => {
return { awesome: "data" };
},
};
```
Future steps can access this output through the context (`ctx`) parameter via `ctx.stepOutput("<step_name>")`. In this example, a future step could access this data via `ctx.stepOutput("step2")`:
```ts
const futureStep: Step = {
name: "step3",
run: (ctx) => {
const uppercaseStep2 = ctx.stepOutput("step2")["awesome"].toUpperCase();
return { uppercase: uppercaseStep2 };
},
};
```
Remember, a step that depends on previous step data should include this dependency in the `parents` array.
## Cron Schedules
You can declare a cron schedule by defining `cron` in the `Workflow` object's `on` block. For example, to trigger a workflow every 5 minutes, you can do the following:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "example",
description: "test",
on: {
cron: "*/5 * * * *",
},
steps: [
{
name: "step1",
run: (input, ctx) => {
console.log("executed step1!");
return { step1: "step1" };
},
},
{
name: "step2",
parents: ["step1"],
run: (input, ctx) => {
console.log("executed step2!", input);
return { step2: "step2" };
},
},
],
};
```
## Concurrency Limits and Fairness
> **Note:** this feature is currently in beta and only supports a concurrency strategy which terminates the oldest running workflow run to make room for the new one. This will be expanded in the future to support other strategies.
By default, there are no concurrency limits for Hatchet workflows. Workflow runs are executed as soon as they are triggered (by an event, cron, or schedule). However, you can enforce a concurrency limit by adding a `concurrency` configuration to your workflow declaration. This configuration includes a `key` function that returns a **concurrency group key**, which is a string used to group concurrent executions. **Note that this function should not also be registered as a step.** For example, the following workflow limits concurrent executions across all runs of the `concurrency-example` workflow, since the key is statically set to `concurrency-key`:
```ts
const workflow: Workflow = {
id: "concurrency-example",
description: "test",
on: {
event: "concurrency:create",
},
concurrency: {
name: "basic-concurrency",
key: (ctx) => "concurrency-key",
},
steps: [
{
name: "step1",
run: async (ctx) => {
const { data } = ctx.workflowInput();
const { signal } = ctx.controller;
if (signal.aborted) throw new Error("step1 was aborted");
console.log("starting step1 and waiting 5 seconds...", data);
await sleep(5000);
if (signal.aborted) throw new Error("step1 was aborted");
// NOTE: the AbortController signal can be passed to many http libraries to cancel active requests
// fetch(url, { signal })
// axios.get(url, { signal })
console.log("executed step1!");
return { step1: `step1 results for ${data}!` };
},
},
{
name: "step2",
parents: ["step1"],
run: (ctx) => {
console.log(
"executed step2 after step1 returned ",
ctx.stepOutput("step1"),
);
return { step2: "step2 results!" };
},
},
],
};
```
The argument `limitStrategy` to the `concurrency` configuration can be set to either `CANCEL_IN_PROGRESS` (the default, documented above), or `GROUP_ROUND_ROBIN`. See documentation for the `GROUP_ROUND_ROBIN` strategy below.
### Cancellation Signalling
When a concurrent workflow is already running, Hatchet will send a cancellation signal to the step via its context. For now, you must handle this signal to exit the step at a logical point:
```ts
{
"step1",
run: async (ctx) => {
const { data } = ctx.workflowInput();
const { signal } = ctx.controller;
if (signal.aborted) throw new Error("step1 was aborted");
console.log("starting step1 and waiting 5 seconds...", data);
await sleep(5000);
if (signal.aborted) throw new Error("step1 was aborted");
// NOTE: the AbortController signal can be passed to many http libraries to cancel active requests
// fetch(url, { signal })
// axios.get(url, { signal })
console.log("executed step1!");
return { step1: `step1 results for ${data}!` };
},
},
```
### Use-Case: Enforcing Per-User Concurrency Limits
You can use the custom concurrency function to enforce per-user concurrency limits. For example, the following workflow will only allow 1 concurrent execution per user:
```ts
const workflow: Workflow = {
id: "concurrency-example",
description: "test",
on: {
event: "concurrency:create",
},
concurrency: {
name: "basic-concurrency",
maxRuns: 1,
key: (ctx) => ctx.workflowInput().userId,
},
// Rest of the workflow configuration
}
```
This same approach can be used for:
- Setting concurrency for a specific user session by `session_id` (e.g., multiple chat messages sent)
- Limiting data or document ingestion by setting an input hash or on-file key.
- Rudimentary fairness rules by limiting groups per tenant to a certain number of concurrent executions.
### Use-Case: Group Round Robin
You can distribute workflows fairly between tenants using the `GROUP_ROUND_ROBIN` option for `limitStrategy`. This will ensure that each distinct group gets a fair share of the concurrency limit. For example, let's say 5 workflow runs each were queued in quick succession for keys `A`, `B`, and `C`:
```txt
A, A, A, A, A, B, B, B, B, B, C, C, C, C, C
```
If there is a maximum of 2 concurrent executions, the execution order will be:
```txt
A, B, C, A, B, C, A, B, C, A, B, C, A, B, C
```
This can be set in the `concurrency` configuration as follows:
```ts
const workflow: Workflow = {
id: 'concurrency-example-rr',
description: 'test',
on: {
event: 'concurrency:create',
},
concurrency: {
name: 'multi-tenant-fairness',
key: (ctx) => ctx.workflowInput().group,
maxRuns: 2,
limitStrategy: ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
},
steps: [...],
};
```
## Playground Values
Playground values are a way to override variables within a workflow from the Hatchet UI. For example, you could use this to make a prompt or temperature value for an LLM workflow configurable from the UI. These values can be set via the `ctx.playground` method:
```ts
await worker.registerWorkflow({
id: "playground-demo",
description: "This is a demo of the playground",
steps: [
{
name: "playground",
run: (ctx: Context<any, any>) => {
const prompt = ctx.playground("prompt", "This is an example prompt");
return { step1: prompt };
},
},
],
});
```
This will then appear in the Hatchet UI under the `prompt` value.
## Logging
Hatchet comes with a built-in logging view where you can push debug logs from your workflows. To use this, you can use the `ctx.log` method. For example:
```ts
const workflow: Workflow = {
id: "logger-example",
description: "test",
on: {
event: "user:create",
},
steps: [
{
name: "logger-step1",
run: async (ctx) => {
for (let i = 0; i < 1000; i++) {
ctx.log(`log message ${i}`);
}
return { step1: "completed step run" };
},
},
],
};
```
Each step is currently limited to 1000 log lines.

View File

@@ -0,0 +1,112 @@
# Concurrency Limits and Fairness
By default, there are no concurrency limits for Hatchet workflows. Workflow runs are executed as soon as they are triggered (by an event, cron, or schedule). However, you can enforce a concurrency limit by adding a `concurrency` configuration to your workflow declaration. This configuration includes a `key` function that returns a **concurrency group key**, which is a string used to group concurrent executions. **Note that this function should not also be registered as a step.** For example, the following workflow limits concurrent executions across all runs of the `concurrency-example` workflow, since the key is statically set to `concurrency-key`:
```ts
const workflow: Workflow = {
id: "concurrency-example",
description: "test",
on: {
event: "concurrency:create",
},
concurrency: {
name: "basic-concurrency",
key: (ctx) => "concurrency-key",
},
steps: [
{
name: "step1",
run: async (ctx) => {
const { data } = ctx.workflowInput();
const { signal } = ctx.controller;
if (signal.aborted) throw new Error("step1 was aborted");
console.log("starting step1 and waiting 5 seconds...", data);
await sleep(5000);
if (signal.aborted) throw new Error("step1 was aborted");
// NOTE: the AbortController signal can be passed to many http libraries to cancel active requests
// fetch(url, { signal })
// axios.get(url, { signal })
console.log("executed step1!");
return { step1: `step1 results for ${data}!` };
},
},
{
name: "step2",
parents: ["step1"],
run: (ctx) => {
console.log(
"executed step2 after step1 returned ",
ctx.stepOutput("step1"),
);
return { step2: "step2 results!" };
},
},
],
};
```
The argument `limitStrategy` to the `concurrency` configuration can be set to either `CANCEL_IN_PROGRESS` (the default, documented above), or `GROUP_ROUND_ROBIN`. See documentation for the `GROUP_ROUND_ROBIN` strategy below.
### Use-Case: Enforcing Per-User Concurrency Limits
You can use the custom concurrency function to enforce per-user concurrency limits. For example, the following workflow will only allow 1 concurrent execution per user:
```ts
const workflow: Workflow = {
id: "concurrency-example",
description: "test",
on: {
event: "concurrency:create",
},
concurrency: {
name: "basic-concurrency",
maxRuns: 1,
key: (ctx) => ctx.workflowInput().userId,
},
// Rest of the workflow configuration
}
```
This same approach can be used for:
- Setting concurrency for a specific user session by `session_id` (e.g., multiple chat messages sent)
- Limiting data or document ingestion by setting an input hash or on-file key.
- Rudimentary fairness rules by limiting groups per tenant to a certain number of concurrent executions.
### Use-Case: Group Round Robin
You can distribute workflows fairly between tenants using the `GROUP_ROUND_ROBIN` option for `limitStrategy`. This will ensure that each distinct group gets a fair share of the concurrency limit. For example, let's say 5 workflow runs each were queued in quick succession for keys `A`, `B`, and `C`:
```txt
A, A, A, A, A, B, B, B, B, B, C, C, C, C, C
```
If there is a maximum of 2 concurrent executions, the execution order will be:
```txt
A, B, C, A, B, C, A, B, C, A, B, C, A, B, C
```
This can be set in the `concurrency` configuration as follows:
```ts
const workflow: Workflow = {
id: 'concurrency-example-rr',
description: 'test',
on: {
event: 'concurrency:create',
},
concurrency: {
name: 'multi-tenant-fairness',
key: (ctx) => ctx.workflowInput().group,
maxRuns: 2,
limitStrategy: ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
},
steps: [...],
};
```

View File

@@ -0,0 +1,70 @@
import { Callout } from "nextra/components";
# Getting Workflow Run Results
It is possible to wait for or stream the results of a workflow run by getting a `WorkflowRunRef`. This is the return value of the `runWorkflow` and `getWorkflowRun` methods on the `hatchet.admin` client, or the `spawnWorkflow` method on a `Context` object. For example:
```ts filename="get-workflow-run.ts" copy
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflowRun = hatchet.admin.getWorkflowRun(
"5a3a617d-1200-4ee2-92e6-be4bd27ca26f",
);
const result = await workflowRun.result();
console.log("workflow run result:", result);
```
This method takes the `workflow_run_id` as a parameter and returns a reference to the workflow run.
<Callout type="info" emoji="🪓">
If you need to get the workflow run id from a different method than where it was invoked, you can store the result of the `getWorkflowRunId` method on the return value of [`runWorkflow`](./run-workflow-api) or [`spawnWorkflow`](./run-workflow-child). For example:
```ts
const workflowRun = hatchet.admin.runWorkflow("ManualTriggerWorkflow", {
test: "test",
});
const workflowRunId = await workflowRun.getWorkflowRunId();
console.log(`spawned workflow run: ${workflowRunId}`);
```
</Callout>
Note that the `result` method must be awaited. It returns an object containing each step run's result in the workflow. For example:
```ts
{
"step1": {
"result1": "success"
},
"step2": {
"result2": "success"
}
}
```
## Streaming Results
It is also possible to stream the results of a workflow run as each step is executed. This can be done via the `stream` method on the `WorkflowRunRef` object:
```ts filename="stream-workflow-run.ts" copy
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflowRun = hatchet.admin.getWorkflowRun(
"5a3a617d-1200-4ee2-92e6-be4bd27ca26f",
);
const listener = workflowRun.stream();
for await (const event of listener) {
console.log(event.type, event.payload);
}
```
Note that this is an async generator, so you must use `for await` to iterate over the events.

View File

@@ -1,15 +1,109 @@
import { Callout } from "nextra/components";
# Typescript SDK
This is the Hatchet Typescript SDK reference. On this page, we'll get you up and running with a Typescript worker. This guide assumes that you already have a Hatchet engine instance running. If you don't, you can:
- Sign up on [Hatchet Cloud](https://cloud.onhatchet.run)
- [Self-host Hatchet](https://docs.hatchet.run/self-hosting)
<Callout type="info" emoji="🪓">
If you run into any issues, please file an issue on the [Hatchet Typescript
SDK GitHub repository](https://github.com/hatchet-dev/hatchet-typescript).
</Callout>
## Installation
```sh npm2yarn
npm i @hatchet-dev/typescript-sdk
```
## Generate a Token
Navigate to your Hatchet dashboard and open the settings tab. You should see a section called "API Keys". Click "Create API Key", enter a name for the key, and copy the key. Then set the following environment variable:
```sh
HATCHET_CLIENT_TOKEN="<your-api-key>"
```
<Callout type="info" emoji="🪓">
You may need to set additional environment variables depending on your self-hosted configuration. The Hatchet clients use TLS by default; to disable this, you can set:
```
HATCHET_CLIENT_TLS_STRATEGY=none
```
</Callout>
## Run your first worker
Make sure you've set the `HATCHET_CLIENT_TOKEN` environment variable via `export HATCHET_CLIENT_TOKEN="<your-api-key>"`. Next, copy the following code into a `worker.ts` file:
```typescript filename="worker.ts" copy
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "first-typescript-workflow",
description: "This is my first workflow",
on: {
event: "user:create",
},
steps: [
{
name: "step1",
run: async (ctx) => {
console.log(
"starting step1 with the following input",
ctx.workflowInput(),
);
return {
result: "success!",
};
},
},
],
};
const worker = await hatchet.worker("my-worker");
await worker.registerWorkflow(workflow);
worker.start();
```
Next, modify your `package.json` to include a script to start:
```json
{
// ...rest of your `package.json`
"scripts": {
// ...existing scripts
"worker": "npx ts-node worker.ts"
}
}
```
Now to start the worker, in a new terminal run:
```sh npm2yarn
npm run worker
```
## Run your first workflow
The worker is now running and listening for steps to execute. You should see your first worker registered in the `Workers` tab of the Hatchet dashboard:
![Quickstart 1](/quickstart-1.png)
You can now trigger your first workflow by navigating to the `Workflows` tab, selecting your workflow, and clicking the top right "Trigger workflow" button:
![Quickstart 2](/quickstart-2.png)
That's it! You've successfully deployed Hatchet and run your first workflow.
## Next Steps
Congratulations on running your first workflow!
To test out some more complicated examples, check out the [Hatchet Typescript Quickstart](https://github.com/hatchet-dev/hatchet-typescript-quickstart).

View File

@@ -0,0 +1,27 @@
# Logging
Hatchet comes with a built-in logging view where you can push debug logs from your workflows. To use this, you can use the `ctx.log` method. For example:
```ts
const workflow: Workflow = {
id: "logger-example",
description: "test",
on: {
event: "user:create",
},
steps: [
{
name: "logger-step1",
run: async (ctx) => {
for (let i = 0; i < 1000; i++) {
ctx.log(`log message ${i}`);
}
return { step1: "completed step run" };
},
},
],
};
```
Each step is currently limited to 1000 log lines.

View File

@@ -1,15 +0,0 @@
# Pushing Events
Events can be pushed via the client's `hatchet.event.push()` method:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
hatchet.event.push("user:create", {
test: "test",
});
```
Events should be JSON serializable and typically utilize a `Record` or a type that serializes into a JSON object in TypeScript.

View File

@@ -0,0 +1,43 @@
# Running Workflows via API
Workflows can be triggered from the API by calling `runWorkflow`. This method is available on the `hatchet.admin` client:
```ts filename="run-workflow.ts" copy
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflowRun = hatchet.admin.runWorkflow(
"api-trigger-workflow",
{
test: "test",
},
{
additionalMetadata: {
hello: "moon",
},
},
);
```
## Usage
### Type Parameters
- `Q`: The type of the input data for the workflow. Default is `JsonValue`.
- `P`: The type of the output data from the workflow. Default is `JsonValue`.
### Parameters
- `workflowName` (`string`): The name of the workflow to be spawned. This will be concatenated with the client's namespace to form the full workflow name.
- `input` (`Q`): The input data for the workflow. The type of this data is specified by the generic type parameter `Q`.
- `options` (**optional**): Additional options to pass to the workflow. The current options are supported:
- `additionalMetadata`: An object of key-value strings to attach to the workflow run. This metadata will be shown in the Hatchet UI and will be available in API endpoints for listing/filtering.
### Returns
- [`WorkflowRunRef`](./get-workflow-results): A reference to the workflow run, with the output data type specified by the generic type parameter `P`.
### Exceptions
- `HatchetError`: Thrown if there is any error during the workflow spawning process, with the error message provided.
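To make the type parameters above concrete, here's a hedged sketch of a typed call; the `ApiTriggerInput` and `ApiTriggerOutput` shapes are illustrative assumptions, not part of the SDK:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";

// Illustrative shapes for the workflow's input (Q) and output (P)
type ApiTriggerInput = {
  test: string;
};

type ApiTriggerOutput = {
  step1: {
    result: string;
  };
};

const hatchet = Hatchet.init();

async function main() {
  const workflowRun = hatchet.admin.runWorkflow<ApiTriggerInput, ApiTriggerOutput>(
    "api-trigger-workflow",
    {
      test: "test",
    },
  );

  // `result` resolves to the step outputs, typed here as ApiTriggerOutput
  const result = await workflowRun.result();
  console.log(result.step1.result);
}

main();
```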

View File

@@ -0,0 +1,61 @@
# Running Child Workflows
Hatchet supports running child workflows from within a parent workflow. This allows you to create complex, dynamic workflows that don't map to the concept of a DAG.
To run a child workflow, you can use the `context.spawnWorkflow` method. For example:
```ts filename="child-workflow.ts" copy
const parentWorkflow: Workflow = {
id: "parent-workflow",
description: "Example workflow for spawning child workflows",
on: {
event: "fanout:create",
},
steps: [
{
name: "parent-spawn",
timeout: "10s",
run: async (ctx) => {
const promises: Promise<string>[] = [];
for (let i = 0; i < 5; i++) {
promises.push(
ctx
.spawnWorkflow("child-workflow", {
input: `child-input-${i}`,
})
.result(),
);
}
const results = await Promise.all(promises);
return {
results,
};
},
},
],
};
```
## Usage
### Type Parameters
- `Q`: The type of the input data for the workflow. Default is `JsonValue`.
- `P`: The type of the output data from the workflow. Default is `JsonValue`.
### Parameters
- `workflowName` (`string`): The name of the workflow to be spawned. This will be concatenated with the client's namespace to form the full workflow name.
- `input` (`Q`): The input data for the workflow. The type of this data is specified by the generic type parameter `Q`.
- `key` (`string`, optional): A caching key for the child workflow. If this is not set, the child workflow will be cached based on the index at which it was triggered. The cache is used on retries of the parent workflow so that child workflows which were already triggered are skipped.
### Returns
- [`WorkflowRunRef`](./get-workflow-results): A reference to the workflow run, with the output data type specified by the generic type parameter `P`.
### Exceptions
- `HatchetError`: Thrown if there is any error during the workflow spawning process, with the error message provided.
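As a hedged sketch, the type parameters and the `key` parameter described above might be used as follows (the child input/output shapes are illustrative, and `key` is assumed to be passed as the third argument per the parameter list above):
```ts
import { Step } from "@hatchet-dev/typescript-sdk";

// Illustrative shapes for the child workflow's input (Q) and output (P)
type ChildInput = {
  input: string;
};

type ChildOutput = {
  "child-step": {
    result: string;
  };
};

const parentSpawnTyped: Step = {
  name: "parent-spawn-typed",
  timeout: "10s",
  run: async (ctx) => {
    // "child-0" acts as the caching key, so retries of the parent
    // skip child workflows that were already triggered
    const childRef = ctx.spawnWorkflow<ChildInput, ChildOutput>(
      "child-workflow",
      { input: "child-input-0" },
      "child-0",
    );

    const childResult = await childRef.result();
    return { childResult };
  },
};
```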

View File

@@ -0,0 +1,26 @@
# Running Cron Workflows
You can declare a cron schedule by defining `cron` in the `Workflow.on` block. For example, to trigger a workflow every 5 minutes, you can do the following:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "example",
description: "test",
on: {
cron: "*/5 * * * *",
},
steps: [
{
name: "step1",
run: (input, ctx) => {
console.log("executed step1!");
return { step1: "step1" };
},
},
],
};
```

View File

@@ -0,0 +1,15 @@
# Running Workflows via Events
For workflows with event triggers, you can push events to the Hatchet API with the `hatchet.event.push` method:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
hatchet.event.push("user:create", {
test: "test",
});
```
The event's input data will be passed to the workflow run as the input, and is retrievable via the `ctx.workflowInput()` method.
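For example, here's a hedged sketch of a step that reads the pushed event payload via `ctx.workflowInput()`; the `UserCreateEvent` type is an assumption based on the payload above:
```ts
import { Step, Context } from "@hatchet-dev/typescript-sdk";

// Assumed shape of the `user:create` event payload pushed above
type UserCreateEvent = {
  test: string;
};

const handleUserCreate: Step = {
  name: "handle-user-create",
  run: (ctx: Context<UserCreateEvent>) => {
    // The event data is passed to the workflow run as its input
    const input = ctx.workflowInput();
    console.log("received event payload:", input.test);
    return { received: input.test };
  },
};
```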

View File

@@ -0,0 +1,22 @@
# Running Scheduled Workflows
Workflows can be scheduled from the API to run at some future time by calling `scheduleWorkflow`. This method is available on the `hatchet.admin` client:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const now = new Date();
hatchet.admin.scheduleWorkflow("workflowName", {
schedules: [now],
});
```
This method takes the following parameters:
- `workflowName` (**required**): The name of the workflow to schedule.
- `options` (**optional**): an object with the following properties:
- `schedules` (**optional**): An array of `Date` objects representing the times at which the workflow should be scheduled to run.
- `input` (**optional**): The input to the workflow. This should be a JSON-serializable object.
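For example, here's a hedged sketch that schedules a run ten minutes from now with input data; the workflow name is illustrative:
```ts
import Hatchet from "@hatchet-dev/typescript-sdk";

const hatchet = Hatchet.init();

// Ten minutes from now (illustrative schedule)
const tenMinutesFromNow = new Date(Date.now() + 10 * 60 * 1000);

hatchet.admin.scheduleWorkflow("first-typescript-workflow", {
  schedules: [tenMinutesFromNow],
  input: {
    test: "test",
  },
});
```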

View File

@@ -0,0 +1,79 @@
# Worker Configuration
Workers can be created via the `hatchet.worker()` method, after [instantiating a `hatchet` instance](./client). The `hatchet.worker()` method takes the following arguments:
- `name` (**required**): The name of the worker. This is used to identify the worker in the Hatchet UI.
- `maxRuns`: The maximum number of concurrent step runs that the worker can run. If not set, it defaults to `100`. Note that this value is different from the number of concurrent runs per workflow.
For example:
```ts
hatchet.worker("example-worker", 1); // this worker can run only 1 step at a time
```
## Registering Workflows
Workers can register workflows by calling the `worker.registerWorkflow` method with a workflow object. There is no limit to the number of workflows which can be registered for each worker. For example:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow1: Workflow = {
id: "workflow-1",
description: "Example workflow 1",
steps: [
{
name: "step-1-workflow-1",
timeout: "10s",
run: async (ctx) => {
console.log("step-1-workflow-1");
return {
"step-1-output": "results",
};
},
},
],
};
const workflow2: Workflow = {
id: "workflow-2",
description: "Example workflow 2",
steps: [
{
name: "step-1-workflow-2",
timeout: "10s",
run: async (ctx) => {
console.log("step-1-workflow-2");
return {
"step-1-output": "results",
};
},
},
],
};
async function main() {
const worker = await hatchet.worker("example-worker");
await worker.registerWorkflow(workflow1);
await worker.registerWorkflow(workflow2);
worker.start();
}
main();
```
## Starting a Worker
Workers can be started by calling `worker.start`. We recommend that `worker.start` is the last call made when running a worker. For example:
```ts
async function main() {
const worker = await hatchet.worker("example-worker");
await worker.registerWorkflow(workflow1);
worker.start();
}
main();
```

View File

@@ -0,0 +1,130 @@
# Workflow Configuration
To create a workflow, simply create a new `Workflow` object. For example, a simple 2-step workflow would look like:
```ts
import Hatchet, { Workflow } from "@hatchet-dev/typescript-sdk";
const hatchet = Hatchet.init();
const workflow: Workflow = {
id: "example",
description: "test",
on: {
event: "user:create",
},
steps: [
{
name: "step1",
run: (ctx) => {
console.log("executed step1!");
return { step1: "step1" };
},
},
{
name: "step2",
parents: ["step1"],
run: (ctx) => {
console.log("executed step2!");
return { step2: "step2" };
},
},
],
};
```
You'll notice that the workflow defines a workflow trigger (in this case, on the event `user:create`) and the workflow definition. The workflow definition includes a series of steps, which is simply an array of `Step` objects.
Each step has a `run` prop, which is a function that takes a `context` argument. The `context` argument is a `Context` object, which contains information about the workflow, such as the input data and the output data of previous steps.
To create multi-step workflows, you can use `parents` to define the steps which the current step depends on. In the example, `step2` will not invoke until after `step1` completes.
## Retrieving Workflow Input Data
You can get access to the workflow's input data, such as the event data or other specified input data, by using the `ctx.workflowInput()` method on the `context` argument, which is the first argument to the step function. It's also recommended that you typecast this to the `Context<T>` type, where `T` is the type of the input data. For example:
```ts
type MyType = {
name: string;
};
const stepPrintsInput: Step = {
name: "step2",
parents: ["step1"],
run: (ctx: Context<MyType>) => {
console.log("executed step2!", ctx.workflowInput().name);
},
};
```
## Step Outputs
Step outputs should be of type `Record<string, any>`, should be `JSON` serializable, and are optional. For example:
```ts
const stepReturnsData: Step = {
name: "step2",
run: (ctx) => {
return { awesome: "data" };
},
};
```
Future steps can access this output through the context (`ctx`) parameter via `ctx.stepOutput("<step_name>")`. In this example, a future step could access this data via `ctx.stepOutput("step2")`:
```ts
const futureStep: Step = {
name: "step3",
run: (ctx) => {
const uppercaseStep2 = ctx.stepOutput("step2")["awesome"].toUpperCase();
return { uppercase: uppercaseStep2 };
},
};
```
Remember, a step that depends on previous step data should include this dependency in the `parents` array.
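For instance, the `futureStep` above with its dependency declared might look like this (a hedged sketch):
```ts
const futureStep: Step = {
  name: "step3",
  // step3 reads step2's output, so step2 is declared as a parent
  parents: ["step2"],
  run: (ctx) => {
    const uppercaseStep2 = ctx.stepOutput("step2")["awesome"].toUpperCase();
    return { uppercase: uppercaseStep2 };
  },
};
```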
## Timeouts
**The default timeout on Hatchet is 60 seconds per step run**.
You can declare a timeout for a step by passing `timeout` to the `Step` definition. Timeouts are strings in the format of `1h`, `1m`, `1s`, etc. For example, to timeout a step after 5 minutes, you can do the following:
```ts
const stepWithTimeout: Step = {
name: "step2",
run: (ctx) => {
console.log("executed step2!");
return { step2: "step2" };
},
timeout: "5m",
};
```
## Cancellations
When a step is running and needs to be cancelled, Hatchet will send a cancellation signal to the step via its context. The signal can be retrieved via `ctx.controller.signal`, where `ctx.controller` is an [`AbortController`](https://developer.mozilla.org/en-US/docs/Web/API/AbortController); the signal can be passed to many HTTP libraries to cancel active requests. You can also check the `signal.aborted` property to see whether the step has been cancelled. For example:
```ts
{
"step1",
run: async (ctx) => {
const { data } = ctx.workflowInput();
const { signal } = ctx.controller;
if (signal.aborted) throw new Error("step1 was aborted");
console.log("starting step1 and waiting 5 seconds...", data);
await sleep(5000);
if (signal.aborted) throw new Error("step1 was aborted");
// NOTE: the AbortController signal can be passed to many http libraries to cancel active requests
// fetch(url, { signal })
// axios.get(url, { signal })
console.log("executed step1!");
return { step1: `step1 results for ${data}!` };
},
},
```