Mirror of https://github.com/hatchet-dev/hatchet.git (synced 2026-01-02 06:39:57 -06:00)
* feat: initial mkdocs setup
* chore: lock
* fix: config + start getting docs working
* fix: remove lots more redundant :type docs, update config more
* feat: split up clients
* feat: add pydoclint
* fix: rm defaults from docstrings
* fix: pydoclint errors
* feat: run pydoclint in ci
* fix: lint on 3.13
* debug: try explicit config path
* fix: ignore venv
* feat: index, styling
* fix: rm footer
* fix: more style tweaks
* feat: generated docs
* fix: refactor a bit
* fix: regen
* Revert "fix: regen". This reverts commit 7f66adc77840ad96d0eafe55c8dd467f71eb50fb.
* feat: improve prompting
* feat: add docs, modify theme config to enable toc for docs
* fix: lint
* fix: lint
* feat: regenerate
* feat: bs4 for html parsing
* feat: preview correctly
* fix: exclude site subdir from all the linters
* refactor: break up script into components
* feat: remove a bunch more stuff from the html
* feat: prettier, enable toc
* fix: enable tocs in more places + sort properly
* fix: code blocks, ordering
* fix: ordering
* feat: finish up feature clients
* fix: rm unused deps
* fix: routing + property tags + sidebar
* fix: hatchet client + formatting
* fix: allow selecting single set of files
* fix: lint
* rm: cruft
* fix: naming
* fix: runs client attrs
* fix: rm cruft page
* feat: internal linking + top level description
* [Python]: Fixing some more issues (#1573)
* fix: pass priority through from the task
* fix: improve eof handling slightly
* chore: version
* fix: improve eof handling
* fix: send prio from durable
* fix: naming
* cleanup: use a variable
* chore: version
* feat: comment explaining page depth thing
* chore: bump ver
* feat: standalone docs
* fix: prompting + heading levels
Python · 21 lines · 528 B
from openai import AsyncOpenAI
from pydantic_settings import BaseSettings

from docs.generator.prompts import create_prompt_messages


class Settings(BaseSettings):
    openai_api_key: str = "fake-key"


settings = Settings()

client = AsyncOpenAI(api_key=settings.openai_api_key)


async def parse_markdown(original_markdown: str) -> str | None:
    response = await client.chat.completions.create(
        model="gpt-4o", messages=create_prompt_messages(original_markdown)
    )

    return response.choices[0].message.content
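For context, a minimal usage sketch of the module above, not part of the file itself. It assumes OPENAI_API_KEY is set in the environment so pydantic-settings overrides the "fake-key" placeholder, and the import path and sample markdown string are invented for illustration.

import asyncio

# Hypothetical import path; the actual module name in the repo may differ.
from docs.generator.openai_client import parse_markdown


async def main() -> None:
    # Invented sample input; any generated markdown page would do here.
    original = "# Runs client\n\nAuto-generated API reference to clean up."
    cleaned = await parse_markdown(original)
    # parse_markdown returns None when the model response has no content.
    if cleaned is not None:
        print(cleaned)


asyncio.run(main())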