Ask HN: Is OpenAPI enough for LLM-based API integrations?

We've been running into a recurring issue while using LLMs (Cursor, agents, etc.) to build real API integrations.

OpenAPI is excellent at describing request/response shape, but it doesn't capture execution semantics that matter in production (concrete example below), such as:

- retries and backoff rules
- idempotency guarantees
- auth and token refresh behavior
- SDK-specific constraints
- "what must not be done" during execution
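
To make the gap concrete, here's a minimal hypothetical OpenAPI fragment for a payments endpoint. The request/response shape is fully described, yet nothing tells an agent whether createPayment is safe to retry on a 500, or that it expects an Idempotency-Key header:

    paths:
      /payments:
        post:
          operationId: createPayment
          responses:
            '201':
              description: Payment created
            '500':
              description: Server error

An agent working from this alone will happily retry a failed POST and double-charge someone.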

In addition, large OpenAPI specs tend to be hard for LLMs to consume incrementally: they don't chunk cleanly by operation or behavior, and important constraints get lost when context is truncated.

We've open-sourced an experiment called "Wreken spec": a small, explicit file that lives alongside OpenAPI / SDKs and encodes execution rules and constraints (rough sketch below) in a way that's intentionally:

- operation-scoped
- chunkable / retrievable independently
- designed for machines (LLMs), not humans
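
To give a rough sense of the shape (an illustrative sketch with simplified field names, not the exact schema — see the repo below), an operation-scoped entry might look like:

    # wreken.yaml -- illustrative sketch, not the exact schema
    operation: createPayment
    retries:
      policy: exponential-backoff
      max_attempts: 3
      require_idempotency_key: true
    idempotency:
      header: Idempotency-Key
      reuse_key_on_retry: true
    auth:
      token_refresh: proactive   # refresh before expiry, not on 401
    never:
      - retry without reusing the original Idempotency-Key
      - log the Authorization header

Because each entry is scoped to one operation, it can be retrieved and handed to a model independently of the rest of the spec.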

This is very early, and we're not confident this is the right abstraction.

We'd love feedback on:

- whether this should exist as a separate file at all
- whether it belongs as OpenAPI extensions instead (sketched below)
- whether we're reinventing something that already exists
- failure modes or scaling issues we may be overlooking
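
On the extensions question specifically: the same information could ride along in the OpenAPI document itself as x- vendor extensions on each operation, something like this (x-execution is a hypothetical name):

    paths:
      /payments:
        post:
          operationId: createPayment
          x-execution:   # hypothetical extension mirroring the sketch above
            retries:
              policy: exponential-backoff
              max_attempts: 3
            idempotency:
              header: Idempotency-Key

The trade-off we keep running into: extensions keep everything in one file, but they make an already-large spec larger and no easier to chunk.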

Spec + examples: https://gitlab.com/swytchcode/wrekenfile

Context / motivation: https://wreken.com

Genuinely looking for critique—happy to be told this is the wrong direction.

1 point | by chilarai 3 hours ago

1 comment

  • verdverm 56 minutes ago
    Nobody really wants even more files for AI. It's gotten bad enough with all the vendors requiring their name in the file or dir name. It's out of hand, and they need to all agree on one.

    So asking users to put yet another file of dubious value in their repo is going to be a major hurdle.

    Fwiw, my agentic framework turns OpenAPI into a toolset automatically for me, so you are def making something I already have.