UI vs. API vs. UAI

(joshbeckman.org)

57 points | by bckmn 5 hours ago

8 comments

  • showerst 3 hours ago
    I really vehemently disagree with the 'feedforward, tolerance, feedback' pattern.

    Protocols and standards like HTML built around "be liberal with what you accept" have turned out to be a real nightmare. Best-guessing the intent of your caller is a path to subtle bugs and behavior that's difficult to reason about.

    If the LLM isn't doing a good job calling your API, then make the LLM smarter or rebuild the API; don't make the API looser.
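
    A minimal sketch of that strict posture (hypothetical endpoint and field names, in Python): fail loudly instead of guessing, so the caller, human or LLM, gets an error it can actually correct.

```python
def create_reservation(payload: dict) -> dict:
    """Strict validation sketch: reject unknown or mistyped fields outright."""
    allowed = {"date", "party_size"}
    unknown = set(payload) - allowed
    if unknown:
        # A "liberal" API might silently drop these or guess at synonyms;
        # failing loudly gives the caller something it can correct.
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    if not isinstance(payload.get("party_size"), int):
        # No best-guess coercion of "2" -> 2; require the documented type.
        raise TypeError("party_size must be an integer")
    return {"status": "created", **payload}
```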

    • mort96 2 hours ago
      I'm not sure it's possible to have a technology that's user-facing with multiple competing implementations, and not also, in some way, "liberal in what it accepts".

      Back when XHTML was somewhat hype and there were sites which actually used it, I recall being met with a big fat "XML parse error" page on occasion. If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common, simply because developers sometimes write bugs and many websites are server-generated with dynamic content. I'm 100% convinced that some browser would decide to implement special rules in their XML parser to try to recover from errors. And then, that browser would have a significant advantage in the market; users would start to notice, "sites which give me an XML Parse Error in Firefox work well in Chrome, so I'll switch to Chrome". And there you have the exact same problem as HTML, even though the standard itself is strict.

      The magical thing about HTML is that they managed to make a standard, HTML 5, which incorporates most of the special-case rules as implemented by browsers. As such, all browsers would be lenient, but they'd all be lenient in the same way. A strict standard which mandates e.g. "the document MUST be valid XML" results in implementations which are lenient, but lenient in different ways.

      HTML should arguably have been specified to be lenient from the start. Making a lenient standard from scratch is probably easier than trying to standardize commonalities between many differently-lenient implementations of a strict standard like what HTML had to do.
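
      The contrast is easy to demonstrate with stdlib tooling (a Python sketch; note that html.parser is not a full HTML5 tree builder the way html5lib is, but it shows the error-tolerant posture that the HTML5 spec pins down precisely):

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<p><b>bold<p>next"  # unclosed / mis-nested tags

# Strict XML: one hard failure, the "big fat parse error" page.
try:
    ET.fromstring(broken)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# Lenient HTML: the parser emits events and never raises. (html5lib
# implements the spec's exact recovery rules, which is why every
# conforming parser builds the same tree from the same broken input.)
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

logger = TagLogger()
logger.feed(broken)
# xml_ok is False; logger.tags is ["p", "b", "p"]
```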

      • chowells 1 hour ago
        Are you aware of HTML 5? Fun fact about it: there's zero leniency in it. Instead, it specifies a precise semantics (in terms of parse tree) for every byte sequence. Your parser either produces correct output or is wrong. This is the logical end point of being lenient in what you accept - eventually you just standardize everything so there is no room for an implementation to differ on.

        The only difference between that and not being lenient in the first place is a whole lot more complex logic in the specification.

        • mort96 1 hour ago
          > Are you aware of HTML 5? Fun fact about it: there's zero leniency in it.

          I think you understand what I mean. Every byte sequence has a defined parse, so all implementations accept everything, and accept it in the same way; that's the "leniency" I was describing.

          > The only difference between that and not being lenient in the first place is a whole lot more complex logic in the specification.

          Not being lenient is how HTML started out.

      • com2kid 38 minutes ago
        > I recall being met with a big fat "XML parse error" page on occasion. If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common

        Except JSX is being used all over the place now, and JSX is basically the return of XHTML! JSX is an XML-like syntax with inline JavaScript.

        The difference nowadays is all in the tooling. It is either precompiled (so the devs see the error) or generated on the backend by a proper library, not someone YOLOing PHP to superglue strings together, which is how dynamic pages were generated in the glory days of XHTML.

        We've basically come full circle back to XHTML, but with a lot more complications and a worse user experience!

      • lucideer 1 hour ago
        History has gone the way it went & we have HTML now, so there's not much point harking back, but I still find it very odd that people today - with the wisdom of hindsight - believe that the world opting for HTML & abandoning XHTML was the sensible choice. It seems odd to me that it's not seen as one of those "worse winning out" stories in the history of technology, like Betamax.

        The main argument about XHTML not being "lenient" always centred around the client UX of error display. Chrome even went on to implement a user-friendly partial-parse/partial-render handling of XHTML files that solved everyone's complaints via UI design, without any spec changes, but by that stage it was already too late.

        The whole story of why we went with HTML is somewhat hilarious: one guy wrote an ill-informed blog post bitching about XHTML, generated a lot of hype, made zero concrete proposals to solve its problems, & then somehow convinced major browser makers (his current & former employers) to form an undemocratic rival group to the W3C, in which he was appointed dictator. An absolutely bizarre story for the ages; I do wish it were documented better, but alas most of the resources around it were random dev blogs that link-rotted.

        • integralid 17 minutes ago
          >The whole story of

          Is that really the story? I think it was more like "the backward-compatible solution won out over the purer, theoretically better solution".

          There's an enormous amount of non-XHTML legacy that nobody wanted to port. And tooling back in the day didn't make it easy to write correct XHTML.

          Also, like it or not, HTML is still written by humans sometimes, and they don't like the parser blowing up because of a minor problem. Especially since such problems are often detected late, and a page which displays slightly wrong is a much better outcome than the page blowing up.

      • pwdisswordfishz 33 minutes ago
        lol CVE-2020-26870
    • arscan 3 hours ago
      > Protocols and standards like HTML built around "be liberal with what you accept" have turned out to be a real nightmare.

      This feels a bit like the setup to the “But you have heard of me” joke in Pirates of the Caribbean (2003).

      • paulddraper 1 hour ago
        Or "There are only two kinds of languages: the ones people complain about and the ones nobody uses."
  • metayrnc 4 hours ago
    This is already true for just UI vs. API. It’s incredible that we weren’t willing to put the effort into building good APIs, documentation, and code for our fellow programmers, but we are willing to do it for AI.
    • bubblyworld 4 hours ago
      I think this can kinda be explained by the fact that agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. There's a lack of incentive in the human direction (and in a business setting that means priority goes to other stuff, unfortunately).

      In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).

      • zahlman 2 hours ago
        > agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. ... In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).

        Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.

      • freedomben 4 hours ago
        I also think it makes a difference that an AI agent can read the docs very quickly, and doesn't typically care about formatting and other presentation-level things that humans have to care about, whereas a human isn't going to read it all, and may read very little of it. I've been at places where we invested substantial time documenting things, only to have it glanced at maybe a couple of times before becoming outdated.

        The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but as Spock would say, it does seem logical.

    • righthand 3 hours ago
      We are only willing to have the LLM generate it for AI. Don't worry, people are writing and editing less.

      And all those tenets of building good APIs, documentation, and code run opposite to the incentives, which favor enshittified APIs, documentation, and code.

  • cco 3 hours ago
    We recently released isagent.dev [1] exactly for this reason!

    Internally at Stytch, three sets of folks had been working along similar paths here, e.g. device auth for agents, serving a different documentation experience to agents vs. human developers, etc., and we realized it all comes down to a brand new class of users on your properties: agents.

    IsAgent was born because we wanted a quick and easy way to identify whether a user agent on your website was an agent (a user-permissioned agent, not a "bot" or crawler) or a human, and then give you super clean <IsAgent /> and <IsHuman /> components to use.

    Super early days on it, happy to hear others are thinking about the same problem/opportunity.

    [1] GitHub here: http://github.com/stytchauth/is-agent

  • throwanem 3 hours ago
    So, this gets to a fundamental, or "death of the author", i.e. philosophical, difference in how we define what an API is "for." Do I as its publisher have final say, to the extent of forbidding mechanically permissible uses? Or may I as the audience, whom the publisher exists to serve, exercise the machine to its not intentionally destructive limit, trusting its maker to prevent normal operation causing (even economic) harm?

    The answer of course depends on the context and the circumstance, admitting no general answer for every case, though the cognitively self-impoverishing will, as ever, seek to show otherwise. What is undeniable is that if you didn't specify your reservations API to reject impermissible or blackout dates, sooner or later, whether via AI or otherwise, you will certainly come to regret that. (Date pickers, after all, being famously among the least bug-prone of UI components...)
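
    A sketch of that point (hypothetical blackout set, in Python): the API enforces the date rules itself rather than trusting whatever the UI's date picker happened to allow through.

```python
from datetime import date

# Hypothetical blackout dates; in a real system these would come from
# configuration or a database, not a literal.
BLACKOUT = {date(2030, 12, 25)}

def book(day: date) -> str:
    """Server-side enforcement: never trust the client's date picker."""
    if day in BLACKOUT:
        raise ValueError("date unavailable")
    if day < date.today():
        raise ValueError("date is in the past")
    return "booked"
```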

  • kordlessagain 2 hours ago
    All you need is AHP: https://ahp.nuts.services
  • kylecazar 3 hours ago
    Separating the presentation layer from business logic has always been a best practice.
  • jngiam1 4 hours ago
    https://mcpui.dev/ is worth checking out; really nice project that gives you the tools to bring dynamic UI to agents.
  • darepublic 4 hours ago
    If you want your app to be automated, wouldn't you just publish your API and make it readily available? I understand the need for agentic UI navigation, but obviously an API is still easier and less compute-intensive, right? The problem is that it isn't always available, and that's where UI agents can circumvent the gap. But if you want to embrace the automation of your app... just work on your API? You can put an invisible node in your UI to tell agents to stop wasting compute and use the API.