
I'll never understand why the HATEOAS meme hasn't died.

Is anyone using it? Anywhere?

What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server it is talking to?



> I'll never understand why the HATEOAS meme hasn't died.

> Is anyone using it? Anywhere?

As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.

Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.

> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server it is talking to?

LLMs seem to do well at this.

And remember that ‘auto-discovery’ means different things. A link with the relation type ‘next’ enables auto-discovery of the next resource (whatever that means); it still assumes some pre-existing knowledge in the client of what ‘next’ actually means.
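For a concrete sense of what that looks like in ACME, here's a rough sketch of the discovery step (the directory URL and field names come from RFC 8555 and Let's Encrypt; the rest, including the absence of error handling, is just illustrative):

    # Sketch: an ACME client doesn't hardcode endpoint paths. It fetches the
    # directory resource and follows whatever URLs the server advertises.
    import json
    import urllib.request

    DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory"

    with urllib.request.urlopen(DIRECTORY_URL) as resp:
        directory = json.load(resp)

    # The client knows the *meaning* of these keys, not where they live:
    new_nonce_url = directory["newNonce"]
    new_account_url = directory["newAccount"]
    new_order_url = directory["newOrder"]

The server is free to move those endpoints around; a conforming client never notices.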


> As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.

In this case specifically, everybody's lives are worse because of that.


I'm not super familiar with ACME, but why is that? I usually dislike the HATEOAS approach, but I've never really seen it used seriously, so I'm curious!


Yes. You used it to enter this comment.

I am using it to enter this reply.

The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.


This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.


I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.

I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:

https://htmx.org/essays/hypermedia-clients/

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

AI may change this at some point.


If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.


It formalized the network architecture of distributed hypermedia systems and described interesting characteristics and tradeoffs of that approach. Whether or not it did a GOOD job of that for the layman I will leave to you, only noting the confusion around the topic found, ironically, across the internet.


At that level, it would be infinitely clearer to say, "There is no such thing as a RESTful API, since the purpose of REST is to connect a system to a human user. There is only such a thing as a RESTful UI based on an underlying protocol (HTML/HTTP). But the implementation of this protocol (the web browser) is secondary to the actual purpose of the system, which is always a UI."


There is such a thing as a RESTful API, and that API must use hypertext, as is clearly laid out in Fielding's dissertation. I don't know what a RESTful UI is, but I do know what a hypertext is, how a server can return a hypertext, and how a client can receive that hypertext and present it to a user to select actions from.

Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it, although it does change how useful the aspects of REST (in particular, the uniform interface) will be to that client.


> and that API must use hypertext

I'd say that my web browser is not using hypertext. It is merely transforming it so that I can use the resulting hypermedia, and thereby interface with the remote host. That is, my browser isn't the one that decides how to interface with the remote host; I am. The browser implements the hypertext protocol and presents me a user interface to the remote host.

Fielding might have a peculiar idea of what an "API" is, so that a "human + browser" is a programmatic application, but if that's what he says, then I think his ideas are just dumb and I shouldn't bother listening to him.

> Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it

There's no way for a "script client" to use hypertext without implementing a fixed protocol on top of it, which is allegedly not-RESTful. Unless you count a search engine crawler as such a client, I guess, but that's secondary to the purpose of hypertext.


From wikipedia's article on API[1]:

> An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software.[1] A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.

The server and browser are two different computer programs. The browser understands how to make an API connection to a remote server and then take an HTML response it receives (if it gets one of that media type) and transform it into a display to present to the user, allowing the user to choose actions found in the HTML. It then understands how to take actions by the user and turn those into further API interactions with the remote system or systems.

The fact that the browser waits for a human to intervene and make choices (though not always; consider redirects) doesn't make the overall system any less of a distributed one, with pieces of software integrating via APIs following a specific network architecture, namely what Fielding called REST.

Your intuition that this idea doesn't make a lot of sense for a script-client is correct:

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

[1] - https://en.wikipedia.org/wiki/API


More broadly, I dislike the characterization of the web browser as the "client" in this situation. After all, the browser isn't the recipient of the remote host's services: it's just the messenger or agent on behalf of the (typically human) user, who is the real client of the server, and the recipient of the hypermedia it offers via a hypertext protocol.

That is, the browser may be communicating with the remote server (using APIs provided by the local OS), but it is not itself interfacing with the server, i.e., being offered a service for its own benefit. It may possibly be said that the whole system of "user + browser" interfaces with the remote server, but then it is no longer an application.

(Of course, this is all assuming the classical model of HTML web pages presented to the user as-is. With JS, we can have scripts and browser extensions acting for their own purposes, so that they may be rightly considered "client" programs. But none of these are using a REST API in Fielding's sense.)


OK, I understand you dislike it. But by any reasonable standard the web is a client/server distributed system, where the browsers are the clients. I understand you don't feel like that's right, but objectively that's what is going on. The browser is interfacing with the remote server, via an API discovered in the hypertext responses, based on actions taken by the users. It is no different than, for example, an MMORPG connecting to an API based on user actions in the game, except that here the actions are discovered in the hypertext responses. That's the crux of the uniform interface of REST.

I don't know what "for its own benefit" means.


So, given a HATEOAS API and stock Firefox (or Chrome, or Safari, or whatever), it will generate client views with CRUD functionality?

Let alone UX affordances, branding, etc.


Yes. You used such an API to post your reply. And I am using it as well, via the affordances presented by the mobile Safari hypermedia client program. Quite an amazing system!


No. I was served HTML, not a JSON response that the browser discovered how to display.


Yes. Exactly.


The connection between the "H" in HTML and the "H" in HATEOAS might help you connect some dots.


HTML is the HATEOAS response.


The web browser is just following direct commands. The auto-discovery and logic are implemented by my human brain.



I also use Google Maps, YouTube, Spotify, and Figma in the same web browser. But surely most of the functionality of those would not be considered HATEOAS.


Yes, very strongly agree. Browsers, through REST's optional code-on-demand constraint, have become so powerful that people have started to build RPC-style applications in them.

Ironic that Fielding's dissertation contained the seed of REST's destruction!


Wait what? So everything is already HATEOAS?

I thought the “problem” was that no one was building proper restful / HATEOAS APIs.

It can’t go both ways.


The web, in traditional HTML-based responses, uses HATEOAS, almost by definition. JSON APIs rarely do, and when they do it's largely pointless.

https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...


I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.

The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
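To make the kludge concrete: bolting links onto plain JSON usually ends up looking something like HAL's _links convention, sketched here with a made-up resource (Python literal just for illustration):

    # Hypothetical order resource with HAL-style links grafted on.
    # The client still needs out-of-band knowledge of what "cancel"
    # means and which HTTP method to use on it.
    order = {
        "status": "processing",
        "total": "42.00",
        "_links": {
            "self":   {"href": "/orders/123"},
            "cancel": {"href": "/orders/123/cancel"},
        },
    }

    cancel_url = order["_links"]["cancel"]["href"]

It works, but compared to an anchor or form in HTML it carries far less of the interaction semantics, which is where the kludgy feeling comes from.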


https://htmx.org/ might be the closest attempt?


https://data-star.dev takes things a bit further in terms of simplicity, performance, and hypermedia concepts. Worth a look.


I think OData isn't used, and that's a proper standard and a lower bar to clear. HATEOAS isn't even benefiting from a popular standard, which is both a cause and a result.


You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.

The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.

We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.


HATEOAS is anything that serves the talking point now apparently


For a traditional web application, HATEOAS is exactly that. HTML as the engine of application state: the application state is whatever the server returns, and we can assess it at any time by using our eyeballs to view the HTML. For these applications, HTML is not just a presentation layer; it is the data.

The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.
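A minimal sketch of that last point, using only the standard library (the entry URL is a placeholder): a program that knows nothing about the site beyond its entry point, and discovers every further endpoint from the hypermedia it is served.

    # Sketch: crawl by following links found in the HTML responses themselves.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import urllib.request

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def discover(url):
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = LinkCollector()
        parser.feed(html)
        return [urljoin(url, href) for href in parser.links]

    # Hypothetical entry point; everything past this is server-driven.
    print(discover("https://example.com/"))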


What do you mean? Both HATEOAS and REST have clear definitions.


Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?


Not totally sure I understand your question, sorry if I don't quite answer it here.

With REST you need to know a few things, like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that, though: the HTML defines what actions I can take and what other pages I can visit.

RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
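Roughly, the difference in where the knowledge lives looks like this (the URLs, field names, and the "links"/"next" convention are all hypothetical, purely for illustration):

    import json
    import urllib.request

    # RPC style: the client carries a map of the server in its head.
    # Endpoint layout and response schema are known (and fixed) up front.
    BASE = "https://api.example.com/v2"
    def next_page_rpc(page):
        with urllib.request.urlopen(f"{BASE}/items?page={page}") as resp:
            return json.load(resp)["items"]

    # Hypermedia style: the client knows how to parse the media type and what
    # the generic relation "next" means; the server supplies the actual URL.
    def next_page_rest(current_document):
        next_url = current_document.get("links", {}).get("next")
        if next_url is None:
            return None  # the server advertises nothing further to do
        with urllib.request.urlopen(next_url) as resp:
            return json.load(resp)

The first function breaks as soon as the server reorganizes its URLs; the second keeps working as long as the server keeps advertising a "next" link.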



