
Performance isn't just latency - it's also about the requests and responses per second a service can handle.

GQL notoriously performs worse than REST in this regard.

>When a user navigates to a page on a web app or deep links into a Single Page Application (SPA) or a particular view in a mobile app, the frontend application needs to call the backend service to fetch the data needed to render the view. With RESTful APIs, it is unlikely that a single call will be able to get all the data. Typically, one call is made, then the frontend code iterates through the results of that call and makes more API calls per result item to get all the data needed.

>For example, I don’t want to navigate through multiple screens to review my travel itinerary; I want to see the summary (including flights, car rental, and hotel reservation) all on one screen before I commit to making the purchase.

I disagree.

Sure, sometimes more than one RESTful call is made for a single page, but the example given, when architected properly, is just a foreign-key relationship from the user to its children (reservations, flights, rentals, etc.), which is a single SQL query.

I don't think the author really understands the flexibility of REST and how much it can emulate what GQL has to offer.

If you want specificity in your query fetching, just add query params or put them in the request body.

And if it is the case that you really do need resources from different endpoints, what is preventing you from building that single endpoint in REST?
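For example, field selection can ride on a plain query parameter in REST. Here's a minimal Python sketch of the idea (the handler, record, and field names are all made up for illustration):

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical in-memory record standing in for a database row.
ITINERARY = {"id": 7, "flight": "UA 110", "hotel": "Hyatt", "car": "Hertz", "total": 842.50}

def get_itinerary(url: str) -> dict:
    """REST handler sketch: honor an optional ?fields=a,b,c query param."""
    query = parse_qs(urlparse(url).query)
    if "fields" not in query:
        return ITINERARY  # no filter: return the full representation
    wanted = query["fields"][0].split(",")
    return {k: v for k, v in ITINERARY.items() if k in wanted}

# The client asks only for what the summary screen needs:
print(get_itinerary("/itineraries/7?fields=flight,hotel,car"))
# {'flight': 'UA 110', 'hotel': 'Hyatt', 'car': 'Hertz'}
```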



Isn't a big feature of GQL that it allows you to filter in the query, thus allowing the backend to fetch and render less data?

Also, which properties of GQL make it inherently slower than REST? Both are abstract concepts. (Also, I'd argue that GQL is REST.)


> Isn't a big feature of GQL that it allows you to filter in the query, thus allowing the backend to fetch and render less data?

That feature is not exclusive to GQL.


Agree, but I believe the distinction is that in GraphQL this is part of the specification, to address overfetching/underfetching issues.


That is more related to the frontend; the frontend doesn't have to overfetch data.

But the GraphQL server will still fetch that data and just filter what goes out; it still has to get that data.

An example is a query like:

```
{
  currentUser {
    id
    name
    todoLists {
      title
      items {
        name
      }
    }
  }
}
```

The resolver will likely fetch the whole user object from the database, then send only the id and name. Once it has the user, it will query for the todo lists and send only each title (even though it got the whole row for each list). Then, after fetching those lists, it will query for the items, again retrieving the whole row for each item.

The data the server needed to fetch didn't change, just what the frontend receives. The server still loads all the data for that query; GraphQL just filters the results before they leave the server.

Also, notice in the steps above that it queries AGAIN after each data set has been retrieved; this causes the N+1 problem.

Nothing inherent in the spec or the implementations fixes these. If you want to avoid fetching whole objects you need custom code, and to avoid the N+1 problem you need batching that caches or consolidates nested fetches within a request (like DataLoader), plus some form of response caching.
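To make the N+1 and the batching fix concrete, here's a toy Python sketch (fake in-memory tables and hypothetical names; a real DataLoader also dedupes and caches per request):

```python
QUERIES = []  # log of the "SQL" issued

# Fake tables: todo lists per user id, items per list id.
TODO_LISTS = {1: [{"id": 10, "title": "Work"}, {"id": 11, "title": "Home"}]}
ITEMS = {10: [{"name": "ship"}], 11: [{"name": "mow"}, {"name": "shop"}]}

def fetch_items_naive(list_ids):
    # N+1 style: one query per todo list.
    out = []
    for lid in list_ids:
        QUERIES.append(f"SELECT * FROM items WHERE list_id = {lid}")
        out.append(ITEMS[lid])
    return out

def fetch_items_batched(list_ids):
    # DataLoader style: collect the keys, issue one query for the whole batch.
    QUERIES.append(f"SELECT * FROM items WHERE list_id IN ({', '.join(map(str, list_ids))})")
    return [ITEMS[lid] for lid in list_ids]

lists = [l["id"] for l in TODO_LISTS[1]]
fetch_items_naive(lists)    # 2 queries (one per list)
fetch_items_batched(lists)  # 1 query for both lists
print(len(QUERIES))  # 3 total: 2 naive + 1 batched
```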

Not siding against the tech, just clarifying those cons.


Yes, the client queries for only the data it needs and the server returns only the data the client requested.

With this query:

```
{ currentUser { id name todoLists { title items { name } } } }
```

It is up to the server how it is implemented.

- The server can fetch all the data for the user, todolist and items from the database in one go and resolve the client query mentioned above. In this case there will be overfetching from the database if the client only requested user information.

The server can also fetch the data in three queries:

1. Fetch the user, let's say with id 1.

2. Get all the todos for user id 1.

3. Get all the items for all the todos from step 2 (batching/DataLoaders).

All these queries can be executed in parallel on the server side. Does this make the server more complex? Yes, but there is also a benefit: when the client only requests currentUser, the server does not fetch any todo lists or items from the database.
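A rough sketch of that last point, where the server only touches the tables the query actually asked for (the resolver and table names are invented, and the "SQL" is just logged strings):

```python
def resolve_current_user(requested: set, user_id: int) -> dict:
    """Sketch: only hit the todo-list/item tables when the query asks for them."""
    issued = []
    issued.append(f"SELECT id, name FROM users WHERE id = {user_id}")
    result = {"id": user_id, "name": "Ada"}
    if "todoLists" in requested:
        issued.append(f"SELECT id, title FROM todo_lists WHERE user_id = {user_id}")
        result["todoLists"] = [{"title": "Work"}]
        if "items" in requested:
            issued.append("SELECT name FROM items WHERE list_id IN (...)")
    result["_queries"] = issued  # exposed here just to show the query count
    return result

# "{ currentUser { id name } }" touches only the users table:
print(len(resolve_current_user({"id", "name"}, 1)["_queries"]))  # 1
```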


I may be wrong about this, but here's how I see things: I think the two main variables performance depends on are latency and how much compute you need. And the variables that depend on how much compute you need are server costs and the architecture of your application.

If we assume that REST serves more requests than GraphQL, but GraphQL is more flexible, that may be a trade-off between application dev time and infrastructure dev time, which would fit with more and more people using orchestration and the like. That might be a very wrong assumption, though.


personally I found GQL to have a steeper learning curve, in that it adds an abstraction layer to the stack. I call it an abstraction layer; the author calls it a BFF (backend for the frontend).

As far as dev time (excluding onboarding time) goes, let's not forget why GQL was created: Facebook wanted to separate the data responses for their mobile and web platforms.

first of all, not all organizations are Facebook

secondly, especially for small to medium-sized startups, not all responses need to be separated, and "shaving" data off via an extra endpoint is not difficult in REST.

and I remember GraphQL's early website claiming that you'd need some really huge number of endpoints to emulate what GraphQL has to offer; they have since taken that down because of how ridiculous it sounded.

In reality and in practice, the number of endpoints you need to "shave", again, especially for small to medium startups, is slim to none.


> Performance isn't just latency - it's also about the requests and responses per second a service can handle.

Performance is 99% about latency. And, when having this conversation, we are typically talking about I/O-bound latency.

Assume you have a trivial computer system that can only process 1 thing at a time. The number of things you can process per unit time is inversely proportional to the amount of time it takes to process each thing. Adding more threads/cores to the mix does not change this fundamental equation. Many practical business systems have a large component of processing that must occur in a serial fashion.

Think:

Multiple computers: milliseconds. One computer: nanoseconds.

How much more work can you do if you can finish each unit in 100ns vs 5ms? How would this impact the user experience? Is the reason you can't finish in 100ns because you are doing global illumination on the CPU, or because you are waiting for [hipster noSQL solution] to return some packets to your application server?
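The back-of-the-envelope math for that serial case:

```python
# Serial system: throughput = 1 / latency (units of work per second).
fast = round(1 / 100e-9)  # 100 ns per unit -> 10,000,000 units/sec
slow = round(1 / 5e-3)    # 5 ms per unit   ->        200 units/sec
print(fast, slow, fast // slow)  # 10000000 200 50000
```

Fifty thousand times more work per second, before adding any cores.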


sure, I can accept that definition of latency. However, when describing latency, the author only cited the distance traveled to justify a GraphQL server, without addressing anything else.


In this comment you've made a claim that GQL is not only weaker than REST here, but notoriously weaker. The only argument you seem to have made is that a REST call "can" do the same types of things that are built into GraphQL.

You haven't supported your position that GraphQL is weaker at all. Everything that I have read, seen and heard is exactly the opposite. GraphQL seems to be the "I will never go back to REST" experience for virtually everybody I know in the field who has used it.

The other issue is that you're fighting the nature of REST itself to make it do more. Sure, you can make any API call take in specific input and send out exactly the output that you need...but REST API guidelines tend to preach exactly the opposite approach. The call should return one thing and if you want more, you need to get it yourself.

I'm open to hearing your perspective on this, but if you're going to make a claim like that you really need to support it.



