
Another reason to use GraphQL. No more endless discussion about which HTTP codes to use, which HTTP action to use, how to send in data, how to format your response, ...


How does GraphQL allow you to not worry about those things?


GraphQL only uses two HTTP methods (GET and POST), and they don't actually differ in function: you can perform any kind of read or write query via either GET or POST[1]. POST is commonly used because it allows for larger request bodies.

GraphQL defines the format of the response in case of errors[1]

GraphQL doesn't use HTTP status codes to communicate out-of-the-ordinary conditions. You can expect to always get HTTP status 200[1]

The data response mirrors the shape of the query you sent in, with the data filled in[2]

How to query for data is explicitly laid out[3]

How to send in parameters is explicitly laid out[4]

[1] https://graphql.org/learn/serving-over-http/ [2] https://graphql.org/learn/ [3] https://graphql.org/learn/queries/#fields [4] https://graphql.org/learn/queries/#variables
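To make the convention concrete, here is a minimal Python sketch (standard library only) of building a POST body the way graphql.org/learn/serving-over-http describes, with query/variables keys; the query text and the "user" field are made up for illustration:

```python
import json

def build_graphql_payload(query, variables=None):
    """Build the JSON body for a GraphQL-over-HTTP POST request.
    The "query" and "variables" keys follow the convention documented
    at graphql.org/learn/serving-over-http."""
    payload = {"query": query}
    if variables is not None:
        payload["variables"] = variables
    return json.dumps(payload)

body = build_graphql_payload(
    "query ($id: ID!) { user(id: $id) { id name } }",
    {"id": "42"},
)
print(body)
```

The same payload could travel as a URL-encoded `query` parameter on a GET request; only the transport changes, not the query language.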


AFAIK GraphQL does have response error codes. You should always have response error codes no matter the form of API; they are semantic information.


GraphQL does not have response error codes. It only dictates that any response with errors must include an "errors" list; beyond requiring a "message" on each entry, what an error contains is left to the implementation - it can carry a code, extra fields, etc.
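Because of that looseness, clients often wrap responses defensively. A small Python sketch (function name is mine) that tolerates both plain-string entries and the spec-shaped `{"message": ...}` objects:

```python
def error_messages(response):
    """Extract human-readable messages from a GraphQL response's
    "errors" list, tolerating both plain strings and objects
    (the spec only mandates a "message" key on each entry)."""
    messages = []
    for err in response.get("errors", []):
        if isinstance(err, str):          # some servers cut corners
            messages.append(err)
        elif isinstance(err, dict):
            messages.append(err.get("message", str(err)))
    return messages
```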


I could be wrong. Some of the graphql servers I have interacted with have given me useful error codes. E.g GitHub.

We do have nice wrappers around the request to raise a proper named exception regardless so it doesn’t matter.


And what about when intermediate proxies happily retry your little "delete the most recent record" request because it was a GET? In the rest of reality, GET is guaranteed not to alter state on the receiving server - except in the corner of the internet you've defined as your own, where it's destructive.


You can use GET for read queries and POST for write queries.

Edit: actually you made me think a bit more about this: if you make your mutations idempotent, spuriously retried POST requests shouldn't be a problem at all. However, "delete the last record" is not an idempotent operation by definition, but it's also one you wouldn't use in the real world - usually you delete by ID.

Edit 2: it's easy to make the server reject mutations sent via GET.
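A sketch of that rejection in Python, using a naive regex sniff of the top-level operation type (a real server would inspect its parser's AST instead; the function names are mine):

```python
import re

def operation_type(query):
    """Naively sniff the top-level GraphQL operation type."""
    stripped = re.sub(r"#[^\n]*", "", query).lstrip()  # drop comments
    match = re.match(r"(query|mutation|subscription)\b", stripped)
    # Shorthand "{ ... }" documents are queries by definition.
    return match.group(1) if match else "query"

def allow_over_get(query):
    """Only side-effect-free queries may travel over GET."""
    return operation_type(query) == "query"
```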


RPC over HTTP pretty much always goes through POST, sometimes with the option of using GET for querying as an optimisation.

For graphql specifically, if you allow GET-ing the GraphQL endpoint (which usually isn't the case by default), it's trivial to ensure only queries go through that method.


And adds a bunch of other endless discussions: how to cache data (POST is not cacheable), how to auth data (anyone has access to everything), how to...


I don't feel like this is added by GraphQL because you'll have these questions regardless of using GraphQL.

how to cache data (POST is not cacheable) -> You can use GET requests and GET requests are cacheable.

how to auth data (anyone has access to everything) -> Authentication or authorization? And what do you mean by "anyone has access to everything"?

how to... -> yes?


> You can use GET requests and GET requests are cacheable.

GET requests are a crutch added to GraphQL precisely because of the limitations of POST requests.

And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.

> how to auth data (anyone has access to everything) -> Authentication or authorization? What do you mean with anyone has access to everything?

Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.

Too bad, these fields can appear at any level of the hierarchy in the request, deal with it.

> how to... -> yes?

A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Oops, now you have to build complexity analysers and machinery to figure out recursion levels.

A GraphQL service usually collects data from several external services and/or a database (or even several databases). But remember, a GraphQL query is both ad-hoc and of potentially unbounded complexity. Oh, suddenly we have to think about how much data we retrieve and when, how we get it without fetching too much, and how we avoid hammering the external services and the database with thousands of extra requests.

And that's just off the top of my head.

And so you end up with piles of additional solutions of various quality and availability on top of GraphQL servers and clients: caching, persisted queries, etc.


> GET requests are a crutch added to GraphQL precisely because of limitation of POST requests.

How are GET requests a crutch? If anything GraphQL is completely agnostic to which HTTP method you use to access it. You don't even have to run GraphQL over HTTP, it can work over MQTT, NATS, telnet...

> And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.

Which is what any caching proxy must do anyway?

> Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.

In your GraphQL implementation you can just deny fulfilling requests that contain fields person X doesn't have access to. This problem is not limited to GraphQL, it's a generic authorization problem.
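For illustration, a toy Python version of that denial (the role table and the regex-based field extraction are mine; a real server would walk the parsed query's AST):

```python
import re

# Hypothetical mapping of caller role -> fields that role may select.
ROLE_ALLOWED = {
    "viewer": {"user", "id", "name"},
    "admin":  {"user", "id", "name", "email", "salary"},
}

def authorize(role, query):
    """Deny the whole request if it selects any field outside the
    caller's allowed set. Naive: treats every identifier as a field."""
    requested = set(re.findall(r"[A-Za-z_]\w*", query))
    return requested <= ROLE_ALLOWED[role]
```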

> A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Ooops, now you have to build complexity analysers and things to figure out recursion levels.

You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you. But you can go another way and just create a list of approved queries.
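The approved-queries approach (often called persisted queries) can be sketched in a few lines of Python: the server stores each vetted query under a hash, and clients send only the hash. Names here are hypothetical, not any particular library's API:

```python
import hashlib

# Allowlist of approved (persisted) queries, keyed by the SHA-256
# of their exact text.
APPROVED = {}

def register(query):
    """Server-side: vet a query once and store it under its hash."""
    digest = hashlib.sha256(query.encode()).hexdigest()
    APPROVED[digest] = query
    return digest

def lookup(digest):
    """Request-time: return the approved query text, or None for
    anything that was never registered."""
    return APPROVED.get(digest)
```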

> A GraphQL service usually collects data from several external services and/or a database (or even several databases)

Usually? That's just speculation. And that's entirely on the implementation of that service, it has nothing to do with GraphQL spec/technology itself.


> How are GET requests a crutch?

They were not in the original spec IIRC. URLs are limited in length (not by the spec, but most clients impose a limit), etc.

> Which is what any caching proxy must do anyway?

Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.

> This problem is not limited to GraphQL, it's a generic authorization problem.

GraphQL makes it significantly more complex though. Because your requests are ad-hoc.

> You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you.

Indeed. By adding more and more complexity. And no, tools only solve part of the problem: simply putting a dataloader on a server doesn't entirely solve the N+1 problem.
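For readers unfamiliar with the pattern: a dataloader collects the keys every resolver asks for, then issues one batched fetch instead of N single-row fetches. A toy synchronous Python sketch (class and function names are mine, not any real library's API):

```python
class NaiveLoader:
    """Toy dataloader: record keys during resolution, then issue one
    batched fetch instead of N. `batch_fn` takes a list of keys and
    returns a {key: value} dict."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.pending = []
        self.cache = {}

    def want(self, key):
        if key not in self.cache and key not in self.pending:
            self.pending.append(key)

    def dispatch(self):
        if self.pending:
            self.cache.update(self.batch_fn(self.pending))
            self.pending.clear()

    def get(self, key):
        return self.cache[key]

calls = []
def fetch_users(ids):
    calls.append(list(ids))  # pretend this is one SQL "WHERE id IN (...)"
    return {i: "user%d" % i for i in ids}

loader = NaiveLoader(fetch_users)
for uid in (1, 2, 3):
    loader.want(uid)
loader.dispatch()
```

Note this batches within one level of the query; deeply nested selections can still fan out into further batches, which is part of why a dataloader alone doesn't make the N+1 problem disappear.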

> But you can go another way and just create a list of approved queries.

Turning it into REST with none of the benefits of REST.

> Usually? That's just speculation. And that's entirely on the implementation of that service

It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.

[1] https://www.keycdn.com/blog/http-cache-headers


> They were not in the original spec IIRC. URLs are limited in length (not by the spec, but most clients impose a limit), etc.

They were not in the spec because the spec says nothing about the medium over which GraphQL is transported. In fact, the spec [1] mentions the word HTTP only 5 times: 4 times in example data and once when discussing implementation details of sending data over HTTP. GraphQL can't be faulted for the limits of the transport over which it is used.

> Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.

How do cache headers not work well with GraphQL GET requests? That is entirely up to the server that implements the API. If that server doesn't implement caching well, that's not GraphQL's fault.

> It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.

The main use case of GraphQL is any two things that want to exchange data with each other. Merging data from multiple data sources as its main use case is simply not true. The ability of GraphQL to merge different data sources is one of its abilities but it's not intrinsic to GraphQL.

> Turning it into REST with none of the benefits of REST.

And what exactly are those benefits? I'm here defending GraphQL yet none of the downsides of REST are being taken into account. GraphQL brings structure where there was none, that alone is a significant reason to choose GraphQL to structure your API.

> N+1 problem

There are tools like Postgraphile that solve this. It converts your GraphQL query into one efficient database query.

> ad-hoc queries hammering your database

And what prevents anyone from hammering a REST API? GraphQL doesn't release the developer from implementing sane constraints - something that has to happen with any API implementation and not specific to GraphQL.

[1] http://spec.graphql.org/June2018/


> They were not in spec because the spec doesn't say anything over which medium

If not the spec, then original documentation. GET is a late add-on.

> How do cache headers not work well with GraphQL GET requests?

In REST:

- a resource is uniquely identified by its URI

- when the server sends back cache headers, any client in between (any proxies, the browser, any http clients in any programming language etc.) can and will use these cache headers to cache the request

In GraphQL GET:

- http://myapi/graphql?query={user{id,name}} and http://myapi/graphql?query={user{name,id}} are two different requests

- it gets worse for more complex queries, especially if they are dynamically constructed on the client

- each of those is viewed as a separate query with separate caching

- cache normalisation and query normalisation are a thing in the GraphQL world (and non-existent in REST) because of that.

That's yet another layer of complexity that you have to deal with.
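To show what that normalisation entails even in the easiest case, here's a Python sketch that canonicalises a tiny subset of GraphQL (bare selection sets only, no arguments; the parser is mine) so that field order stops mattering for the cache key:

```python
def parse(s, i=0):
    """Parse a tiny subset of GraphQL selection sets: {name,name{...}}.
    Whitespace must be stripped beforehand; arguments aren't supported."""
    assert s[i] == "{"
    i += 1
    fields, name = {}, ""
    while s[i] != "}":
        c = s[i]
        if c.isalnum() or c == "_":
            name += c
            i += 1
        elif c == "{":
            fields[name], i = parse(s, i)   # recurse into sub-selection
            name = ""
        elif c == ",":
            if name:
                fields[name] = None
                name = ""
            i += 1
        else:
            raise ValueError("unexpected %r" % c)
    if name:
        fields[name] = None
    return fields, i + 1

def cache_key(query):
    """Canonical cache key: field names sorted at every nesting level."""
    def render(fields):
        return "{" + ",".join(
            k + (render(v) if v else "") for k, v in sorted(fields.items())
        ) + "}"
    tree, _ = parse("".join(query.split()))  # strip all whitespace
    return render(tree)
```

Even this toy handles only reordered fields; real normalisers also have to deal with aliases, arguments, variables, and fragments before two equivalent queries hash the same.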

> And what exactly are those benefits? I'm here defending GraphQL yet none of the downsides of REST are being taken into account.

I wish anyone was willing to discuss the downsides of GraphQL. Bashing REST is the norm, but GraphQL is the holy grail that accepts no criticism.

Benefits of REST over GraphQL, off the top of my head:

- it's HTTP, plain and simple. So everything HTTP has to offer is directly available in REST. See this HTTP decision diagram [1]

- caching doesn't require you to normalise and unpack every single request and response just to figure out if something is cached

- You know your requests, so you can provide optimised queries, resolution strategies, necessary calls to external services as required by the call

> There are tools like Postgraphile that solve this. It converts your GraphQL query into one efficient database query.

I'd love to see that proven for any sufficiently complex and large database.

> And what prevents anyone from hammering a REST API?

The absence of ad-hoc queries. You know every request a REST API can receive, so you can specifically tune each one for the exact data it needs.

GraphQL requires significantly more care especially if you're not running it on just one database. And even then, oops, joins: https://qht.co/item?id=25014918

And we're back to requiring the graphql server to be able to limit recursion depth, query complexity, etc. etc.
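The depth-limiting part, at least, is cheap to sketch. A crude Python version that tracks brace nesting in the raw query text (a production server would walk the parsed AST, and would also weigh list fields and complexity, not just depth; the cutoff is arbitrary):

```python
MAX_DEPTH = 8  # arbitrary cutoff for this sketch

def selection_depth(query):
    """Crude depth check: deepest brace nesting in the raw query text."""
    depth = deepest = 0
    for ch in query:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth -= 1
    return deepest

def reject_too_deep(query):
    return selection_depth(query) > MAX_DEPTH
```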

[1] https://github.com/for-GET/http-decision-diagram/tree/master...



