I came here to say the same thing. For those of you who aren't going to bother reading the Rdio API spec (I did, because there's something I want to hack together with it): the whole API is literally just a single URI that accepts a "method" param, where the method is a list of verbs like "get all tracks in this playlist" or "add a friend".
There's a case to be made for relaxing the definition of REST to accept "just a bunch of URIs" APIs that don't follow the discoverability/transparency conventions of REST. But you can't call "here's a list of verbs" a REST interface.
What you'd want to see in a Rdio REST interface would be:
It implies that a resource is different just because you are using a newer version of the API.
My own most recent implementations use Accept: and Content-Type to define version compatibility, which is what content negotiation is supposed to be about.
Versioning in the old web-API sense meant changing the way the protocol works; REST will always be REST, and the resources will always be the same resources.
All that is likely to change between 'API versions' is the formats, which you negotiate with headers, and new fields, which old clients can either ask not to receive through content negotiation or can safely ignore.
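As a rough sketch of versioning through content negotiation (the vendor media type and the `version` media-type parameter below are illustrative assumptions, not any particular API's convention), the server might pull the requested version out of the Accept header:

```python
# Illustrative sketch: select an API version from the Accept header.
# The "version" media-type parameter is an assumed convention here.

def negotiated_version(accept_header, default=1):
    """Return the version requested via a media-type parameter, if any."""
    for media_range in accept_header.split(","):
        parts = [p.strip() for p in media_range.split(";")]
        for param in parts[1:]:  # parameters after the media type itself
            key, _, value = param.partition("=")
            if key.strip() == "version" and value.strip().isdigit():
                return int(value)
    return default
```

An old client that sends a plain `Accept: application/json` simply gets the default, which matches the point above: old clients can opt out of, or ignore, newer fields.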
APIv2 is not the same resource as APIv1, so why send it to the same location? Similarly, APIv1/user/1 may very well not contain the same information as APIv2/user/1, so are they really the same resource? They likely refer to the same semantic resource, but if user/1 != user/1, they're not the same from a data standpoint, which is what APIs are generally used for.
It is the same resource. User #1 is the resource. The URL refers to the user, or the user's account. Unless the user creates a new account for use with API v2, the URL should be the same. API v1 and v2 may represent the resource differently, but they are the same resource and should have the same URL.
The correct method for selecting a specific representation is to use the Accept header on the request. For example, if I were requesting somedomain.com/Groxx I could specify Accept: image/png if I wanted a picture of you, or Accept: text/x-vcard if I wanted your contact information.
"The Accept request-header can be used to specify certain media types which are acceptable for the response."
Similar to how when you access some linked data URLs you can specify rdf/xml or rdf/n3 in the Accept header to receive the same data in a specific format.
It's not used to specify whether you want a picture of someone or their contact information all from the same URL.
The REST folk disagree with you. If the picture and the contact information are different representations of the same resource (which is not that much of a stretch conceptually), then they absolutely should be reached at the same URL. At that point, the only way you have left to distinguish what you want is the mime type.
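A minimal server-side sketch of that idea (the media types and bodies are placeholders, and real Accept handling would also honor q-value preferences, which this deliberately ignores):

```python
# One URL, several representations: dispatch on the Accept header.
# Placeholder bodies; q-value preferences are deliberately ignored.

REPRESENTATIONS = {
    "image/png": b"<png bytes>",
    "text/x-vcard": b"BEGIN:VCARD ...",
}

def respond(accept_header):
    """Return (status, content_type, body) for the first acceptable type."""
    for media_range in accept_header.split(","):
        media_type = media_range.split(";")[0].strip()
        if media_type in REPRESENTATIONS:
            return 200, media_type, REPRESENTATIONS[media_type]
    return 406, "text/plain", b"Not Acceptable"
```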
Out of curiosity: how do the REST folk feel about differentiating between resources with URLs like /user/1.json and /user/1.html? That has been pretty popular, from the little I've seen, and has the advantage of being GET-able with a browser, which header modification is not.
You should use content negotiation for this. The client should have an Accept header that specifies what it wants (which could be "application/json" but really should be something that tells you more about the contents than simply how it can be parsed, such as "application/vnd.mything.MyResource+json" which would be documented in your API with what fields it contains and what it means etc.).
You should also question whether you really need multiple representations for each resource -- in many cases, it's more trouble than it's worth to automatically generate, say, both JSON and XML responses, since it just divides the attention you could spend on creating sensible serializations for your API.
If you expect to have clients which cannot modify the headers of their requests, you can support fallbacks like specifying the Accept header in a query parameter. Putting it in the actual URL, though, is doing a disservice to fully compatible HTTP clients.
But if JSON is so inherent to the resource that content negotiation doesn't even matter, it can be fine to include that in the URL, as you might include descriptive filename extensions like ".png" or ".wav" in a URL.
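A sketch of that fallback (the `format` parameter name and its mapping are assumptions, not any particular API's spec): check the query string first, and only then fall back to the Accept header.

```python
# Hypothetical fallback: a "format" query parameter overrides Accept
# for clients that cannot set request headers.

from urllib.parse import urlparse, parse_qs

FORMATS = {"json": "application/json", "xml": "application/xml"}

def effective_media_type(url, accept_header="application/json"):
    query = parse_qs(urlparse(url).query)
    if "format" in query:
        return FORMATS.get(query["format"][0], accept_header)
    return accept_header
```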
That is still a case of using two URIs to refer to the same resource. If you use one URI per resource, URIs (and bookmarks) can be shared between clients dealing with different media types. Suffixed URIs require mangling to request a different media type.
Clearly it is a pragmatic trade-off and is popular (Google and Twitter do it). The correct answer will be arrived at by considering the total architecture of your application.
As soon as you decide to create two URLs such as /user/1.json and /user/1.html, you are creating two distinct resources. The litmus test for determining whether two URLs refer to the same resource is to do a GET on both: only one of them returns a 200, and the other returns a 303 with the Location header set to the URL that returned the 200.
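Sketched in code (the URLs are illustrative), the canonical-URL arrangement that passes this litmus test looks like:

```python
# /user/1 is the one true resource; suffixed URLs redirect to it with 303.

def handle_get(path):
    """Return (status, headers) for a GET against these illustrative URLs."""
    if path == "/user/1":
        return 200, {"Content-Type": "application/json"}
    if path in ("/user/1.json", "/user/1.html"):
        return 303, {"Location": "/user/1"}
    return 404, {}
```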
That's a good reason to differentiate by header (plus no .jpg vs .jpeg shenanigans in the URL). Though I can't say I've ever heard of it being used... I'll have to keep it in mind if I'm ever architecting something that anticipates many different kinds of clients in a large API.
If version 1 of a resource is not a subset of version 2 of a resource, and they are not accessing the same record, then they should be separate resources.
What 'API' is signaling is what the client is capable of understanding, which is what content negotiation headers are for.
I think of it as no different from serving the same resources to web browsers (for humans): I don't place new designs of a website into /design1/, /design2/, etc.
Bear in mind the more things like "Well, you just have to properly use the content negotiation headers" you add to your API, the more the "simplicity" argument of REST goes out the window. I'm not sure I've ever seen really correct use of the content-negotiation headers and claiming it's a solution is dubious... sort of like how I'm still waiting to see proper use of XML Namespaces in the wild. They seem relatively simple to me but the evidence suggests strongly otherwise.
That doesn't make REST "bad", but I think it does start to make the quasi-religious aspect of it less obviously correct.
That's part of my reasoning. Setting proper headers is, yes, more proper... and something essentially nobody does, and which many simple API libraries don't support without making their use massively more complicated than prefixing all the URLs with "api/2".
It's a correctness vs. usability debate. I'd love to go for correct, but it's extremely painful to enforce on your users in practice.
Using content negotiation, I can--if the server supports it--take the same URI (e.g. http://www.jerf.org/) and pass it to an image viewer to see your face, to my address book to fetch your contact info, etc.
Anything else requires the client knowing how to mangle URIs (e.g. adding /avatar.jpg, /contact.vcf).
I know what content negotiation is; I've never seen it done right. The theory is all beautiful but in practice content negotiation never seems to work right because you never get two people to agree very well on what things are. Another manifestation of the usual semantic classification problem.
It has a dubious track record in X as well. X has been able to do content negotiation for decades for the clipboard but even now it's still spotty and buggy, and that's with everything actually running on the same system. It looks easy. Heck, it looks very easy. It isn't. I can't even tell you exactly why it isn't as easy as it looks (beyond the semantic thing I mentioned above), but the state of the world is pretty clear.
@awj: if your API does HATEOAS properly, so that your responses inline URLs to other related resources (aiding discoverability of your API through hypertext), then the client can naively take the URLs given to it and cleanly apply Accept headers to them as appropriate.
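A toy version of that flow (field names like "links" and the URLs are invented for illustration): the client never constructs a URL itself; it reads one out of the response and attaches whatever Accept header suits its purpose.

```python
import json

# A response that inlines URLs for related resources (invented shape).
response_body = json.dumps({
    "id": 1,
    "name": "Groxx",
    "links": {
        "avatar": "/user/1/avatar",
        "contacts": "/user/1/contacts",
    },
})

def next_request(body, rel, accept):
    """Build (url, headers) for a linked resource without hardcoding URLs."""
    links = json.loads(body)["links"]
    return links[rel], {"Accept": accept}
```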
If you don't want to use content negotiation, then stuff it into query parameters - that is what most APIs do (and what most that support full content negotiation do anyway).
E.g. ?v=1&format=xml
But it's interesting to note: why is encoding easy to do as part of content negotiation, yet everything else is hard? You never see ?e=UTF-8 or &c=gzip.
Many APIs simply specify that the encoding is constant (always UTF-8) so this is not an issue. And many HTTP libraries simply handle the content encoding transparently.
The challenge is that some HTTP library APIs commonly used for developing clients offer only the following interface:
- GET $URL
If you are lucky, it might also support POST and offer a second parameter for the request body. If you are really lucky, there is a way to specify arbitrary methods, but this is fairly rare.
I'm all in favor of supporting fallbacks for more limited clients. But query params are not a great idea since they can prevent caching from happening.
My point was that it's not a good thing for requests for cacheable content, which might not be cached because you used a query param to access it. Some caches ignore requests with query params, since "queries" are often not static.
I completely agree from a pure REST perspective, but if you want developers to be able to quickly get up to speed with your API, requiring them to know the finer points of HTTP and how to manipulate headers within their clients is asking a lot.
Is it, though? They are developers, developing applications for the web. I don't think it's a huge barrier to expect people to learn the basic protocols, and tools like curl, reasonably well.
Do you have full control over the clients which make requests (e.g. it's only AJAX or 3rd-party clients written to your API spec - i.e. not browsers making raw requests against some of your URLs)?
I backed away from using content negotiation for an API, part of which would have served data and part of which would have served content.
E.g. to access file metadata, GET /path/to/file with Accept: api/v1, and do a 'normal' GET to get the file contents.
I came across suggestions that in some browsers (recent IE, I think, though I'm unsure) caching was broken with respect to the Accept header - which made me worry that people accessing the resource might get the wrong version.
I don't know if your API is vulnerable to this, whether you've come across the problem, or whether you have any deployment wisdom to share.
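For what it's worth, the HTTP-level answer to that worry is the Vary response header: declaring "Vary: Accept" tells caches to key the stored entry on the request's Accept header, so two clients negotiating different representations of the same URL don't poison each other's cache (though a browser whose Accept-based caching is genuinely broken may still misbehave). A sketch:

```python
# Responses whose body depends on Accept should declare that dependency,
# so conforming caches store one entry per negotiated representation.

def response_headers(content_type):
    return {
        "Content-Type": content_type,
        "Vary": "Accept",
    }
```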
The problem is that the data model may change between versions, so the URL hierarchy will change. For instance, you may have split customer and staff into separate concepts but later decide your service is better imagined as a network, and thus only have users.
the idea is that you can build a generic 'REST' client and point it to almost anything that is a 'REST API' and it will work, in the same way we can now with web browsers
a REST client will never figure out /api/1/ - which is why resources go in the path and query parameters go in as, well, query parameters :)
From the server side you are right that modern front controllers and routing can make it all the same (heck, I can route resources to different subdomains if I want), but a proxy does not know what the front-controller logic is (i.e. working backwards from which part is what).
> the idea is that you can build a generic 'REST' client and point it to almost anything that is a 'REST API' and it will work, in the same way we can now with web browsers
That's a stupid idea. REST is built on two things: content types and resource location. The content types indicate the meaning and location of the resources; if the client does not understand the content type, it cannot do anything.
You cannot "build a generic 'REST' client and point it to almost anything that is a 'REST API' and it will work"; that makes no sense. Indeed, that's not what web browsers do: the driver of the API on a website is the user, not the browser.
> a REST client will never figure out /api/1/ - which is why resources go in the path and query parameters go in as, well, query parameters :)
> That's a stupid idea. REST is built on two things: content types and resource location.
here is what my generic REST client looks like:
* HTTP[s]
* JSON
* Atom and RSS
* Caching layer
some controller logic in between all of these. I inherit from this to implement simple models and controllers in the client and just tell it which resources I want (like fields in a database)
how is that not a generic client?
For Twitter I have classes called users and statuses, with each method defined. For Google I have similar classes. This all took minutes for me to get working - I thought that was the entire point of REST?
Read the original dissertation, sec. 5.2.2, Connectors:
Sorry sir, I'm not wrong; those would be perfectly valid as part of a REST service, including as alternatives to tptacek's description. Readability of the URLs has nothing at all to do with RESTfulness.
The only thing REST says about the data transmitted (the URL in this case) is that it's the container for the application's state, and the common interpretation of this is that the URL is entirely opaque. And indeed, Fielding has suggested/explained this several times.
But just because it's valid doesn't mean it's in any way good. Those are horrible examples. So no, those URLs could not "just as well" be those nasty examples you give. They could, much more poorly, be those examples, and still be technically valid. By the same reasoning, "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" is just as good English as Shakespeare.
PLEASE do not use the Wikipedia article as a citation. It's historically so full of misunderstandings that even Roy Fielding himself has said it's nonsensical. This is a case where Wikipedia isn't quite "good enough" to let slip as a proper source for an argument.
Your arguments about nice-looking URLs in fact have absolutely nothing to do with REST, except that REST says you must use your transport protocol according to its standards, where appropriate. It says nothing about how URLs should look. If you're talking about the HTTP spec, call it by its proper name, please. We don't gain much from conflating different terms like this; it's just a source of confusion.
All query string parameters are also used to identify the resource. RFC 3986 states:
"The query component contains non-hierarchical data that, along with data in the path component (Section 3.3), serves to identify a resource within the scope of the URI's scheme and naming authority (if any)."
There is absolutely no difference between path segments and query parameters from the perspective of resource identification.
A good rule of thumb here seems to be to put identity in the path and other things in the query string. For example, to search for items with property foo equal to 12 in the music collection known as bar, the url would be /music/bar?foo=12
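Following that rule of thumb, URL construction stays mechanical. A small sketch (the /music paths echo the example above):

```python
from urllib.parse import urlencode

def search_url(collection, **criteria):
    """Identity in the path, non-hierarchical filters in the query string."""
    return "/music/%s?%s" % (collection, urlencode(sorted(criteria.items())))
```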
What you'd want to see in a Rdio REST interface would be:

* /rdio/api/1/playlists
* /rdio/api/1/playlist/foo
* /rdio/api/1/artist/1
* /rdio/api/1/artist/1/tracks/2

&c &c &c. But, from what I can tell, none of the state inside Rdio's database is mapped to a URI.