RESTful way of referencing other resources in the request body - api

Let's assume that I have a resource called group with the following representation:
{
  "id": 1,
  "name": "Superheroes",
  "_links": {
    "self": {
      "href": "http://my.api.com/groups/1"
    }
  }
}
Now let's say I want to create a new person instance by POSTing to /persons. Which of the following should I use for the request body?
Using ID
{
  "name": "Batman",
  "groupId": 1
}
Using link
{
  "name": "Batman",
  "group": "http://my.api.com/groups/1"
}
With the first method I can use the id directly, either to look up the related resource or to store it in the database when I persist the person instance. With the second method I either have to extract the id from the URI, or follow the link to load the related resource and then read its id from there. I really don't want to store the URI in the database.
With the latter option, seeing that the server controls the structure of the URI, is it fine for me to parse the id out of the link? Following the link back to the server itself seems odd, seeing that at this point we already have access to the information directly (we just need the id).
So to sum up, which of these options is best?
Use the id directly.
Use the link, but parse out the id.
Use the link, but access the link to get the resource instance, and then get the id.

TL;DR: Use simple ids.
More detailed explanation:
A straightforward approach is to create a person by POSTing to /groups/1/persons with a payload {"name": "Batman"}.
However, while such an approach works for simple cases, the situation gets complicated if there are two resources that need to be referenced. Let's assume that a person also needs to belong to exactly one company:
GET /persons/1
{
  "name": "Batman",
  "group": 1,   // Superheroes, available at /groups/1
  "company": 5  // Wayne Enterprises, available at /companies/5
}
Since there is no relationship between companies and groups, it is not semantically correct to create a person via POSTing to /groups/1/companies/5/persons or to /companies/5/groups/1/persons.
So let's assume you want to create a person with a request looking like this:
POST /persons
{
  "name": "Batman",
  "group": ???,  // <--- What to put here?
  "company": ??? // <--- What to put here?
}
Which brings us to the answer to your question:
Ease of use. Your API should be designed primarily for ease of use. This is especially true if you design a public API. Therefore, Option 2 (use the link, but parse out the id) is out, since it imposes additional work on clients of your API.
Constructing search queries. If you want to be able to query persons which belong to company 10 and group 42, simple ids lead to more readable and less error-prone URLs. Which of the following do you consider more readable?
URL with a simple id:
GET /groups/42?company=10
or URL with a url-encoded link:
GET /groups/42?company=http%3A%2F%2Fmy.api.com%2Fcompanies%2F10
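The encoded form of the second URL can be reproduced with Python's standard library (the paths are the ones from the example above, not a real API):

```python
from urllib.parse import urlencode

# Query by simple id: short and readable.
by_id = "/groups/42?" + urlencode({"company": 10})

# Query by link: the URL must be percent-encoded to be a legal query value.
link = "http://my.api.com/companies/10"
by_link = "/groups/42?" + urlencode({"company": link})

print(by_id)    # /groups/42?company=10
print(by_link)  # /groups/42?company=http%3A%2F%2Fmy.api.com%2Fcompanies%2F10
```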
I wouldn't underestimate the point of readability. Think of how often you debug your API with curl, in logs, in Postman, and so on.
Development. Links need to be parsed on the backend, while simple ids can be used directly. It's not about performance, but about the additional work and tests you have to put in.
Endpoint maintenance. Imagine that your API endpoints evolve: one day you decide to switch to https, or to include versioning in the URL. This can break API clients if they rely for some reason on the structure of the links. You would also have to check that link parsing on your backend still works properly.
Argumentum ab auctoritate. I know this is not a proper argument, but if you check out the APIs of large players, e.g. Twitter, GitHub or Stripe, they all use simple ids.
HATEOAS. One common argument in favour of links is that they align with HATEOAS. However, as far as I know, HATEOAS is about additional links in API responses, rather than about using links in the payloads of POST requests.
All in all, I would go for simple ids, since I haven't yet heard a compelling argument for links that would outweigh the points above.
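To make the extra backend work concrete, here is a minimal sketch of what a server accepting both styles would have to do. The URL pattern /groups/&lt;id&gt; is taken from the example; the function name and regex are hypothetical:

```python
import re

def group_id_from_payload(payload: dict) -> int:
    """Resolve the group reference in a POST /persons payload."""
    ref = payload["group"]
    if isinstance(ref, int):
        # Option 1: a simple id can be used as-is.
        return ref
    # Options 2/3: a link must be parsed, and the parsing breaks if the
    # server ever changes its URL structure (e.g. by adding /v2/).
    match = re.fullmatch(r"https?://[^/]+/groups/(\d+)", ref)
    if match is None:
        raise ValueError(f"unrecognized group link: {ref!r}")
    return int(match.group(1))

print(group_id_from_payload({"name": "Batman", "group": 1}))                             # 1
print(group_id_from_payload({"name": "Batman", "group": "http://my.api.com/groups/1"}))  # 1
```

Note how a versioned URL such as http://my.api.com/v2/groups/1 would make the regex fail, which is exactly the endpoint-maintenance problem described above.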

You are missing two important things here.
You need a standard way to describe forms in the response, in this case your POST form.
The information about the group ids / uris, or how to get them has to be described in the form in a standard way.
For example, an HTML form with a select input would be RESTful. The closest thing we have in JSON to do the same is JSON-LD and Hydra. But if you are set on HAL, then use Hyperagent forms or something like that. It will never be a standard, but if compatibility is not an issue, then it is good enough.
To answer your question: you should use the id, because the server knows how to interpret it. The client needs the resource identifiers; the server needs them only in the URI part of the request, not in the body.

From my experience, it is always best to go with the simplest solution for making requests.
Generating a URL and parsing it seems excessive just to reference a resource, whereas sending the id of the item you want is much simpler.
Thus, I would send a request in the form:
{
  "name": "Batman",
  "group": 1
}

Related

Accessing site options via a REST API

I'm building a REST API powered SPA application and I'm trying to decide on the best way to deliver "global options" via the API. By global options I mean an assortment of random fields that relate to the application as a whole rather than being associated with one specific model, for example brand logos and contact details that need to be accessible from multiple locations within the app.
Something like WordPress would store these in an options table and access them via a PHP function using the option name; however, since this is a REST API, I'm not sure how I would go about accessing/updating multiple options without making a separate request for each one.
I know a lot of projects just use a JSON file to store this data, but it specifically needs to be editable via a CMS and served via the API. The following are two methods that have come to mind, but neither feels like a complete solution:
1: An options table with one generic endpoint that takes a query string specifying which fields you want to access. This works for getting data, however updating data seems to get a bit messy, and the only way I can think to do this is by sending an object of key/value pairs to bulk create or update the options:
GET: example.com/api/options?pick=logo,contact_phone,contact_email
POST: example.com/api/options
{
  contact_email: "info@example.com",
  contact_address: "123 Test St"
}
PUT: example.com/api/options
{
  contact_email: "info@example.com",
  contact_address: "123 Test St"
}
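A minimal sketch of option 1, with an in-memory dict standing in for the options table; the field names come from the example above, while the handler names and the store itself are hypothetical:

```python
# In-memory stand-in for the options table.
OPTIONS = {
    "logo": "/media/logo.png",
    "contact_phone": "555-0100",
    "contact_email": "info@example.com",
}

def get_options(pick: str) -> dict:
    """GET /api/options?pick=logo,contact_phone -- return only the picked fields."""
    names = [n for n in pick.split(",") if n]
    return {name: OPTIONS[name] for name in names if name in OPTIONS}

def put_options(payload: dict) -> dict:
    """PUT /api/options -- bulk upsert from a key/value object."""
    OPTIONS.update(payload)
    return {name: OPTIONS[name] for name in payload}

print(get_options("logo,contact_email"))
put_options({"contact_address": "123 Test St"})
print(get_options("contact_address"))
```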
2: Breaking fields into groups and storing them as a JSON field in a "pages" table. This solves the creating and updating issues, but it breaks down when fields are used in multiple locations, and since these aren't really pages it's not very RESTful.
GET: example.com/api/pages/contact
POST: example.com/api/pages
{
  name: "contact",
  values: {
    email: "info@example.com",
    address: "123 Test St"
  }
}
PUT: example.com/api/pages/contact
{
  values: {
    email: "info@example.com",
    address: "123 Test St"
  }
}
I also need to take into account the issue of access permissions: for example, the logo field would be accessible to the public, but the user support contact number would only be accessible to logged-in users. With the first example you could have an extra permission column for each option, but this wouldn't work for the second approach.
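The per-option permission column could look like the following sketch, where a boolean "public" flag (a hypothetical column name) is checked before each option is returned:

```python
# Each option carries a visibility flag, mirroring a "permission" column.
OPTIONS = {
    "logo":          {"value": "/media/logo.png", "public": True},
    "support_phone": {"value": "555-0199",        "public": False},
}

def visible_options(authenticated: bool) -> dict:
    """Return only the options the caller is allowed to see."""
    return {
        name: opt["value"]
        for name, opt in OPTIONS.items()
        if opt["public"] or authenticated
    }

print(visible_options(authenticated=False))  # only the public "logo"
print(visible_options(authenticated=True))   # both options
```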
I've been googling but have failed to find any good information about this topic, as REST schemas/documentation generally only deal with concrete entities, so any insight into how this is achieved in real-world applications would be great!
Cheers,
Cam

RESTful API HATEOAS

I've come to the conclusion that building a truly RESTful API, one that uses HATEOAS, is next to impossible.
Every piece of content I've come across either fails to illustrate the true power of HATEOAS or simply does not explicitly mention the inherent pain points of the dynamic nature of HATEOAS.
What I believe HATEOAS is all about:
From my understanding, a truly HATEOAS API should have ALL the information needed to interact with it, and while that is possible, it is a nightmare to use, especially across different stacks.
For example, consider a collection of resources located at "/books":
{
  "items": [
    {
      "self": "/book/sdgr345",
      "id": "sdgr345",
      "name": "Building a RESTful API - The unspoken truth",
      "author": "Elad Chen ;)",
      "published": 1607606637049000
    }
  ],
  // This describes every field needed to create a new book,
  // just like HyperText Markup Language (i.e. HTML) rendered on the server does with forms
  "create-form": {
    "href": "/books",
    "method": "POST",
    "rel": ["create-form"],
    "accept": ["application/x-www-form-urlencoded"],
    "fields": [
      { "name": "name", "label": "Name", "type": "text", "max-length": "255", "required": true },
      { "name": "author", "label": "Author", "type": "text", "max-length": "255", "required": true },
      { "name": "published", "label": "Publish Date", "type": "date", "format": "dd/mm/YY", "required": true }
    ]
  }
}
Given the above response, a client (such as a web app) can use the "create-form" property to render an actual HTML form.
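For illustration, a client could turn the "create-form" descriptor into an HTML form with something like the following sketch (only the attributes used in the example are handled, and the helper name is made up):

```python
# A trimmed-down version of the "create-form" descriptor from the response.
CREATE_FORM = {
    "href": "/books",
    "method": "POST",
    "fields": [
        {"name": "name", "label": "Name", "type": "text", "required": True},
        {"name": "author", "label": "Author", "type": "text", "required": True},
    ],
}

def render_form(form: dict) -> str:
    """Build an HTML form from a hypermedia form descriptor."""
    inputs = "\n".join(
        f'  <label>{f["label"]}'
        f' <input name="{f["name"]}" type="{f["type"]}"'
        f'{" required" if f.get("required") else ""}></label>'
        for f in form["fields"]
    )
    return f'<form action="{form["href"]}" method="{form["method"]}">\n{inputs}\n</form>'

print(render_form(CREATE_FORM))
```

If the server later adds or renames a field, the rendered form changes with it and no client code needs to be touched.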
What value do we get from all this work?
The same value we've been getting from HTML for years.
Think about it, this is exactly what hypertext is all about, and what HTML has been designed for.
When a browser hits "www.pizza.com", it has no knowledge of the other paths a user can visit. It does not concatenate strings to produce a link to the order page -> "www.pizza.com/order"; it simply renders anchors and navigates when a user clicks them. This is what allows web developers to change the path from "/order" to "/shut-up-and-take-my-money" without changing any client (browsers).
The above idea is also true for forms, browsers do not guess the parameters needed to order a pizza, they simply render a form and its inputs, and handle its submission.
I have seen too many lines of code in front-ends and back-ends alike that build strings like -> "https://api.com" + "/order". You don't see browsers do that, right?
The problems with HATEOAS
Given the above example (the "/books" response), clients are expected to parse the response in order to create a new book and leverage the true power of this RESTful API; otherwise they risk assuming what the names of the fields are, which of them are required, what their expected types are, and so on.
Now consider having two clients within your company that use this API: one for the web (browsers) written in JS, and another for mobile (say an Android app) written in Java. These can be published as SDKs, hopefully giving 3rd-party consumers an easier integration.
But as soon as the API is used by clients outside your control - say a 3rd-party developer with an affinity for Python who wants to create a new book - that developer is REQUIRED to parse such a response to figure out what the parameters are, their names, the URL to send inputs to, and so on.
In all my years of developing I have yet to come across an API such as the one I have in mind.
I have a feeling this type of API is nothing more than a pipe dream, and I was hoping to understand whether my assumptions are correct, and what downfalls it brings before starting the implementation phase.
P.S. In case it's not clear, this is exactly what a HATEOAS-compliant API is all about: when the fields to create a book change, clients adapt without breaking.
On the Hypermedia Maturity Model (HMM), the example you give is at Level 0. At this level, you are absolutely correct about the problems with this approach. It's a lot of work and developers are probably going to ignore it and hard-code things anyway. However, with a generic hypermedia enabled media type, not only does all that extra work go away, it actually reduces the work for developers.
Let's take a step back for a moment and consider how the web works. There are three main components: the web server, the web browser, and the driver (usually a human user). The web server provides HTML, which the web browser executes to present a graphical user interface, which the driver can use to follow links and fill out forms. The browser uses the HTML to completely abstract from the driver all the details about how to present the form and how to send it over HTTP.
In the API world, this concept of the generic browser that abstracts away the media type and HTTP details hasn't taken hold yet. The only one I know about that is both active and high quality is Ketting. Using a browser like Ketting removes all of that extra work the developer would have to put into making use of all that hypermedia. A hypermedia browser is like an SDK that API vendors often provide, except that it works for any API. In theory, you can link from one API to another completely unrelated API. APIs would no longer be islands; they would become a web.
The thing that makes hypermedia browsers possible are general purpose hypermedia enabled media types. HTML is of course the most successful and famous example, but there are JSON based media types as well. Some of the more widely used examples are HAL and Siren.
The higher level a media type is on the Hypermedia Maturity Model, the more a generic browser can do to abstract away the media type, URIs, and HTTP details. Here's a brief explanation; check out the blog post linked above for more details and examples.
Level 0: At this level, hypermedia is encoded in an ad-hoc way. A browser can't do much with this because every API might encode things a little differently. At best a browser can use heuristics or AI to guess that something is a link or a form and treat it as such, but generally HMM Level 0 media types are intended for developers to read and interpret. This leads to many of the challenges you identified in your question. Examples: JSON and XML.
Level 1: At this level, links are a first class feature. The media type has a well defined way to represent a link. A browser knows unambiguously what is to be interpreted as a link and can provide an interface to follow that link without the user needing to be concerned about URI or HTTP. This is sufficient for read-only APIs, but if we need the user to provide input, we don't have a way to represent a form-like hypermedia control. It's up to a human to read the documentation or an ad-hoc form representation to know how to submit data. Examples: HAL, RESTful JSON.
Level 2: At this level, forms (or form-like controls) are a first class feature. A media type has a well defined way of representing a form-like hypermedia control. A browser can use this media type to do things like build an HTML Form, validate user input, encode user input into an acceptable media type, and make an HTTP request using the appropriate HTTP method. Let's say you want to change your API to support PATCH and you prefer that application start using it over PUT. If you are using a HMM Level 2 media type, you can change method in the representation and any application that uses a hypermedia browser that knows how to construct a PATCH request will start sending PATCHs instead of PUTs without any developer intervention. Without a HMM Level 2 media type, you're stuck with those PUTs until you can get all the applications that use your API to update their code. Examples: HTML, HAL-Forms, Siren, JSON Hyper-Schema, Uber, Mason, Collection+JSON.
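The PUT-to-PATCH point can be sketched with a toy client that takes the method from the hypermedia control instead of hard-coding it; the control shape below is illustrative, not any particular standard:

```python
def submit(control: dict, data: dict) -> dict:
    """Build an HTTP request purely from a hypermedia control.

    The client hard-codes nothing: if the server changes "method" from
    PUT to PATCH in its responses, the next request uses PATCH without
    any change to client code.
    """
    return {"method": control["method"], "url": control["href"], "body": data}

control_v1 = {"href": "/books/1", "method": "PUT"}
control_v2 = {"href": "/books/1", "method": "PATCH"}  # server-side change only

print(submit(control_v1, {"name": "x"})["method"])  # PUT
print(submit(control_v2, {"name": "x"})["method"])  # PATCH
```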
Level 3: At this level, in addition to hypermedia controls, data are also self-describing. Remember those three main components I mentioned? The last one, "driver", is the major difference between using hypermedia on the web and using hypermedia in an API. On the web, the driver is a human (excluding crawlers for simplicity), but with an API, the driver is an application. Humans can interpret the meaning of what they are presented with and deal with changes. Applications may act on heuristics or even AI, but usually they are following a fixed routine. If something changes about the data that the application didn't expect, the application breaks. At this level, we apply semantics to the data using something like JSON-LD. This allows us to construct drivers that are better at dealing with change and can even make decisions without human intervention. Examples: Hydra.
I think the only downside to choosing to use hypermedia in your API is that there aren't production-ready HMM Level 2 hypermedia browsers available in most languages. But, the good news is that one implementation will cover any API that uses a media type it supports. Ketting will work for any API and application that is written in JavaScript. It would only take a few more similar implementations to cover all the major languages and choosing to use hypermedia would be an easy choice.
The other reason to choose hypermedia is that it helps with the API design process. I personally use JSON Hyper-Schema to rapid-prototype APIs and use a generic web client to click through links and forms to get a feel for API workflows. Even if no one else uses the schemas, for me, it's worth having even if just for the design phase.
Implementing a HATEOAS API needs to be done both on the server and on the clients, so this point you make in the comments is very valid indeed:
changing a resource URI is risky given I don't believe clients actually "navigate" the API
Besides the World Wide Web, which is the best implementation of HATEOAS, I only know of the SunCloud API on the now decommissioned Project Kenai. Most APIs out there don't quite make use of HATEOAS, but are just a bunch of documented URLs where you can get or submit specific resources (basically, instead of being "hypermedia driven", they are "documentation driven"). With this type of API, clients don't actually navigate the API, they concatenate strings to go at specific endpoints where they know they can find specific resources.
If you expose a HATEOAS API, developers of client applications can still look at the links you return and may decide to build them on their own, because they can figure out what the API is doing; they then think they can bypass any other navigation that might be needed and go straight for the URL, because it is always /products/categories/123 - until, of course, it isn't anymore.
A HATEOAS API is more difficult to build and adds complexity to both the server and the clients, so when deciding to build one, the questions are:
do you need the flexibility HATEOAS is giving you to justify the extra complexity of the implementation?
do you want to make it easier or harder for clients to consume your API?
does the knowledge and discipline to make it all work exist on both sides (server developers and clients developers)?
Most of the time, the answer is no. In addition, many times the questions don't even get asked; instead people end up with the approach that is more familiar, because they've seen the sort of APIs people build, have used some, or built some before. Also, many times a REST API is just stuck in front of a database and doesn't do much but expose data from the database as JSON or XML, so there isn't much need for navigation there.
There is no one forcing you to implement HATEOAS in your API, and no one preventing you from doing so either. At the end of the day, it's a matter of deciding if you want to expose your API this way or not (another example: do you expose your resources as JSON, XML, or let the client choose the content type?).
In the end, there is always the chance of breaking your clients when you make changes to your API (HATEOAS or no HATEOAS), because you don't control your clients and you can't control how knowledgeable the client developers are, how disciplined they are, or how good a job they do implementing someone else's API.

REST API responses based on authentication, best practices?

I have an API with endpoint GET /users/{id} which returns a User object. The User object can contain sensitive fields such as cardLast4, cardBrand, etc.
{
  firstName: ...,
  lastName: ...,
  cardLast4: ...,
  cardBrand: ...
}
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone else's ID, then cardLast4 and cardBrand should be hidden.
I want to know what are the best practices here for designing my response. I see three options:
Option 1. Two DTOs, one with all fields and one without the hidden fields:
// OtherUserDTO
{
  firstName: ...,
  lastName: ... // cardLast4 and cardBrand hidden
}
I can see this getting out of hand with role-based DTOs: what if I now need UserDTOForAdminRole, UserDTOForAccountingRole, etc.? The number of potential DTOs quickly grows.
Option 2. One response object being the User, but null out the values that the user should not be able to see.
{
  firstName: ...,
  lastName: ...,
  cardLast4: null, // hidden
  cardBrand: null // hidden
}
Option 3. Create another endpoint such as /payment-methods?userId={userId} even though PaymentMethod is not an entity in my database. This will now require 2 api calls to get all the data. If the userId is not their own, it will return 403 forbidden.
{
  cardLast4: ...,
  cardBrand: ...
}
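Option 3's authorization rule is simple to sketch; the in-memory store and function names below are hypothetical:

```python
# In-memory stand-in for payment data keyed by user id.
CARDS = {1: {"cardLast4": "4242", "cardBrand": "visa"}}

def get_payment_methods(caller_id: int, user_id: int) -> tuple:
    """GET /payment-methods?userId={user_id} -- returns (status, body)."""
    if caller_id != user_id:
        # Requesting someone else's payment methods is forbidden outright.
        return 403, {"error": "forbidden"}
    return 200, CARDS.get(user_id, {})

print(get_payment_methods(caller_id=1, user_id=1))  # 200 with card data
print(get_payment_methods(caller_id=2, user_id=1))  # 403 forbidden
```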
What are the best practices here?
You're gonna get different opinions about this, but I feel that doing a GET request on some endpoint and getting a different shape of data depending on the authorization status can be confusing.
So I would be tempted, if it's reasonable to do this, to expose the privileged data via a secondary endpoint. Either by just exposing the private properties there, or by having 2 distinct endpoints, one with the unprivileged data and a second that repeats the data + the new private properties.
I tend to go for option 1 here, because an API endpoint is not just a means to get data. The URI is an identity, so I would want /users/123 to mean the same thing everywhere, and have a second /users/123/secret-properties
I have an API with endpoint GET /users/{id} which returns a User object.
In general, it may help to reframe your thinking -- resources in REST are generalizations of documents (think "web pages"), not generalizations of objects. "HTTP is an application protocol whose application domain is the transfer of documents over a network" -- Jim Webber, 2011
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone elses ID then cardLast4 and cardBrand should be hidden.
Big picture view: in HTTP, you've got a bit of tension between privacy (only show documents with sensitive information to people allowed access) and caching (save bandwidth and server pressure by using copies of documents to satisfy more than one request).
Cache is an important architectural constraint in the REST architectural style; that's the bit that puts the "web scale" in the world wide web.
OK, good news first -- HTTP has special rules for caching web requests with Authorization headers. Unless you deliberately opt in to allowing the responses to be re-used, you don't have to worry about caching.
Treating the two different views as two different documents, with different identifiers, makes almost everything easier -- the public documents are available to the public, the sensitive documents are locked down, operators looking at traffic in the log can distinguish the two different views because the logged identifier is different, and so on.
The thing that isn't easier: the case where someone is editing (POST/PUT/PATCH) one document and expecting to see the changes appear in the other. Cache-invalidation is one of the two hard problems in computer science. HTTP doesn't have a general purpose mechanism that allows the origin server to mark arbitrary documents for invalidation - successful unsafe requests will invalidate the effective-target-uri, the Location, the Content-Location, and that's it... and all three of those values have other important uses, making them more challenging to game.
Documents with different absolute-uri are different documents, and those documents, once copied from the origin server, can get out of sync.
This is the option I would normally choose - a client looking at cached copies of a document isn't seeing changes made by the server
OK, you decide that you don't like those trade offs. Can we do it with just one resource identifier? You immediately lose some clarity in your general purpose logs, but perhaps a bespoke logging system will get you past that.
You probably also have to dump public caching at this point. The only general purpose header that changes between the user allowed to look at the sensitive information and the user who isn't? That's the authorization header, and there's no "Vary" mechanism on authorization.
You've also got something of a challenge for the user who is making changes to the sensitive copy, but wants to now review the public copy (to make sure nothing leaked? or to make sure that the publicly visible changes took hold?)
There's no general purpose header for "show me the public version", so either you need to use a non standard header (which general purpose components will ignore), or you need to try standardizing something and then driving adoption by the implementors of general purpose components. It's doable (PATCH happened, after all) but it's a lot of work.
The other trick you can try is to play games with Content-Type and the Accept header -- perhaps clients use something normal for the public version (e.g. application/json), and a specialized type for the sensitive version (application/prs.example-sensitive+json).
That would allow the origin server to use the Vary header to indicate that the response is only suitable if the same accept headers are used.
Once again, general purpose components aren't going to know about your bespoke content-type, and are never going to ask for it.
The standardization route really isn't going to help you here, because the thing you really need is that clients discriminate between the two modes, where general purpose components today are trying to use that channel to advertise all of the standardized representations that they can handle.
I don't think this actually gets you anywhere that you can't fake more easily with a bespoke header.
REST leans heavily into the idea of using readily standardizable forms; if you think this is a general problem that could potentially apply to all resources in the world, then a header is the right way to go. So a reasonable approach would be to try a custom header, and get a bunch of experience with it, then try writing something up and getting everybody to buy in.
If you want something that just works with the out of the box web that we have today, use two different URI and move on to solving important problems.

Some general restful api design questions

A few general design questions:
Give the example here:
https://developers.google.com/+/api/latest/activities/list#nextPageToken
Why would the server return a token to retrieve the next paginated result? Doesn't this break the idea of being stateless?
Why not just pass a MySQL like LIMIT name=value as the parameters? The server now has to return the number of pages I suppose...what am I missing?
I read many but this one was of interest:
REST Web Services API Design
The second reply, offers the following examples.
GET http://api.domain.com/user/<id>
GET http://api.domain.com/users
PUT http://api.domain.com/user/<id>
POST http://api.domain.com/users
DELETE http://api.domain.com/user/<id>
Makes sense, but why are there two plural resources? Could one not assume that if "user" is queried and the id was NULL or not provided, then "all" was intended? Likewise for POST? If plural is for improved readability, why is there not a "users" resource for DELETE?
Ultimately, I understand REST to mean...representation of a single resource - using HTTP verbs (GET, PUT, POST, DELETE) to essentially manage that resource - similar to CRUD.
EDIT | Lastly I also wanted to ask why Google API sends the API version in the URI instead of using HTTP headers? Is there a reason? For backwards compat with older clients?
Comments?
Why would the server return a token to retrieve the next paginated result? Doesn't this break the idea of being stateless?
Using this kind of mechanism for paginated result sets is completely standard and does not break the idea of being stateless. Consider the following example.
Suppose GET /users?after=<after> (where after is optional) is supposed to return the list of all users in a paginated fashion, say <= 4 per page.
The first request a client makes is GET /users with a response that might look like the following (formatted as JSON).
{
  "users": [ "alex", "bob", "carter", "dan" ],
  "more_after": "dan"
}
In this example, the more_after property designates there may be more users left in the user list. So the client then requests GET /users?after=dan and gets a second response that looks like the following.
{
  "users": [ "edward", "frank" ]
}
The absence of the more_after property designates that this is the last page of users.
Now the question is: was the "dan" token used as the page separator something that breaks the "statelessness" property we want? Clearly the answer is no. The server doesn't have to remember anything between the two GET requests. There's no concept of a session. Any state that needs to persist between the two GET requests exists only client-side - that's the important distinction. It's completely acceptable - and often required - to have the client persist state between calls to the service.
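The statelessness is easy to see in a server-side sketch: the after token is just a value from the sorted user list, not a session handle (the names follow the example above):

```python
import bisect

USERS = ["alex", "bob", "carter", "dan", "edward", "frank"]  # stored sorted

def list_users(after=None, page_size=4):
    """GET /users?after=<after> -- no per-client state on the server."""
    # Find where to resume purely from the token in the request itself.
    start = bisect.bisect_right(USERS, after) if after else 0
    page = USERS[start:start + page_size]
    body = {"users": page}
    if start + page_size < len(USERS):
        # More results remain: tell the client what cursor to send next.
        body["more_after"] = page[-1]
    return body

print(list_users())             # first page, more_after == "dan"
print(list_users(after="dan"))  # last page, no more_after
```

Every request is answered from the request's own parameters plus the shared data set, so any server replica can serve any page.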

Retrieve the list of friends that did a custom action on a custom object in open graph 2

I would like to do something like Facepile using the Graph API with Open Graph 2 actions: given a custom action and a custom object, give me the friends (using my Facebook application) that did this action on this object.
The problem is that using FQL, I cannot query custom objects and actions. Using the graph API, I cannot find a way to intersect the list of my friends with the object I'm interested in.
The best I could do was the following using the batch mode of the graph API :
batch=[
// First we get the list of friends that are using my facebook application
{ "method": "GET", "relative_url": "fql?q=SELECT+uid+FROM+user+WHERE+uid+IN+(SELECT+uid1+FROM+friend+WHERE+uid2=me())+AND+is_app_user=1+LIMIT+0,49", "name": "friends"},
// Then query each friend to get the list of objects that went through my namespace:testaction
{ "method": "GET", "relative_url": "{result=friends:$.data.0.uid}/namespace:testaction" },
{ "method": "GET", "relative_url": "{result=friends:$.data.1.uid}/namespace:testaction" },
...
{ "method": "GET", "relative_url": "{result=friends:$.data.49.uid}/namespace:testaction" }
]
It's quite inefficient and does not fully resolve my issue since :
I still have to filter the results to get only the ones that match the object I want
If there is a large number of objects in namespace:testaction, I have to go through paging, doing more queries (I try to minimize the number of queries)
Do you see a better way to do this ?
This probably isn't exactly what you're looking for, but given that Facebook (AFAIK) doesn't provide, and will probably never provide, the ability to do this, I think you should simply store the information yourself and then query it from your own database. It would be like what you're doing in your question, but you can optimize it since it's your database.
I'm sure you thought about this already, but someone had to say it.
It's now possible to do this with one Graph API request:
GET https://graph.facebook.com/me/friends?limit=50&fields=name,namespace:testaction.limit(100)
see field expansion and updates to the graph API.
If the answer derickito gave is not enough, you should explore getting your app on the Facebook whitelist (aka becoming a partner) to get at some of the private Graph API where this functionality might exist, but which is not available to "normal" applications that are stuck using the public Graph API.