Why does HATEOAS not specify a schema for the request body?

A question about this already exists, but it is more tech-focused and doesn't have answers: Representing a request body on HATEOAS link
I like HATEOAS. I love using it in my frontend to check whether I can perform certain actions by checking whether a link exists, instead of having business logic on the client.
But what I do not understand is how HATEOAS can truly be useful in other scenarios. What if you have an "AddItemToBasket" link which needs a request body with some properties in it? The frontend would still need to know what that request body looks like, but HATEOAS doesn't tell you this.
This means you still have a dependency on API knowledge. I think lots of applications solve this problem with generated API clients or GraphQL, but that makes HATEOAS a hard sell.
Why use HATEOAS at all if the URL and HTTP method alone don't give you the full picture?

REST builds on standards (the uniform interface constraint), and currently there is no standard way to do this. The Hydra W3C Community Group is writing a specification for describing hypermedia APIs. It uses RDF and standard vocabularies such as schema.org, and you can write an API-specific vocabulary, which they call the API documentation. As far as I understand their model, you can attach parameters to the operations represented by hyperlinks, and you can use, for example, XSD types to add constraints (numbers, etc.) to those parameters. Writing this kind of formal documentation takes a lot more effort than usual, and as far as I know there are currently no general-purpose REST clients that could profit from it, so it does not make much sense to write such an API today, but it is possible if you want to.
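To make that concrete, here is a rough sketch in TypeScript of what a Hydra-flavoured response for your "AddItemToBasket" case could look like. The payload is simplified and the property names are only approximations of the Hydra vocabulary, not the exact terms:

```typescript
// Hypothetical response for GET /baskets/42 (simplified, Hydra-style).
// Field names are illustrative approximations, not the real Hydra vocabulary.
interface SupportedProperty {
  property: string;
  required: boolean;
  range: string; // e.g. "xsd:string", "xsd:integer"
}

interface Operation {
  title: string;
  method: "GET" | "POST" | "PUT" | "DELETE";
  target: string;
  expects?: { supportedProperty: SupportedProperty[] };
}

interface BasketResource {
  "@id": string;
  items: unknown[];
  operation: Operation[];
}

const basket: BasketResource = {
  "@id": "/baskets/42",
  items: [],
  operation: [
    {
      title: "AddItemToBasket",
      method: "POST",
      target: "/baskets/42/items",
      expects: {
        supportedProperty: [
          { property: "productId", required: true, range: "xsd:string" },
          { property: "quantity", required: true, range: "xsd:integer" },
        ],
      },
    },
  ],
};

// The client decides what it can do purely from the response:
const addItem = basket.operation.find(op => op.title === "AddItemToBasket");
if (addItem) {
  console.log(`Can add items via ${addItem.method} ${addItem.target}`);
  console.log("Expected body fields:", addItem.expects?.supportedProperty);
}
```

The client never hard-codes the URL, the method, or the body shape; it reads all three from the response, and if the operation is missing (for example because the user lacks permission) the button simply isn't rendered.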
As for why to use HATEOAS: it makes your API flexible and backward compatible. For example, if somebody does not have permission for an operation, you simply don't send a hyperlink for it in the response. You can always add new operations, and existing clients don't have to support them; they can focus on what they already know, and they won't break just because something extra is added. They don't have to know about the URI structures and the methods, which can change freely if the only things they depend on are the operation type and the parameters.

Related

How to generate functions from an API spec

I am very interested in Integration Platform as a Service.
As we know, it is possible to generate an API spec from an API. Is the opposite possible?
I want to write a piece of software that automatically creates functions for calling the endpoints of an API that has an OpenAPI spec. To achieve that, the software should consume the API spec and generate the code. Theoretically, this could be possible if the spec covers all endpoints, all parameters, and so on, but:
this is often not the case
specs are written differently from each other
My question is: what should my software consume in order to get reliable information about the API endpoints, parameters etc.? Is there a standard for that? Is the API spec the way to go?
Look at GitHub Copilot. It can generate pretty decent facsimiles of functions from an API spec. Just a word of caution: the functions might not be 100% accurate, so you'll still need to check over them.
In a properly designed web service there are at least four layers: presentation, application, domain, and infrastructure.
REST documentation describes only the presentation layer's outer surface (the HTTP interface and operations), so it is not possible to generate a complete web service from the REST API documentation alone. If your application has no logic beyond CRUD over a data structure, then it is possible, but in that case you are better off finding a database that already exposes a REST API with very good access control, and the problem is mostly solved.
If you have some sort of standard, machine-readable documentation like WADL or JSON-LD with Hydra, then you can generate a REST API skeleton for the presentation layer. I just googled the topic a little bit; maybe this thesis can be useful for you: https://repositorio.ul.pt/bitstream/10451/35311/1/ulfc121800_tm_Telmo_Santos.pdf
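As a rough illustration of the "consume a spec, emit functions" idea, here is a minimal TypeScript sketch that walks a tiny, hypothetical OpenAPI-style document and emits naive fetch wrappers. A real generator (for example, OpenAPI Generator) handles much more: schemas, request bodies, authentication, content types, and the spec inconsistencies you mention.

```typescript
// Minimal and illustrative only: a tiny OpenAPI-like shape and a naive generator.
// Real specs are far richer (schemas, parameters, auth, responses, ...).
type HttpMethod = "get" | "post" | "put" | "delete";

interface SpecOperation {
  operationId: string;
}

interface MiniSpec {
  servers: { url: string }[];
  paths: Record<string, Partial<Record<HttpMethod, SpecOperation>>>;
}

const spec: MiniSpec = {
  servers: [{ url: "https://api.example.com" }],
  paths: {
    "/items": { get: { operationId: "listItems" }, post: { operationId: "createItem" } },
    "/items/{id}": { get: { operationId: "getItem" } },
  },
};

// Emit one async wrapper per operation, as source text.
function generateClient(s: MiniSpec): string {
  const base = s.servers[0]?.url ?? "";
  const fns: string[] = [];
  for (const [path, ops] of Object.entries(s.paths)) {
    for (const [method, op] of Object.entries(ops) as [HttpMethod, SpecOperation][]) {
      // Turn "/items/{id}" into a template literal with one parameter per placeholder.
      const params = [...path.matchAll(/\{(\w+)\}/g)].map(m => m[1]);
      const url = path.replace(/\{(\w+)\}/g, "${$1}");
      fns.push(
        `export async function ${op.operationId}(${params.map(p => `${p}: string`).join(", ")}) {\n` +
          `  const res = await fetch(\`${base}${url}\`, { method: "${method.toUpperCase()}" });\n` +
          `  return res.json();\n}`
      );
    }
  }
  return fns.join("\n\n");
}

console.log(generateClient(spec));
```

Even this toy version shows why the spec is the natural thing to consume: the paths, methods, and parameters are the only reliable machine-readable description of the API surface, and everything the spec omits is something your generator simply cannot know.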

In the Diode library for scalajs, what is the distinction between an Action, AsyncAction, and PotAction, and which is appropriate for authentication?

In the Scala and Scala.js library Diode, I have used but not entirely understood the PotAction class, and only recently discovered the AsyncAction class, both of which seem to be favored in situations involving, well, asynchronous requests. While I understand that, I don't entirely understand the design decisions and the naming choices, which seem to suggest a narrower use case.
Specifically, both AsyncAction and PotAction require an initialModel and a next, as though both are modeling an asynchronous request for some kind of refreshable, updateable content rather than a command in the sense of CQRS. I have a somewhat-related question open regarding synchronous actions on form inputs by the way.
I have a few specific use cases in mind. I'd like to know a sketch (not asking for implementation, just the concept) of how you use something like PotAction in conjunction with any of:
Username/password authentication in a conventional flow
OAuth-style authentication with a third party involved and a redirect
Token or cookie authentication behind the scenes
Server-side validation of form inputs
Submission of a command for a remote shell
All of these seem to be a bit different in nature to what I've seen using PotAction but I really want to use it because it has already been helpful when I am, say, rendering something based on the current state of the Pot.
Historically speaking, PotAction came first and then at a later time AsyncAction was generalized out of it (to support PotMap and PotVector), which may explain their relationship a bit. Both provide abstraction and state handling for processing async actions that retrieve remote data. So they were created for a very specific (and common) use case.
I wouldn't, however, use them for authentication, as that is typically something you do before your application is loaded or before any data is requested from the server.
Form validation is usually a synchronous thing; you don't do it in the background while the user is doing something else, so again AsyncAction/PotAction are not a very good match and don't provide much added value.
Finally for the remote command use case PotAction might be a good fit, assuming you want to show the results of the command to the user when they are ready. Perhaps PotStream would be even better, depending on whether the command is producing a steady stream of data or just a single message.
In most cases you should use the various Pot structures for what they were meant for, that is, fetching and updating remote data, and maybe apply some of the ideas or internal models (such as the retry mechanism) to other request types.
All the Pot stuff was separated from Diode core into its own module to emphasize that they are just convenient helpers for working with Diode. Developers should feel free to create their own helpers (and contribute back to Diode!) for new use cases.
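For readers who haven't used Diode, the underlying idea of Pot is just a container that tracks the lifecycle of a remotely fetched value. A rough sketch of that concept, written in TypeScript rather than Scala and deliberately not Diode's actual API:

```typescript
// The Pot concept in miniature: a value plus its fetch lifecycle.
// This illustrates the idea only; it is not Diode's real Pot type.
type Pot<A> =
  | { state: "empty" }
  | { state: "pending"; startedAt: number }
  | { state: "ready"; value: A }
  | { state: "failed"; error: Error };

function render(userName: Pot<string>): string {
  switch (userName.state) {
    case "empty":   return "Nothing loaded yet";
    case "pending": return "Loading...";
    case "ready":   return `Hello, ${userName.value}`;
    case "failed":  return `Could not load user: ${userName.error.message}`;
  }
}

console.log(render({ state: "pending", startedAt: Date.now() }));
console.log(render({ state: "ready", value: "Ada" }));
```

AsyncAction/PotAction then, roughly speaking, automate the transitions between these states as a request runs, which is why they fit remote-data fetching much better than one-shot commands such as logging in.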

REST API design for cloning a resource [duplicate]

This question already has answers here: What is the restful way to represent a resource clone operation in the URL? (5 answers, closed 7 years ago)
I am writing a YAML document using Swagger to design a RESTful API method for cloning a resource. I have a few options and don't know which would be best. Please can someone advise?
Options:
Relinquishing the responsibility of cloning the resource to the consumer (the consumer reads the source object, assigns its values to the properties of a new object, and then creates that new object). The process would consist of two requests to the API: a GET against the resource for the source object and then a POST to that resource to create the new one. This feels like the consumer has too much responsibility.
Using the WebDAV HTTP extensions, which provide a COPY method (see here). It would appear that this is exactly what I would like for cloning. However, I would like to stick to the standard methods as much as possible.
POSTing to /{resource}?resourceIdToClone={id}, where resourceIdToClone is an optional parameter. This would conflict with an API path that I already have for creating the resource, where I add a schema to the POST body. It would mean using a POST to /{resource}/ for both creating and cloning, and that would violate the single responsibility principle (SRP).
Adding a new resource called 'CloneableResource' and performing a POST to /CloneableResource/{resource_type}/{resource_source_id}. For the example of cloning a sheep, you'd make a POST to /CloneableResource/Sheep/10. This way, it would be possible to stick to using the standard HTTP methods, there'd be no conflict with any other resource paths (or SRP violation). However, I would be adding a new and potentially superfluous type to the domain. I also can't think of a scenario when a consumer would want to perform anything other than a POST to this resource, so it seems like a code smell to me.
A GET against /resource/{id}?method=clone. One of the advantages here is that no additional resource is required and the behaviour is selected by a simple optional query-string parameter. I'm aware that one of the risks is that it is dangerous to provide POST- or DELETE-like capabilities through a GET method: if the URL appears in a web page it may be crawled by a search engine.
Thanks for any help!
Most of these options are perfectly good choices. A lot of it is just a style choice in the end. Here are my comments on each of your options.
Relinquishing the responsibility of cloning the resource object to the consumer
In general I don't really have a problem with this solution. This option is very straightforward for a user to understand and implement. It might be better than coming up with some proprietary cloning functionality that your users have to learn how to use.
Using the WebDAV HTTP extensions which provides a COPY method
I like to stick to the standard methods as well. I would not use COPY, but I wouldn't be appalled if you did.
POSTing to /{resource}?resourceIdToClone={id}
This is a perfectly good solution. From a REST standpoint, you don't really have a conflict with the rest of your API. The URI with a query parameter identifies a different resource than the URI without the query parameter. Query parameters are a URI feature for identifying resources that cannot be referenced hierarchically. However, it might be difficult to separate these in your code because of the way most REST frameworks work. You could do something similar to this but with a hierarchical URI such as /{resource}/clone: POST to this URI and pass the resource_source_id in the body.
Adding a new resource called 'CloneableResource' and performing a POST to /CloneableResource/{resource_type}/{resource_source_id}
There is nothing wrong with this approach from a REST standpoint, but I think adding a new type is both unnecessary and clutters the API. However, I disagree with your intuition that there could be a problem with having a resource that has only a POST operation. It happens. In the real world, not everything fits nicely into GET, PUT, or DELETE.
A GET against /resource/{id}?method=clone
This is the only option of the five that I cannot condone. It seems from your description that you already understand why this is a bad idea, so I'm not sure why you are considering it. However, all you have to do to make this a good solution is to change GET to POST; it then becomes very similar to option 3. The URI could also be hierarchical instead of using a query parameter: POST /resource/{id}/clone (see the sketch below) would work just as well.
I hope this was helpful. Good luck with your decision.
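To make the hierarchical variant concrete, here is a minimal sketch in TypeScript with Express. The sheep example, the route, and the in-memory store are purely illustrative assumptions, not a prescribed implementation:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Purely illustrative in-memory "store"; a real service would hit a database.
const sheep = new Map<string, { id: string; name: string }>([
  ["10", { id: "10", name: "Dolly" }],
]);
let nextId = 11;

// POST /sheep/:id/clone -> 201 Created with the new resource and a Location header.
app.post("/sheep/:id/clone", (req, res) => {
  const source = sheep.get(req.params.id);
  if (!source) {
    return res.status(404).json({ error: "source sheep not found" });
  }
  const clone = { ...source, id: String(nextId++) };
  sheep.set(clone.id, clone);
  return res.status(201).location(`/sheep/${clone.id}`).json(clone);
});

app.listen(3000);
```

The clone endpoint stays separate from the plain "create" POST on /sheep, so the two responsibilities never share a route, and the response still follows the usual create semantics (201 plus Location).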
If you want to COPY a resource, then, yes, COPY is an obvious choice.
(and yes, it would be good to pull the definitions of COPY and MOVE out of RFC 4918 to untangle them from WebDAV).
Influenced by project requirements and the range of preferences among members of my team, option 1 will serve us best at this stage.
Conforming to the standard HTTP methods will simplify and clarify my API.
There will be a single, consistent approach to cloning a resource. This outweighs the issue I have with designating the cloning work to the consumer.

Spring Data Rest Without HATEOAS

I really like all the boilerplate code Spring Data REST writes for you, but I'd rather have just a 'regular' REST server without all the HATEOAS stuff. The main reason is that I use the Dojo Toolkit on the client side, and all of its widgets and stores expect the JSON returned to be just a straight array of items, without all the links and things like that. Does anyone know how to configure this with Java config so that I get all the MVC code written for me, but without all the HATEOAS stuff?
If, after reading Oliver's comment (which I agree with), you still want to remove HATEOAS from Spring Boot:
Add this above the declaration of the class containing your main method:
@SpringBootApplication(exclude = RepositoryRestMvcAutoConfiguration.class)
As pointed out by Zack in the comments, you also need to create a controller which exposes the required REST methods (findAll, save, findById, etc).
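If you decide to keep the HATEOAS output on the server instead, another option is to flatten the HAL payload on the client before handing it to the Dojo store. A minimal TypeScript sketch, assuming a collection exposed at /api/items whose embedded rel is items (both the path and the rel name are assumptions about your setup):

```typescript
// Shape of a HAL collection response as Spring Data REST typically renders it.
// The "items" rel is an assumption; it depends on your repository/entity name.
interface HalCollection<T> {
  _embedded?: { items?: T[] };
  _links: Record<string, { href: string }>;
}

interface Item {
  name: string;
  _links?: Record<string, { href: string }>;
}

async function fetchPlainItems(): Promise<Omit<Item, "_links">[]> {
  const res = await fetch("/api/items");
  const hal: HalCollection<Item> = await res.json();
  // Drop the hypermedia decoration and hand a flat array to the store.
  return (hal._embedded?.items ?? []).map(({ _links, ...rest }) => rest);
}

fetchPlainItems().then(items => console.log(items));
```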
So you want REST without the things that make up REST? :) I think trying to alter (read: dumb down) a RESTful server to satisfy a poorly designed client library is a bad start to begin with. But here's the rationale for why hypermedia elements are necessary for this kind of tooling (besides the probably familiar general rationale).
Exposing domain objects to the web has always been viewed critically by most of the REST community, mostly because the boundaries of a domain object are not necessarily the boundaries you want to give your resources. However, frameworks providing scaffolding functionality (Rails, Grails, etc.) have become hugely popular in the last couple of years. So Spring Data REST is trying to address that space while at the same time being a good citizen in terms of RESTfulness.
So if you start with a plain data model in the first place (objects without too many relationships) and only want to read them, there's in fact no need for something like Spring Data REST. The Spring controller you need to write is roughly 10 lines of code on top of a Spring Data repository. When things get more challenging, the story becomes more interesting:
How do you write a client without hard-coding URIs (if it does, it isn't particularly RESTful)?
How do you handle relationships between resources? How do you let clients create them, update them etc.?
How does the client discover which query resources are available? How does it find out about the parameters to pass etc.?
If your answer to these questions is "My client doesn't need that / is not capable of doing that", then Spring Data REST is probably the wrong library to begin with. What you're building is then basically JSON over HTTP, but nothing really RESTful. This is totally fine if it serves your purpose, but shoehorning a library with clear design constraints into something arbitrarily different (albeit apparently similar) that effectively wants to ignore exactly these design aspects is the wrong approach in the first place.

A few questions about RESTful APIs and why some of the best-practices are rarely implemented

In most tutorials, documentation, articles, etc. about REST I come across a few of the same points, yet I rarely ever see these 'what makes it RESTful' points implemented.
For example, I've read this many times:
Content type
Using HTTP headers: Accept: application/json, text/plain
Extension in the URL: "Not RESTful, URLs are not the place for Content-Type"
I have never come across an API where I have seen this implemented. Every API I have ever used has always required me to append XML or JSON to the end of the URL. Are they doing it wrong?
Versioning
Version media types: application/vnd.something.v1+json
Custom header: X-API-Version: 1
Version in URL: /v1/resource ("Not RESTful, by putting the version in the URL you create separate resources")
If you need to introduce non-backwards-compatible functionality, surely creating a separate resource is the correct thing to do?
Once again, all the APIs I've used put v1, v2, etc. in the URL (Google, Imgur, and so on).
By not implementing these points, would my API not be considered RESTful?
Clarification on these points would be much appreciated.
1) Using the Accept header or using format-specific URLs are both valid in a RESTful system. The article you are citing is wrong.
2) Saying v1/resource is not RESTful is also incorrect. You cannot look at a URI and draw a conclusion about its RESTfulness. Adding a v1 at the root of your URL is probably not a great thing to do if you are trying to incrementally evolve your system; in effect it declares a whole new URL space and obsoletes the old one, which is pretty drastic. RESTful systems try to enable incremental and evolutionary change to a system, so doing /resource/v2 is actually much more compatible with that goal.
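To illustrate the two styles from points 1 and 2, here is a small TypeScript sketch; the host, media type, and paths are placeholders rather than a recommendation of one style over the other:

```typescript
async function demo(): Promise<void> {
  // Style 1: one URI, with format and version negotiated via the Accept header.
  const viaHeaders = await fetch("https://api.example.com/resource/42", {
    headers: { Accept: "application/vnd.something.v1+json" },
  });

  // Style 2: format and version encoded in the URI itself.
  const viaUrl = await fetch("https://api.example.com/v1/resource/42.json");

  console.log(viaHeaders.status, viaUrl.status);
}

demo().catch(console.error);
```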
The unfortunate phenomenon at work here is that many developers who are learning about REST discover that the vast majority of systems out there that claim to be doing REST are not actually conforming to the constraints of REST. So they quickly develop a zeal for telling everyone what is and is not RESTful. Many of these people have not yet fully understood the constraints and end up making up new ones that don't exist. The "RESTful URL" fallacy is a classic; "POST must create a resource" is another common one.
My guidance to anyone learning REST is, if someone tells you that something is not RESTful, you ask them what constraint it is violating and what is the practical impact of ignoring that constraint. If they can't answer that, then politely ignore them.
The true definition of REST is, of course, in the doctoral dissertation written by Roy Fielding in 2000. Do all of the APIs out there that call themselves RESTful follow the guidelines specified by Fielding? The answer is no. The definition of REST has been watered down by some to mean anything that does not use SOAP. I would worry less about what is RESTful and more about what is good practice. It is good practice to specify the content type in the header of the request. It is also good practice to version your APIs. A good resource for information on API best practices is the team at Apigee, as they have a lot of experience in this area. Check out their webinar on RESTful API design, where they ask whether you are a pragmatist or a RESTafarian.