Spring Data Rest Without HATEOAS

I really like all the boilerplate code Spring Data Rest writes for you, but I'd rather have just a 'regular?' REST server without all the HATEOAS stuff. The main reason is that I use Dojo Toolkit on the client side, and all of its widgets and stores are set up such that the JSON returned is just a straight array of items, without all the links and things like that. Does anyone know how to configure this with Java config so that I get all the MVC code written for me, but without all the HATEOAS stuff?

If, after reading Oliver's comment (which I agree with), you still want to remove HATEOAS from Spring Boot, add this above the declaration of the class containing your main method:
@SpringBootApplication(exclude = RepositoryRestMvcAutoConfiguration.class)
As pointed out by Zack in the comments, you also need to create a controller which exposes the required REST methods (findAll, save, findById, etc.).
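For a rough idea of what that controller looks like, here's a minimal sketch assuming a Spring Data JPA repository; the Person entity, PersonRepository, and the /people path are hypothetical names used only for illustration:

import java.util.List;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.server.ResponseStatusException;

// Assumed to exist elsewhere:
// interface PersonRepository extends JpaRepository<Person, Long> {}
// @Entity class Person { ... }

@RestController
@RequestMapping("/people")
public class PersonController {

    private final PersonRepository repository;

    public PersonController(PersonRepository repository) {
        this.repository = repository;
    }

    @GetMapping // returns a plain JSON array of items, no HATEOAS links
    public List<Person> findAll() {
        return repository.findAll();
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable Long id) {
        return repository.findById(id)
                .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
    }

    @PostMapping
    public Person save(@RequestBody Person person) {
        return repository.save(person);
    }
}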

So you want REST without the things that make up REST? :) I think trying to alter (read: dumb down) a RESTful server to satisfy a poorly designed client library is a bad start. But here's the rationale for why hypermedia elements are necessary for this kind of tooling (besides the probably familiar general rationale).
Exposing domain objects to the web has always been seen critically by most of the REST community, mostly because the boundaries of a domain object are not necessarily the boundaries you want to give your resources. However, frameworks providing scaffolding functionality (Rails, Grails, etc.) have become hugely popular in the last couple of years. So Spring Data REST is trying to address that space while at the same time being a good citizen in terms of RESTfulness.
So if you start with a plain data model in the first place (objects without too many relationships) and only want to read them, there's in fact no need for something like Spring Data REST. The Spring controller you need to write is roughly 10 lines of code on top of a Spring Data repository. When things get more challenging, the story becomes more interesting:
How do you write a client without hard-coding URIs (if it does, it isn't particularly RESTful)?
How do you handle relationships between resources? How do you let clients create them, update them etc.?
How does the client discover which query resources are available? How does it find out about the parameters to pass etc.?
If your answer to these questions is "My client doesn't need that / is not capable of doing that", then Spring Data REST is probably the wrong library to begin with. What you're basically building is JSON over HTTP, but nothing really RESTful. This is totally fine if it serves your purpose, but shoehorning a library with clear design constraints into something arbitrarily different (albeit apparently similar) that effectively wants to ignore exactly these design aspects is the wrong approach in the first place.

What are the benefits of routers when the URI can be parsed dynamically?

I'm trying to make an architectural decision and I'm worried that I'm missing something big about URL routing / mapping when it comes to designing a basic REST API / framework.
Creating routing classes and such, as is typically seen in REST API frameworks, requiring one to manually map a URL to a class and a class method (action), kind of seems like a failure to encapsulate the problem, when this can all be determined by parsing the URL dynamically and having an automatic router or front controller.
GET https://api.example.com/companies/
Collection resource that gets a list of all companies.
GET https://api.example.com/companies/1
Fetches a single company by ID.
Which all seems to follow the template: https://api.example.com/<controller>/<parameter>/
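As a sketch of what such dynamic parsing could look like (a toy example, not production code; the controller/action naming convention here is made up):

import java.util.ArrayList;
import java.util.List;

public class DynamicRouter {

    // No route table: the URL itself determines the controller and parameters.
    static void dispatch(String path) {
        List<String> segments = new ArrayList<>();
        for (String s : path.split("/")) {
            if (!s.isEmpty()) segments.add(s);
        }
        String controller = segments.get(0);     // e.g. "companies"
        if (segments.size() == 1) {
            System.out.println("call " + controller + ".index()");
        } else {
            String param = segments.get(1);      // e.g. "1"
            System.out.println("call " + controller + ".show(" + param + ")");
        }
    }

    public static void main(String[] args) {
        dispatch("/companies/");  // -> call companies.index()
        dispatch("/companies/1"); // -> call companies.show(1)
    }
}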
Benefit 1: URL Decoupling and Abstraction
I assume one of the on-paper benefits of having a typical routing class is that you can decouple or abstract a URL from a resource / physical class. So, if you felt like it, you could have an arbitrary URL like GET https://api.example.com/poo/ fetch all the companies, instead of GET https://api.example.com/companies/.
But in almost every example and use-case I've seen, the desire is to have a URL that matches the desired controller, action and parameters, 1 : 1.
Another possible benefit, is that collection resources within a resource, or nested resources, might be easier to achieve with URL mapping and typical routers. For example:
GET https://api.example.com/companies/1/users/
OR
GET https://api.example.com/companies/1/users/1/
It could be quite challenging to come up with a paradigm that can dynamically parse this to know which controller to call in order to get the data, what parameters to use, and where to use them. But I think I have come up with a standard way that could make this work dynamically.
Whereas manually mapping this would be easy.
I could just re-route GET https://api.example.com/companies/1/users/ to the users controller instead of the companies controller, bypassing it, and just set the parameter "1" to be the company id for the WHERE clause.
Benefit 1.1: No Ties to Physical Paths
An addendum to benefit 1, would be that a developer could completely change the URL scheme and folder structure, without affecting the API, because everything is mapped abstractly. If I choose to move files, folders, classes, or rename them, it should just be a matter of changing the mapping / routing.
But I still don't really get this benefit either, because even if you had to move your entire API to another location, a trivial change in .htaccess would fix this immediately.
So this:
GET https://api.example.com/companies/
TO
GET https://api.example.com/v1/companies/
Would not impact code in the slightest, even with a dynamic router.
Benefit 2: Control Over What Functionality is Exposed
Another benefit I imagine a typical router class gives you, over a dynamic router that just interprets and parses the URL, is control over exactly what functionality you want to expose to the API consumer. If you just do everything dynamically, you're kind of dropping your pants, automatically giving your consumer access to the entire system.
I see this as a possible benefit for the dynamic router, as you wouldn't then have to manually define and map all your routes to resources. It's all there, automatically. To solve the exposure problem, I would probably do the opposite and define a blacklist of the functionality the API consumer shouldn't be allowed to use. It might be more time-effective to define a blacklist than to define each and every usable resource with mapping. Then again, it's riskier too, I suppose. You could even do a whitelist... which is similar to a typical router, but you wouldn't need any extended logic at all. It's just a list of URLs that the system would check before passing the URL to the dynamic router. Or it could just be a private property of the dynamic router class.
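A whitelist of that kind could be as simple as this sketch (hypothetical names; the check runs before the URL ever reaches the dynamic router):

import java.util.Set;

public class RouteGate {

    // Hypothetical whitelist: only these controller/action pairs are exposed.
    private static final Set<String> WHITELIST = Set.of(
            "companies.index", "companies.show", "users.index");

    static boolean isExposed(String controller, String action) {
        return WHITELIST.contains(controller + "." + action);
    }

    public static void main(String[] args) {
        System.out.println(isExposed("companies", "index")); // true  -> pass to dynamic router
        System.out.println(isExposed("secrets", "index"));   // false -> 403 / 404
    }
}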
Benefit 3: When HTTP Methods Don't Quite Fit the Bill
One case where I see typical routers shining is where you need to execute an action that conflicts with an existing resource. Let me explain.
Say you want to authenticate a user, by running the login function within your user class. But now, you can't execute POST https://api.example.com/users/ with credentials, because that is reserved for adding a new user. Instead, you need to somehow run the login method in your user class. You don't want to use POST https://api.example.com/users/login/ either, because then you're using verbs other than the HTTP methods. However, with a typical router, you can just map this directly, as said before. Easy.
url => "https://api.example.com/tenant/"
Controller => "users"
Action => "login"
Params => "api_key, api_secret"
But, once again, I see a plausible alternative. I could just create another controller, called login or tenant, that instantiates my user controller and runs the login function. So a consumer could just POST https://api.example.com/tenant/ with credentials, and blam. Authentication.
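A bare-bones sketch of that alternative (all names hypothetical):

// A separate controller whose POST action simply delegates to the
// login logic on the users controller.
public class TenantController {

    private final UserController users = new UserController();

    // Handles POST https://api.example.com/tenant/
    String post(String apiKey, String apiSecret) {
        return users.login(apiKey, apiSecret);
    }
}

class UserController {
    String login(String apiKey, String apiSecret) {
        // ... check the credentials against the data store ...
        return "auth-token"; // hypothetical token on success
    }
}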
Although, to get this alternative to work, I would have to physically create another controller, when with a URL mapper I wouldn't need to. But this separation of concerns, functionality and resources, is quite nice too. Then again, maybe that's the main trade-off: would you rather just define a URL route, or have to create new classes for each nuance you encounter?
What am I not seeing, or understanding? Am I missing a core concept here and just ignorant? Are there more benefits to having typical URL mapping and routing classes and functionality, that I'm just not aware of, or have I pretty much got this?
A lot of the benefits to routing you describe are correct, and some of what you say about physical mappings is also true. I'd like to throw in some experience / practical information that colored my opinion on routers over the last few years.
First of all, dynamic parsing of URLs works well (most of the time) when you architect your application according to the MVC design pattern. For example, I once built a very large application using Kohana, a hierarchical MVC framework, which allows you to extend controllers and models for the sake of making nested URLs. In general, this makes a lot of sense. But there were a lot of times where it simply didn't make much sense to build a whole class and model system around the need for one-off URLs to make the application more functional. There are also times where MVC is not the design pattern you're using, and thus it is not the defining feature of your API, and routing is beautiful in this scenario. One can easily see this issue at work by playing with frameworks that have a lot of structural freedom, such as the Slim framework or Express.js.
More often than people think, a fully functional API will have an element of RPC-ness to it in addition to the primarily RESTful endpoints available for consumption, and those additional functionalities make more sense to a consumer when they're decorating existing resource method mappings. This tends to happen after you've built out most of your application and covered most of your bases, and then you realize that there are a couple of little features you'd like to add in relation to a resource that doesn't cleanly fit into the CREATE / READ / UPDATE / DELETE categories. You'll know it when you see it.
It really cannot be overstated: it is much safer not to go hacking on the actual structure of the controllers and models, adding, removing, and changing things for the sole purpose of adding an endpoint that doesn't inherently follow the same rules as the other controller methods (API endpoints).
Another very important thing is that your API endpoints are actually more malleable than we often realize. What I mean is, you can be OK with the structure of your endpoints on Monday, and then on Friday you get a task sent down from above saying you need to change all of these API calls to some other structure, and that's fine. But if you have a large application, this requires a very significant amount of file renaming, class renaming, and re-linking, and all sorts of very breakable code when the framework you're using has strict rules for class naming, file naming, physical file path structure, and so on. Just imagine changing a class name to make it work with the new structure: now you've got to go hunt down every line of code that instantiated the old class and change it. Furthermore, in that scenario, it could be said that the problem is that your code is tightly coupled with the URL structure of your API, and that is not very maintainable should your URL needs change.
Either way, you really ought to decide what's best for the particular application. But the more you use routers, the more you'll see why they're so useful.

Where does the business logic go in WCF Data service using entity framework?

I was looking at how you can create a WCF Data Service around an Entity Framework context and consume it as an EF context as well.
Creating an OData API for StackOverflow including XML and JSON in 30 minutes
I really just started looking at this, but I was wondering where the business logic would go. As a service, I would expect that you couldn't just freely add/delete etc. without some validation.
If I wrote an MVC app to consume this service, how would I best implement business logic? Not simple property-level validation that you could do with attributes, but more complex stuff that needs to check the data store first, etc.
It sounds like you need a custom data service provider (msdn link). They're quite powerful and give you full control over all of your reading/writing logic.
For example I wrote one that enforces our licensing logic in the update provider.
You can put some in the Data Service class, but you are limited as to what you can and can't do there. And then of course you can put some in the client above the service, but that's not ideal either.
I've only spent a few weeks with WCF Data Services but you highlight (one of) the big problems with it - lack of flexibility. It's fantastic for rapid development and banging out LOB applications, but anything with a deliberate design is very difficult to implement. I needed to include objects in my entity model simply to allow them to be exposed through the service, and I had huge headaches just trying to extend those classes with a simple property.
I'd only recommend using WCF Data Services for trivially simple applications that needed extremely fast development - a one or two day development cycle, for example. Anything else is worth doing thoroughly with regular WCF services, writing your own data layer and so on.
Depending on your specific needs, it sounds like Web API might be a good fit. Web API may never get the full range of OData support that WCF Data Services has, but it does make certain things easier (like adding business logic). I'm quite confident that Web API's initial support for OData will cover a significant number of use cases, and that support will grow over time.
While a custom data provider will most certainly do about anything you want, and may well be a great solution if you have a complex architecture, I wasn't really thrilled when I attempted to save back through the client and found out I had to implement the IUpdatable interface as part of my context. (I was attempting to build a repository pattern out of my context and DataService.)
I'm sure it's very useful for many people, but I really only needed the functionality the Entity Provider already contained, and didn't have the time in my project schedule to figure out the IUpdatable piece of the custom provider. So my team, specifically Geoff, stuck with the Entity Provider and used Change and Query Interceptors to route the DataService requests through our business logic classes on the server. It provides a central point of control. We used these to provide security checks, run calculations and other operations on insert/update, etc. It turned out great. You can also use service methods as another way to provide specific business logic functions to your clients.

What logic should go in the domain class and what should go in a service in Grails?

I am working on my first Grails application, which involves porting over an old Struts web application. There is a lot of existing functionality, and as I move things over I'm having a really hard time deciding what should go in a service and what should be included directly on the model.
Coming from a background of mostly Ruby on Rails development I feel strongly inclined to put almost everything in the domain class that it's related to. But, with an application as large as the one I'm porting, some classes will end up being thousands upon thousands of lines long.
How do you decide what should go in the domain versus what should go in a service? Are there any established best practices? I've done some looking around, but most people just seem to acknowledge the issue.
In general, the main guideline I follow is:
If a chunk of logic relates to more than one domain class, put it in a service, otherwise it goes in the domain class.
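For example, a rough sketch of that split (hypothetical names, written as plain Java for illustration; in Grails these would be Groovy domain classes and a service):

import java.math.BigDecimal;

class Account {
    BigDecimal balance = BigDecimal.ZERO;

    // Logic confined to one domain class stays on the domain class.
    boolean canWithdraw(BigDecimal amount) {
        return balance.compareTo(amount) >= 0;
    }
}

class Ledger {
    void record(Account from, Account to, BigDecimal amount) {
        // ... persist a ledger entry ...
    }
}

// Logic spanning Account and Ledger goes into a service.
class TransferService {
    void transfer(Account from, Account to, BigDecimal amount, Ledger ledger) {
        if (!from.canWithdraw(amount)) {
            throw new IllegalStateException("insufficient funds");
        }
        from.balance = from.balance.subtract(amount);
        to.balance = to.balance.add(amount);
        ledger.record(from, to, amount);
    }
}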
Without more details about what sort of logic you are dealing with, it's hard to go much deeper than that, but here are a few more general thoughts:
For things that relate to view presentation, those go in tag libs (or perhaps services)
For things that deal with figuring out what to send to a view and what view to send it to, that goes in the controllers (or perhaps services)
For things that talk to external entities (ie. the file system, queues, etc.), those go in services
In general, I tend to err on the side of having too many services than having things too lumped together. In the end, it's all about what makes the most sense to you and how you think about the code and how you want to maintain it.
The one thing I would look out for, however, is that there's probably a high level of code duplication in what you are porting over. With Grails being built on Groovy and having access to more powerful programming methods like Closures, it's likely that you can clean up and simplify much of your code.
I personally think it is not only a decision between services and domain classes, but also plugins or injected code.
Think of the dynamic finders like Book.findAllByName(). It's great that Grails injects them into the domain classes. Otherwise they would have to be copied to each domain class, or they would be callable through a service (dynamicFinderService.findAllByName('Book') - argh).
So in my projects, I have quite a few things which I will move to a plugin and inject into the domain classes...
From a Java EE point of view: No logic at all within domain classes. Keep your domain classes as easy and short as possible. Your domain model is the core of your application and must be easy to get.
Grails is developed for dependency injection. All logic which resides in a domain class is hard to test and not easy to refactor.
You cannot exchange implementations like with services (DI).
You cannot test the code of your domain class easily.
You cannot scale up your system, since your implementations cannot be pooled like services.
My recommendation: Keep every logic in services. Services are easy to test, easy to reuse, easy to maintain, easy to re-configure.
Specific to Grails: if you change your domain class, the in-memory DB will be killed, so you lose your test data, which prevents rapid prototyping.

traversing object graph from n-tier client

I'm a student currently dabbling in a .NET n-tier app that uses NHibernate + WCF + WPF.
One of the things that is done quite terribly is object graph serialisation; in fact, it isn't done at all: currently associations are ignored and we are using DTOs everywhere.
As far as I can tell, one method to proceed is to predefine which objects and collections should be loaded and serialised to go across the wire, thus being able to present some associations to the client. However, this seems limited, inflexible and inconsistent (can you tell that I don't like this idea?).
One option that occurred to me was to simply replace the NHProxies that lazy load collections on the client tier with a "disconnectedProxy" that would retrieve the associated stuff over the wire. This would mean that we'd have to expand our web service signature a little and do some hackery on our generated proxies, but this seemed like a good T4/other code gen experiment.
As far as I can tell this seems to be a common stumbling block but after doing a lot of reading I haven't been able to figure out any good/generally accepted solutions. I'm looking for a bit of direction as much as any particular solution, but if there is an easy way to make the client "feel" connected please let me know.
You ask a very good question that unfortunately does not have a very clean answer. Even if you were able to get lazy loading to work over WCF (which we were able to do) you still would have issues using the proxy interceptor. Trust me on this one, you want POCO objects on the client tier!
What you really need to consider... what has been conceived as the industry-standard approach to this problem, from the research I have seen, is called persistence vs. usage, or persistence ignorance. In other words, your object model and mappings represent your persistence domain, but they do not match your ideal usage scenarios. You don't want to bring the whole database down to the client just to display a couple of properties, right?
It seems like such a simple problem, but the solution is either very simple or very complex. On one hand, you can design your entities around your usage scenarios, but then you end up with a proliferation of your object domain, making it difficult to maintain. On the other, you still want the rich object model relationships in order to write granular business logic.
To simplify this problem, let's examine the two main gaps we need to fill: between the database and the service layer, and between the service and the client. NHibernate fills the first one just fine by providing an ORM to load data into your objects. It does a decent job, but in order to achieve great performance it needs to be tweaked using custom loading strategies. I digress...
The second gap, between the server and the client, is where things get dicey. To simplify, imagine if you did not send any mapped entities over the wire to the client. Try creating a mechanism that exchanges business entities for DTO objects, and likewise DTO objects for business entities. That way your client deals only with DTOs (POCOs, of course), and your business logic can maintain its rich structure. This allows you to leverage not only NHibernate's lazy loading mechanism, but other benefits from the session such as the L1 cache.
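As a language-agnostic sketch of such a mechanism (shown here in Java; all names are made up), an "assembler" flattens a mapped entity into a DTO on the way out and copies editable fields back onto the managed entity on the way in:

// Mapped entity (server side) and a flat DTO for the wire; names hypothetical.
class Customer {
    Long id; String name; Address address;
}

class Address {
    String city;
}

class CustomerDto {
    Long id; String name; String city; // plain POCO, safe to serialize
}

public class CustomerAssembler {

    static CustomerDto toDto(Customer entity) {
        CustomerDto dto = new CustomerDto();
        dto.id = entity.id;
        dto.name = entity.name;
        dto.city = entity.address.city; // flatten only what the client needs
        return dto;
    }

    static void applyDto(CustomerDto dto, Customer entity) {
        entity.name = dto.name; // copy editable fields back inside an open session
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        c.name = "ACME";
        c.address = new Address();
        c.address.city = "Oslo";
        System.out.println(toDto(c).city); // Oslo
    }
}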
For brevity and intellectual property reasons I will not go into the design of said mechanism, but hopefully this is enough information to point you in the right direction. If you don't care about performance or latency at all... just turn lazy loading off altogether and work through the serialization issues.
It has been a while for me but the injection/disconnected proxies may not be as bad as it sounds. Since you are a student I am going to assume you have some time and want to muck around a bit.
If you want to inject your own custom serialization/deserialization logic you can use IDataContractSurrogate which can be applied using DataContractSerializerOperationBehavior. I have only done a few basic things with this but it may be worth looking into. By adding some fun logic (read: potentially hackish) at this layer you might be able to make it more connected.
Here is an MSDN post about someone who came to the same realization: the DynamicProxy used by NHibernate makes it impossible to directly serialize NHibernate objects that do lazy loading.
If you are really determined to transport the object graph across the network and preserve lazy loading functionality, take a look at some code I produced over here: http://slagd.com/?page_id=6 . Basically, it creates a fake session on the other side of the wire and allows the NHibernate proxies to retain their functionality. I'm not saying it's the right way to do things, but it might give you some ideas.

Consuming web services from Oracle PL/SQL

Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL_HTTP and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack experience to know how many scenarios I would have to deal with. The variations are in what namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string.
I know that 10g has UTL_DBWS, but there are not a huge number of use cases online. Is it stable and flexible enough for general use?
I have used UTL_HTTP, which is simple and works. If you face a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL_HTTP on the net (Google "consuming web services from pl/sql", leading you to e.g. http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php).
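PL/SQL specifics aside, what those wrapper packages do boils down to an HTTP POST of a hand-built envelope and reading the response back; here is that flow sketched in Java for illustration (the endpoint, operation, and envelope are all made up):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SoapCall {
    public static void main(String[] args) throws Exception {
        // Hand-built envelope, just like a UTL_HTTP-based package would assemble.
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body><getCompany xmlns=\"http://example.com/ws\">"
          + "<id>1</id></getCompany></soap:Body></soap:Envelope>";

        HttpURLConnection conn =
            (HttpURLConnection) new URL("https://example.com/service").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"getCompany\""); // some servers require this

        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw SOAP response to parse for values
            }
        }
    }
}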
The reason nobody is using UTL_DBWS is that it is not functional in a default installed database. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective - the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.
I had this challenge and found and installed the 'SOAP API' package that Sten suggests on Oracle-Base. It provides some good envelope-creation functionality on top of UTL_HTTP.
However, there were some limitations that pertain to your question. SOAP_API assumes all requests are simple XML, i.e. only a one-layer tag hierarchy.
I extended the SOAP_API package to allow the client code to arbitrarily insert an extra tag. So you can insert a sub-level, continue to build the request, and remember to insert a closing tag.
The namespace issue was a bear for the project: different levels of the XML had different namespaces.
A nice debugging tool that I used is TCP Trace from Pocket Soap (www.pocketsoap.com/tcptrace/). You set it up like a proxy and watch the HTTP request and response objects between client and server code.
Having said all that, we really like having a SOAP client in the database: we have full access to all data and existing PL/SQL code, can easily loop through cursors, and can call the external app via SOAP when needed. It was a lot quicker and easier than deploying a middle tier with lots of custom Java or .NET code. Good luck, and let me know if you'd like to see my enhanced SOAP_API code.
We have also used UTL_HTTP in a manner similar to what you have described. I don't have any direct experience with UTL_DBWS, so I hope you can follow up with any information/experience you can gather.
@kogus, no, it's quite a good design for many applications. PL/SQL is a full-fledged programming language that has been used for many big applications.
Check out this older post. I have to agree with that post's #1 answer; it's hard to imagine a scenario where this could be a good design.
Can't you write a service, or standalone application, which would talk to a table in your database? Then you could implement whatever you want as a trigger on that table.