Create an API blueprint from entity specification

I'm building an API and I have modeled the entities I need inside it. For example:
User
    Name
    Email
    City
Company
    Name
    Website
I'm using API Blueprint to specify the API itself, and I need to create endpoints for CRUD operations on pretty much every entity. The task seems very redundant to me - besides some tuning that is needed for some specific entities, most of the basic skeleton looks the same.
I wonder if there is any tool that lets me write down my entities, their fields and types, and generates this basic skeleton.
I was about to start creating one, and then I stopped to look around for an existing one, but I haven't found anything yet...

API Blueprint includes a way to write, use, reuse, compose, and inherit your data structures: it's MSON.
Basically, it's a way to describe your data structures within an API Blueprint. We also provide an HTML renderer for it, the Attributes Kit; try having a look at its Playground too.
You can find a useful tutorial on the official website, as well as more information.
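For example, here is a minimal, hypothetical sketch of how your two entities and one set of CRUD endpoints could look with MSON (names, values, and URLs are just placeholders):

# Data Structures

## User (object)
+ name: John (string, required)
+ email: john@example.com (string, required)
+ city: Oslo (string)

## Company (object)
+ name: Acme (string, required)
+ website: http://acme.example.com (string)

# Group Users

## Users Collection [/users]

### List All Users [GET]
+ Response 200 (application/json)
    + Attributes (array[User])

### Create a User [POST]
+ Request (application/json)
    + Attributes (User)
+ Response 201 (application/json)
    + Attributes (User)

Since User is defined once under Data Structures, every CRUD endpoint just references it via Attributes, which takes most of the redundancy out of the skeleton.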
Hopefully that's enough to get you started.
Cheers,
V.

Related

Difference between API-First and Design-API-First approach?

While looking into the approaches used for developing APIs, I came across several, such as Code-First, API-First, and Design-API-First.
I clearly understand the Code-First approach and how it differs from the other two, but I am not able to pin down the exact difference between the API-First and Design-First approaches.
Summary from the links:
API First:
1) APIs are considered first-class citizens by the organization.
2) You design each of your APIs around a contract written in an API description language like OpenAPI, for consistency, reusability, and broad interoperability.
Design-API-First:
1) Describing every API design in an iterative way that both humans and computers can understand, before you write any code.
2) API design-first is about the process of creating the API itself.
3) In the Design-First approach there will be a lot of collaboration on the design of the API.
My understanding so far:
I feel points 1 and 2 of Design-API-First say the same thing as API-First because, for example, an OpenAPI specification is understood by both humans and computers. Is there anything more to it?
So is the only difference the collaboration added here, by involving stakeholders, developers, customers, etc.?
And when we use Design-API-First, can we say we are also using API-First?
References:
You can probably get the exact context from the following links; please use them and see if you can get the right understanding of it and address this question.
https://blog.stoplight.io/api-first-vs.-api-design-first-a-comprehensive-guide
https://blog.axway.com/product-insights/amplify-platform/application-integration/api-first-design-api-first
https://www.ecosmob.com/design-first-or-api-first-where-does-future-lies/

Spring Data Rest Without HATEOAS

I really like all the boilerplate code Spring Data REST writes for you, but I'd rather have just a 'regular' REST server without all the HATEOAS stuff. The main reason is that I use the Dojo Toolkit on the client side, and all of its widgets and stores are set up such that the JSON returned is just a straight array of items, without all the links and things like that. Does anyone know how to configure this with Java config so that I get all the MVC code written for me, but without all the HATEOAS stuff?
If, after reading Oliver's comment (which I agree with), you still want to remove HATEOAS from Spring Boot:
Add this above the declaration of the class containing your main method:
@SpringBootApplication(exclude = RepositoryRestMvcAutoConfiguration.class)
As pointed out by Zack in the comments, you also need to create a controller which exposes the required REST methods (findAll, save, findById, etc).
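A minimal sketch of what such a controller could look like (assuming a Spring Data JPA repository; User and UserRepository are hypothetical names):

import java.util.List;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.server.ResponseStatusException;

// Plain controller returning straight JSON arrays, with no HATEOAS links.
@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository repository;

    public UserController(UserRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<User> findAll() {
        return repository.findAll();
    }

    @GetMapping("/{id}")
    public User findById(@PathVariable Long id) {
        return repository.findById(id)
                .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
    }

    @PostMapping
    public User save(@RequestBody User user) {
        return repository.save(user);
    }
}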
So you want REST without the things that make up REST? :) I think trying to alter (read: dumb down) a RESTful server to satisfy a poorly designed client library is a bad start to begin with. But here's the rationale for why hypermedia elements are necessary for this kind of tooling (besides the probably familiar general rationale).
Exposing domain objects to the web has always been seen critically by most of the REST community. Mostly for the reason that the boundaries of a domain object are not necessarily the boundaries you want to give your resources. However, frameworks providing scaffolding functionality (Rails, Grails etc.) have become hugely popular in the last couple of years. So Spring Data REST is trying to address that space but at the same time be a good citizen in terms of restfulness.
So if you start with a plain data model in the first place (objects without too many relationships) and only want to read them, there's in fact no need for something like Spring Data REST. The Spring controller you need to write is roughly 10 lines of code on top of a Spring Data repository. When things get more challenging, the story becomes more interesting:
How do you write a client without hard-coding URIs (if it does, it isn't particularly RESTful)?
How do you handle relationships between resources? How do you let clients create them, update them etc.?
How does the client discover which query resources are available? How does it find out about the parameters to pass etc.?
If your answer to these questions is: "My client doesn't need that / is not capable of doing that", then Spring Data REST is probably the wrong library to begin with. What you're basically building then is JSON over HTTP, but nothing really RESTful. This is totally fine if it serves your purpose, but shoehorning a library with clear design constraints into something arbitrarily different (albeit apparently similar) that effectively wants to ignore exactly these design aspects is the wrong approach in the first place.

What are the benefits of routers when the URI can be parsed dynamically?

I'm trying to make an architectural decision and I'm worried that I'm missing something big about URL routing / mapping when it comes to designing a basic REST API / framework.
Creating routing classes and such, as typically seen in REST API frameworks, which require one to manually map a URL to a class and a class method (action), kind of seems like a failure to encapsulate the problem, when this can all be determined by parsing the URL dynamically and having an automatic router or front controller.
GET https://api.example.com/companies/
Collection resource that gets a list of all companies.
GET https://api.example.com/companies/1
Fetches a single company by ID.
Which all seems to follow the template: https://api.example.com/<controller>/<parameter>/
Benefit 1: URL Decoupling and Abstraction
I assume one of the on-paper benefits of having a typical routing class is that you can decouple or abstract a URL from a resource / physical class. So you could have an arbitrary URL like GET https://api.example.com/poo/ instead of GET https://api.example.com/companies/ that fetches all the companies, if you felt like it.
But in almost every example and use case I've seen, the desire is to have a URL that matches the desired controller, action, and parameters 1:1.
Another possible benefit, is that collection resources within a resource, or nested resources, might be easier to achieve with URL mapping and typical routers. For example:
GET https://api.example.com/companies/1/users/
OR
GET https://api.example.com/companies/1/users/1/
It could be quite challenging to come up with a paradigm that can dynamically parse this to know which controller to call in order to get the data, what parameters to use, and where to use them. But I think I have come up with a standard way that could make this work dynamically.
Whereas manually mapping this would be easy.
I could just re-route GET https://api.example.com/companies/1/users/ to the users controller instead of the companies controller, bypassing it, and just set the parameter "1" to be the company id for the WHERE clause.
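To make that concrete, here is a toy sketch (in Java; all names are hypothetical, and the query-string building is illustrative only, not injection-safe) of a front controller that parses nested URLs dynamically and dispatches to the innermost controller, passing the outer ID along:

import java.util.Map;
import java.util.function.BiFunction;

// Toy dynamic router: /companies/1/users/ is dispatched to the "users"
// handler with the company id passed along for the WHERE clause.
public class DynamicRouter {

    // Handler registry: controller name -> handler(parentId, ownId).
    // Here the handlers just build illustrative query strings.
    private final Map<String, BiFunction<String, String, String>> handlers = Map.of(
            "companies", (parentId, id) ->
                    "companies" + (id != null ? " WHERE id = " + id : ""),
            "users", (companyId, id) ->
                    "users WHERE company_id = " + companyId
                            + (id != null ? " AND id = " + id : ""));

    public String route(String path) {
        String[] seg = path.replaceAll("^/|/$", "").split("/");
        if (seg.length <= 2) {
            // /companies/ or /companies/1
            return handlers.get(seg[0]).apply(null, seg.length == 2 ? seg[1] : null);
        }
        // /companies/1/users/ or /companies/1/users/2
        return handlers.get(seg[2]).apply(seg[1], seg.length == 4 ? seg[3] : null);
    }

    public static void main(String[] args) {
        DynamicRouter router = new DynamicRouter();
        System.out.println(router.route("/companies/1/users/")); // users WHERE company_id = 1
    }
}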
Benefit 1.1: No Ties to Physical Paths
An addendum to benefit 1 would be that a developer could completely change the URL scheme and folder structure without affecting the API, because everything is mapped abstractly. If I choose to move files, folders, or classes, or rename them, it should just be a matter of changing the mapping / routing.
But I still don't really get this benefit either, because even if you had to move your entire API to another location, a trivial change in .htaccess would fix this immediately.
So this:
GET https://api.example.com/companies/
TO
GET https://api.example.com/v1/companies/
Would not impact code, even in the slightest. Even with a dynamic router.
Benefit 2: Control Over What Functionality is Exposed
Another benefit I imagine a typical router class gives you, over a dynamic router that just interprets and parses the URL, is control over exactly what functionality you want to expose to the API consumer. If you just do everything dynamically, you're kind of dropping your pants, automatically giving your consumer access to the entire system.
I see this as a possible benefit for the dynamic router, as you wouldn't then have to manually define and map all your routes to resources; it's all there, automatically. To solve the exposure problem, I would probably do the opposite: define a blacklist of the functionality the API consumer shouldn't be allowed to use. It might be more time-effective to define a blacklist than to define each and every usable resource with a mapping. Then again, it's riskier too, I suppose. You could even do a whitelist... which is similar to a typical router, but you wouldn't need any extended logic at all. It's just a list of URLs that the system would check before passing the URL to the dynamic router. Or it could just be a private property of the dynamic router class.
Benefit 3: When HTTP Methods Don't Quite Fit the Bill
One case where I see a typical router shining is where you need to execute an action that conflicts with an existing resource. Let me explain.
Say you want to authenticate a user, by running the login function within your user class. But now, you can't execute POST https://api.example.com/users/ with credentials, because that is reserved for adding a new user. Instead, you need to somehow run the login method in your user class. You don't want to use POST https://api.example.com/users/login/ either, because then you're using verbs other than the HTTP methods. However, with a typical router, you can just map this directly, as said before. Easy.
url => "https://api.example.com/tenant/"
Controller => "users"
Action => "login"
Params => "api_key, api_secret"
But, once again, I see a plausible alternative. I could just create another controller, called login or tenant, that instantiates my user controller and runs the login function. So a consumer could just POST https://api.example.com/tenant/ with credentials, and blam: authentication.
Although, to get this alternative to work, I would have to physically create another controller, whereas with a URL mapper I wouldn't need to. But this separation of concerns, functionality, and resources is quite nice too. Maybe that's the main trade-off: would you rather just define a URL route, or have to create a new class for each nuance you encounter?
What am I not seeing, or understanding? Am I missing a core concept here and just ignorant? Are there more benefits to having typical URL mapping and routing classes and functionality, that I'm just not aware of, or have I pretty much got this?
A lot of the benefits to routing you describe are correct, and some of what you say about physical mappings is also true. I'd like to throw in some experience / practical information that colored my opinion on routers over the last few years.
First of all, dynamic parsing of URLs works well (most of the time) when you architect your application according to the MVC design pattern. For example, I once built a very large application using Kohana, a hierarchical MVC framework that allows you to extend controllers and models for the sake of making nested URLs. In general, this makes a lot of sense. But there were a lot of times where it simply didn't make much sense to build a whole class and model system around the need for one-off URLs just to make the application more functional. There are also times when MVC is not the design pattern you're using, and thus is not the defining feature of your API; routing is beautiful in that scenario. One can easily see this issue at work by playing with frameworks that have a lot of structural freedom, such as the Slim framework or Express.js.
More often than people think, a fully functional API will have an element of RPC-ness to it, in addition to the primarily RESTful endpoints available for consumption. Those additional functionalities often make more sense to a consumer when they're decorating existing resource method mappings. This tends to happen after you've built out most of your application and covered most of your bases, and then you realize there are a couple of little features you'd like to add in relation to a resource that doesn't cleanly fit into the CREATE / READ / UPDATE / DELETE categories. You'll know it when you see it.
It really cannot be overstated: it is much safer not to go hacking on the actual structure of the controllers and models, adding, removing, and changing things for the sole purpose of adding an endpoint that doesn't inherently follow the same rules as the other controller methods (API endpoints).
Another very important thing is that your API endpoints are actually more malleable than we often realize. What I mean is: you can be OK with the structure of your endpoints on Monday, and then on Friday you get a task sent down from above saying you need to change all of these API calls to some other structure, and that's fine. But if you have a large application, this requires a very, very significant amount of file renaming, class renaming, and re-linking, and all sorts of very breakable code, when the framework you're using has strict rules for class naming, file naming, physical file path structure, and so on. Just imagine changing a class name to make it work with the new structure: now you've got to go hunt down every line of code that instantiated the old class and change it. Furthermore, in that scenario, it could be said that the problem is that your code is tightly coupled with the URL structure of your API, and that is not very maintainable should your URL needs change.
Either way, you really ought to decide what's best for the particular application. But the more you use routers, the more you'll see why they're so useful.

Help with design problem (extending a generic interface)

I am part of a student project and we are to develop a product for a company using Java EE. As "lead architect" on the project, I am responsible for composing a good design that should be flexible enough for further extension.
Background info: We are to develop a website with a drag-and-drop GUI, with possibilities to connect data sources with data manipulations to perform on that specific data. The GUI should be generic and possible to integrate with upcoming products. This means that we cannot code to an implementation in the presentation layer. Instead we will use an interface to define what kinds of data manipulations are possible for all kinds of products. However, each product might also sport product-specific data manipulations (thus extending the interface with more methods).
The problem I have with the scenario above is that I don't see how we could pass these "product-specific data manipulations" on to the GUI and say that, in addition to the generic interface, we also possess these data manipulation actions...
Now I had a discussion with some of the more experienced programmers from the company, and they told me that there is a common solution to this problem, known as the "Observer pattern". They drew something like [1] on the whiteboard and explained that it would be possible to "register" with a third party (getApplicationContext) that in turn could convey our product-specific interface. This is a common solution for getting rid of those nasty circular dependencies, they explained.
I have now had a look at the Observer pattern and how it works, and I still don't really get how I am supposed to solve the design problem. Could someone possibly try to explain how it would turn out in my specific scenario? I have no real problem understanding how it works with "subjects" and "observers".
Here is a UML diagram of the design where we are using a reference to the specific product. This is what is undesirable and something we would like to get around.
(Maybe I got this all wrong...)
I am sorry, but I can't change the picture to the correct one as I am a new user... Here is a link to an updated UML diagram:
It seems what you are looking for is the Model View Controller design pattern. The Observer pattern is just a part of this design pattern. There is a short description of doing this with Java Servlets and JavaServer Pages from Java EE in the Wikipedia article.
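To sketch how the decoupling could look in your scenario - this is a minimal illustration, closer to a simple registry than the textbook Observer pattern, and all names are hypothetical:

import java.util.ArrayList;
import java.util.List;

// The generic interface the GUI codes against.
interface DataManipulation {
    String getName();
    Object apply(Object data);
}

// The "third party" that both the products and the GUI talk to, so the
// presentation layer never references a concrete product class.
class ManipulationRegistry {
    private static final List<DataManipulation> manipulations = new ArrayList<>();

    static void register(DataManipulation m) {      // called by each product at startup
        manipulations.add(m);
    }

    static List<DataManipulation> available() {     // called by the GUI to build its menus
        return List.copyOf(manipulations);
    }
}

// A product-specific manipulation, unknown to the GUI at compile time.
class ProductXAnonymize implements DataManipulation {
    public String getName() { return "Anonymize (Product X)"; }
    public Object apply(Object data) { return data; } // product-specific logic would go here
}

In a full Observer setup, the GUI would additionally register itself as an observer of the registry, so it gets notified when a product adds new manipulations at runtime.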

How to Design a generic business entity and still be OO?

I am working on a packaged product that is supposed to cater to multiple clients with varying requirements (to a certain degree), and as such it should be built in a manner flexible enough to be customizable by each specific client. The kind of customization we are talking about here is that different clients may have differing attributes for some of the key business objects. They could also have differing business logic tied in with their additional attributes.
As a very simplistic example: consider "Automobile" to be a business entity in the system, with 4 key attributes, i.e. VehicleNumber, YearOfManufacture, Price and Colour.
It is possible that one of the clients using the system adds 2 more attributes to Automobile, namely ChassisNumber and EngineCapacity. This client needs some business logic associated with these fields, to validate that the same ChassisNumber doesn't already exist in the system when a new Automobile gets added.
Another client just needs one additional attribute called SaleDate. SaleDate has its own business logic check, which validates that the vehicle doesn't appear in some police records as a stolen vehicle when the sale date is entered.
Most of my experience has been in making enterprise apps for a single client, and I am really struggling to see how I could handle a business entity whose attributes are dynamic and which also has a capacity for dynamic business logic, in an object-oriented paradigm.
Key Issues
Are there any general OO principles/patterns that would help me in tackling this kind of design?
I am sure people who have worked on generic / packaged products would have faced similar scenarios in most of them. Any advice / pointers / general guidance is also appreciated.
My technology is .NET 3.5/ C# and the project has a layered architecture with a business layer that consists of business entities that encompass their business logic
This is one of our biggest challenges, as we have multiple clients that all use the same code base, but have widely varying needs. Let me share our evolution story with you:
Our company started out with a single client, and as we began to get other clients, you'd start seeing things like this in the code:
if (clientName == "ABC") {
    // do it the way ABC client likes
} else {
    // do it the way most clients like.
}
Eventually we got wise to the fact that this makes really ugly and unmanageable code. If another client wanted theirs to behave like ABC's in one place and CBA's in another place, we were stuck. So instead, we turned to a .properties file with a bunch of configuration points.
if ((bool) configProps.get("LastNameFirst")) {
    // output the last name first
} else {
    // output the first name first
}
This was an improvement, but still very clunky. "Magic strings" abounded. There was no real organization or documentation around the various properties. Many of the properties depended on other properties and wouldn't do anything (or would even break something!) if not used in the right combinations. Much (possibly even most) of our time in some iterations was spent fixing bugs that arose because we had "fixed" something for one client that broke another client's configuration. When we got a new client, we would just start with the properties file of another client that had the configuration "most like" the one this client wanted, and then try to tweak things until they looked right.
We tried using various techniques to get these configuration points to be less clunky, but only made moderate progress:
if (userDisplayConfigBean.showLastNameFirst()) {
    // output the last name first
} else {
    // output the first name first
}
There were a few projects to get these configurations under control. One involved writing an XML-based view engine so that we could better customize the displays for each client.
<client name="ABC">
    <field name="last_name" />
    <field name="first_name" />
</client>
Another project involved writing a configuration management system to consolidate our configuration code, enforce that each configuration point was well documented, allow super users to change the configuration values at run-time, and allow the code to validate each change to avoid getting an invalid combination of configuration values.
These various changes definitely made life a lot easier with each new client, but most of them failed to address the root of our problems. The change that really benefited us most was when we stopped looking at our product as a series of fixes to make something work for one more client, and we started looking at our product as a "product." When a client asked for a new feature, we started to carefully consider questions like:
How many other clients would be able to use this feature, either now or in the future?
Can it be implemented in a way that doesn't make our code less manageable?
Could we implement a different feature than the one they are asking for, which would still meet their needs while being more suited to reuse by other clients?
When implementing a feature, we would take the long view. Rather than creating a new database field that would only be used by one client, we might create a whole new table which could allow any client to define any number of custom fields. It would take more work up-front, but we could allow each client to customize their own product with a great degree of flexibility, without requiring a programmer to change any code.
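For instance, a hypothetical sketch of such a custom-fields table (the schema and names are illustrative, not our actual design):

-- Clients define their own fields without code changes.
CREATE TABLE custom_field (
    id          INT PRIMARY KEY,
    client_id   INT NOT NULL,
    entity_name VARCHAR(50) NOT NULL,  -- e.g. 'Automobile'
    field_name  VARCHAR(50) NOT NULL,  -- e.g. 'ChassisNumber'
    field_type  VARCHAR(20) NOT NULL   -- e.g. 'string', 'date'
);

-- One row per custom value on a concrete entity instance.
CREATE TABLE custom_field_value (
    custom_field_id INT NOT NULL REFERENCES custom_field(id),
    entity_id       INT NOT NULL,      -- the entity row the value belongs to
    field_value     VARCHAR(255)
);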
That said, sometimes there are certain customizations that you can't really accomplish without investing an enormous effort in complex Rules engines and so forth. When you just need to make it work one way for one client and another way for another client, I've found that your best bet is to program to interfaces and leverage dependency injection. If you follow "SOLID" principles to make sure your code is written modularly with good "separation of concerns," etc., it isn't nearly as painful to change the implementation of a particular part of your code for a particular client:
public class FirstLastNameGenerator : INameDisplayGenerator
{
    private readonly IPersonRepository _personRepository;

    public FirstLastNameGenerator(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GenerateDisplayNameForPerson(int personId)
    {
        Person person = _personRepository.GetById(personId);
        return person.FirstName + " " + person.LastName;
    }
}

public class AbcModule : NinjectModule
{
    public override void Load()
    {
        Rebind<INameDisplayGenerator>().To<FirstLastNameGenerator>();
    }
}
This approach is enhanced by the other techniques I mentioned earlier. For example, I didn't write an AbcNameGenerator because maybe other clients will want similar behavior in their programs. But using this approach you can fairly easily define modules that override default settings for specific clients, in a way that is very flexible and extensible.
Because systems like this are inherently fragile, it is also important to focus heavily on automated testing: Unit tests for individual classes, integration tests to make sure (for example) that your injection bindings are all working correctly, and system tests to make sure everything works together without regressing.
PS: I use "we" throughout this story, even though I wasn't actually working at the company for much of its history.
PPS: Pardon the mixture of C# and Java.
That's a Dynamic Object Model, or Adaptive Object Model, you're building. And of course, when customers start adding behaviour and data, they are programming, so you need to have version control, tests, releases, namespace/context and rights management for that.
A way of approaching this is to use a meta-layer, or reflection, or both. In addition, you will need to provide a customisation application that allows your users to modify your business logic layer. Such a meta-layer does not really fit in your layered architecture - it is more like a layer orthogonal to your existing architecture, though the running application will probably need to refer to it, at least on initialisation. This type of facility is probably one of the fastest ways of screwing up the production application known to man, so you must:
Ensure that the access to this editor is limited to people with a high level of rights on the system (eg administrator).
Provide a sandbox area for the customer modifications to be tested before any changes they are testing are put on the production system.
An "OOPS" facility whereby they can revert their production system to either your provided initial default, or to the last revision before the change.
Your meta-layer must be very tightly specified so that the range of activities is closely defined - George Orwell's "What is not specifically allowed, is forbidden."
Your meta-layer will have objects in it such as Business Object, Method, Property and events such as Add Business Object, Call Method etc.
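As a rough illustration only (hypothetical names, a drastic simplification of a real meta-layer), the core idea can be as small as an entity holding a property map plus a per-type validation hook:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Tiny meta-layer sketch: properties live in a map rather than in
// compiled fields, and a per-type validation hook runs on every change.
class BusinessObject {
    private static final Map<String, Consumer<BusinessObject>> validators = new HashMap<>();

    private final String type;
    private final Map<String, Object> properties = new HashMap<>();

    BusinessObject(String type) { this.type = type; }

    // The customisation layer registers rules, e.g. a duplicate-chassis check.
    static void defineValidator(String type, Consumer<BusinessObject> validator) {
        validators.put(type, validator);
    }

    void set(String property, Object value) {
        properties.put(property, value);
        Consumer<BusinessObject> v = validators.get(type);
        if (v != null) v.accept(this);   // fire the business rule hook
    }

    Object get(String property) { return properties.get(property); }
}

A client's Automobile rules would then be registered via defineValidator("Automobile", ...) rather than compiled into the core.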
There is a wealth of information about meta-programming available on the web, but I would start with Pattern Languages of Program Design Vol 2 or any of the WWW resources related to, or emanating from Kent or Coplien.
We develop an SDK that does something like this. We chose COM for our core because we were far more comfortable with it than with low-level .NET, but no doubt you could do it all natively in .NET.
The basic architecture is something like this: Types are described in a COM type library. All types derive from a root type called Object. A COM DLL implements this root Object type and provides generic access to derived types' properties via IDispatch. This DLL is wrapped in a .NET PIA assembly because we anticipate that most developers will prefer to work in .NET. The Object type has a factory method to create objects of any type in the model.
Our product is at version 1 and we haven't implemented methods yet - in this version business logic must be coded into the client application. But our general vision is that methods will be written by the developer in his language of choice, compiled to .NET assemblies or COM DLLs (and maybe Java too) and exposed via IDispatch. Then the same IDispatch implementation in our root Object type can call them.
If you anticipate that most of the custom business logic will be validation (such as checking for duplicate chassis numbers) then you could implement some general events on your root Object type (assuming you did it something like the way we do.) Our Object type fires an event whenever a property is updated, and I suppose this could be augmented by a validation method that gets called automatically if one is defined.
It takes a lot of work to create a generic system like this, but the payoff is that application development on top of the SDK is very quick.
You say that your customers should be able to add custom properties and implement business logic themselves "without programming". If your system also implements data storage based on the types (ours does) then the customer could add properties without programming, by editing the model (we provide a GUI model editor.) You could even provide a generic user application that dynamically presents the appropriate data-entry controls depending on the types, so your customers could capture custom data without additional programming. (We provide a generic client application but it's more a developer tool than a viable end-user application.) I don't see how you could allow your customers to implement custom logic without programming... unless you want to provide some kind of drag-n-drop GUI workflow builder... surely a huge task.
We don't envisage business users doing any of this stuff. In our development model all customisation is done by a developer, but not necessarily an expensive one - part of our vision is to allow less experienced developers to produce robust business applications.
Design a core model that acts as its own independent project
Here's a list of some possible basic requirements...
The core design would contain:
classes that work in (and can possibly be extended by) all of the subprojects.
more complex tools like database interactions (unless those are project specific)
a general configuration structure that should be considered standard across all projects
Then, all of the subsequent projects that are customized per client are considered extensions of this core project.
What you're describing is the basic purpose of any Framework. Namely, create a core set of functionality that can be set apart from the whole so you don't have to duplicate that development effort in every project you create. Ie, drop in a framework and half your work is done already.
You might say, "what about the SCM (Software Configuration Management)?"
How do you track revision history of all of the subprojects without including the core into the subproject repository?
Fortunately, this is an old problem. Many software projects, especially those in the linux/open-source world, make extensive use of external libraries and plugins.
In fact, git has a command that's specifically used to import one project repository into another as a sub-repository (preserving all of the sub-repository's revision history, etc.). You can't modify the contents of the sub-repository from the parent project, because the parent won't track its history at all.
The command I'm talking about is called 'git submodule'.
You may ask, "what if I develop a really cool feature in one client's project that I'd like to use in all of my client's projects?".
Just add that feature to the core and update the submodule reference in all the other projects (e.g. with 'git submodule update --remote'). The way git submodule works, it points to a specific commit within the sub-repository's history tree. So, when that tree is changed upstream, you need to pull those changes back downstream into the projects where they're used.
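The day-to-day commands might look roughly like this (repository URL and paths are hypothetical):

# link the shared core into a client project as a submodule
git submodule add https://example.com/repos/core.git core
git commit -m "Add core as submodule"

# in a fresh clone of the client project, fetch the submodule contents
git submodule update --init

# later, after the core gains a new feature upstream, pull it downstream
git submodule update --remote core
git commit -m "Bump core to latest"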
The structure to implement such a thing would work like this. Let's say that your software is written specifically to manage a car dealership (inventory, sales, employees, customers, orders, etc.). You create a core module that covers all of these features, because they are expected to be used in the software for all of your clients.
But you have recently gained a new client who wants to be more tech-savvy by adding online sales to their dealership. Of course, their website is designed by a separate team of web developers/designers and a webmaster, but they want a web API (i.e., a service layer) to tap into the current infrastructure for their website.
What you'd do is create a project for the client - we'll call it WebDealersRUs - and link the core submodule into the repository.
The hidden benefit of this is that once you start to look at a codebase as pluggable parts, you can start to design them from the start as modular pieces capable of being dropped into a project with very little effort.
Consider the example above. Let's say that your client base is starting to see the merits of adding a web front-end to increase sales. Just pull the web API out of WebDealersRUs into its own repository and link it back in as a submodule. Then propagate it to all of your clients that want it.
What you get is a major payoff with minimal effort.
Of course there will always be parts of every project that are client-specific (branding, etc.). That's why every client should have a separate repository containing their unique version of the software. But that doesn't mean that you can't pull parts out and generalize them to be reused in subsequent projects.
While I approach this issue from the macro level, it can be applied to smaller/more specific parts of the codebase. The key here is that code you wish to re-use needs to be genericized.
OOP comes into play here as follows: where the functionality is implemented in the core but extended in a client's code, you'll use a base class and inherit from it; where the functionality is expected to return a similar type of result but the implementations may be wildly different across classes (i.e., there's no direct inheritance hierarchy), it's best to use an interface to enforce that relationship.
I know your question is general and not tied to a technology, but since you mention you actually work with .NET, I suggest you look at a new and very important piece of technology that is part of .NET 4: the 'dynamic' type.
There is also a good article on CodeProject here: DynamicObjects – Duck-Typing in .NET.
It's probably worth a look, because if I had to implement the dynamic system you describe, I would certainly try to implement my entities based on the DynamicObject class and add custom properties and methods using the TryGetxxx methods. It also depends on whether you are focused on compile time or runtime. Here is an interesting link on SO on this subject: Dynamically adding members to a dynamic object.
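A minimal sketch of that idea in C# (hypothetical names; per-client validation and configuration are left out):

using System.Collections.Generic;
using System.Dynamic;

// Business entity whose custom, per-client properties are resolved at runtime.
public class DynamicEntity : DynamicObject
{
    private readonly Dictionary<string, object> _customProperties =
        new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _customProperties.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        // Client-specific validation (e.g. a duplicate-chassis check) could hook in here.
        _customProperties[binder.Name] = value;
        return true;
    }
}

// Usage: dynamic auto = new DynamicEntity(); auto.ChassisNumber = "X123";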
I feel there are two approaches:
1) If different clients fall into the same domain (such as Manufacturing/Finance), then it's better to design objects in such a way that a BaseObject has the attributes which are very common, with the ones that could vary between clients stored as key-value pairs. On top of that, try to implement a rules engine like IBM ILOG (http://www-01.ibm.com/software/integration/business-rule-management/rulesnet-family/about/).
2) Predictive Model Markup Language(http://en.wikipedia.org/wiki/PMML)