One of the criteria we are using when deciding between REST and SOAP (RPC) style services is not just programmer convenience, but which is easier to test using automated test tools. There seem to be plenty of good tools for both, but is one inherently easier?
RESTful APIs are usually simpler and built on top of well-developed technologies that are easy to test (specifically, HTTP), so I would say RESTful APIs may be simpler to test. You can even do most of your testing with just a web browser, which is about as simple as it gets.
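For what it's worth, here is a minimal sketch of that kind of test using nothing but the JDK's built-in HTTP client; the endpoint URL and the payload are made up for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestSmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; substitute your own service URL.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/users/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A plain HTTP status check is often all a smoke test needs.
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body:   " + response.body());
    }
}
```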
What is the difference between swagger-api and JAX-RS?
Is the swagger-api only for documentation (for example, @ApiOperation)?
As per the API docs, JAX-RS is the Java API for RESTful Web Services: it provides portable APIs for developing, exposing and accessing web applications designed and implemented in compliance with the principles of the REST architectural style.
Swagger, on the other hand, comes into the picture once you have implemented your RESTful web services using one of the JAX-RS implementations (Jersey, RESTEasy, Apache CXF, etc., as @Bijoy already mentioned). Swagger gives your APIs a presentable shape so that client code can be written easily, and at the same time it makes documentation a much less tedious task by keeping it integrated with the code. Needless to say, it also saves the extra time it takes to document everything after the coding is over. In that sense, it is a bit revolutionary.
JAX-RS is the REST specification, and it is implemented by frameworks such as Jersey and RESTEasy. Swagger is more about documentation: it gives you an easy interface for trying out your REST endpoints and makes it much easier to consume them from different platforms.
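To make the division of labour concrete, here is a minimal sketch of a JAX-RS resource with Swagger annotations layered on top (the resource, path and payload are invented): the JAX-RS annotations define and serve the endpoint, while the Swagger annotations only describe it for the generated documentation and UI.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;

// JAX-RS annotations (@Path, @GET, @Produces) define the REST endpoint itself;
// Swagger annotations (@Api, @ApiOperation) only describe it for the docs/UI.
@Api(value = "users")
@Path("/users")
public class UserResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Fetch a single user by id",
                  notes = "Documented by Swagger, served by the JAX-RS runtime (Jersey, RESTEasy, ...)")
    public String getUser(@PathParam("id") long id) {
        // Hypothetical payload; a real resource would return a domain object.
        return "{\"id\": " + id + ", \"name\": \"example\"}";
    }
}
```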
I am currently developing a very simple web service and thought I could write an API for it, so that when I decide to expand to new platforms I would only have to code the parser application. That said, the API isn't meant for other developers but for me, though I won't restrict access to it, so anyone can build on it.
Then I thought I could even run the website itself through this API, for various reasons such as lower bandwidth consumption (HTML generated in the browser) and client-side caching. Being AJAX-heavy seemed like an even bigger reason to do so.
The layout looks like this:
Server (database, programming logic)
|
API (handles user reads/writes)
|
Client application (the website, browser extensions, desktop app, mobile apps)
|
Client cache (further reduces server reads)
After that introduction, here are my questions:
Is this a good use of an API?
Is it a good idea to run the whole website through the API?
What choices for safe authentication do I have when using the API? (For my own reasons, I prefer not to use HTTPS.)
EDIT
Additional questions:
Are there any alternative approaches I haven't considered?
What potential issues might arise with this approach that I haven't accounted for?
First things first.
Asking if a design (or in fact anything) is "good" depends on how you define "goodness". Typical criteria are performance, maintainability, scalability, testability, reusability etc. It would help if you could add some of that context.
Having said that...
Is this a good use of an API?
It's usually a good idea to separate out your business logic from your presentation logic and your data persistence logic. Your design does that, and therefore I'd be happy to call it "good". You might look at a formal design pattern to do this - Model View Controller is probably the current default, esp. for web applications.
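As a rough sketch of that separation (all class names here are invented), the business logic sits behind one interface and both the HTML front end and the API become thin layers over it:

```java
// A rough sketch (all names invented): the business logic sits behind one
// interface, and both the HTML front end and the JSON API are thin layers over it.
public class LayeringSketch {

    // Business layer: the only place that knows the rules.
    interface ArticleService {
        Article find(long id);
    }

    record Article(long id, String title, String body) {}

    // Presentation layer for the website: turns domain objects into HTML.
    static class ArticlePageController {
        private final ArticleService service;
        ArticlePageController(ArticleService service) { this.service = service; }

        String renderPage(long id) {
            Article a = service.find(id);
            return "<h1>" + a.title() + "</h1><p>" + a.body() + "</p>";
        }
    }

    // API layer: the same logic, but serialised as JSON for other clients.
    static class ArticleApiEndpoint {
        private final ArticleService service;
        ArticleApiEndpoint(ArticleService service) { this.service = service; }

        String getJson(long id) {
            Article a = service.find(id);
            return "{\"id\":" + a.id() + ",\"title\":\"" + a.title() + "\"}";
        }
    }

    public static void main(String[] args) {
        ArticleService service = id -> new Article(id, "Hello", "Body text");
        System.out.println(new ArticlePageController(service).renderPage(1));
        System.out.println(new ArticleApiEndpoint(service).getJson(1));
    }
}
```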
Is it a good idea to run the whole website through the API?
Well, that depends on the application. It's entirely possible to write an application purely in JavaScript/Ajax, but there are browser compatibility issues (especially with older browsers), and you have to build support for things users commonly expect from web applications, like deep links and search-engine friendliness. If you have a well-factored API, you can do some of the page generation on the server if that makes things easier.
What choices for safe authentication do I have when using the API? (For my own reasons, I prefer not to use HTTPS.)
Tricky one - with this kind of app, you have to distinguish between authenticating the user, and authenticating the application. For the former, OpenID or OAuth are probably the dominant solutions; for the latter, have a look at how Google requires you to sign up to use their Maps API.
In most web applications, HTTPS is not used for authentication (proving the current user is who they say they are), but for encryption. The two are related, but by no means equivalent...
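If you really want to avoid HTTPS, one common pattern for authenticating API calls (this is a generic sketch, not any particular provider's scheme) is to give each registered client application a shared secret and have it sign every request, so the server can check who sent the request and that it wasn't tampered with. Bear in mind the payload itself still travels in the clear.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal HMAC-SHA256 request-signing sketch. Without HTTPS the request body
// is still readable on the wire; the signature only proves who sent the
// request and that it was not modified in transit.
public class RequestSigner {

    public static String sign(String secret, String method, String path,
                              String timestamp) throws Exception {
        String canonical = method + "\n" + path + "\n" + timestamp;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical key/secret issued when the client application registers.
        String signature = sign("s3cret-shared-key", "GET", "/api/posts/42",
                                String.valueOf(System.currentTimeMillis()));
        System.out.println("X-Signature: " + signature);
    }
}
```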
Are there any alternative approaches I haven't considered?
Maybe this fits more under question 5, but in my experience API design is a rather esoteric skill: it's hard for an API designer to predict exactly what the client of the API is going to need. I would seriously consider writing the application without an API for your first client platform and factoring the API out later; that way, you build only what you need in the first release.
What potential issues might arise with this approach that I haven't accounted for?
Versioning is a big deal with APIs - once you've created an interface, you can almost never change it, especially with multiple clients that you don't control. I'd build versioning in as a first class concept - with RESTful APIs, you can do this as part of the URL.
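For example, with a JAX-RS-style resource the version can simply become part of the path (a hypothetical sketch):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// The version lives in the URL, so /api/v1/posts can stay frozen while
// /api/v2/posts evolves for newer clients.
@Path("/api/v1/posts")
public class PostsV1Resource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String list() {
        return "[]"; // placeholder payload
    }
}
```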
Is this a good use of an API?
Depends on what you will do with that application.
Is it a good idea to run the whole website through the API?
No: that way your site would be accessible only through your application, which sacrifices compatibility with ordinary browsers.
What choices for safe authentication do I have when using the API? (For my own reasons, I prefer not to use HTTPS.)
You can use OmniAuth.
Are there any alternative approaches I haven't considered?
Build both front ends: one in your application and another for ordinary browsers.
What potential issues might arise with this approach that I haven't accounted for?
I don't know your idea in detail, but I can't see any major dangers.
Most applications these days provide an API, be it Twitter, Gmail, Facebook or millions of others.
I understand that API design cannot be explained in a single answer, but I would like some suggestions on how to get started with it. Maybe a tutorial or book that builds an application and has some chapters on how to go about providing APIs for it. I'm mostly a Java developer (learning Groovy) but am open to other languages as well, if it is easier to get started with API design in another language.
As a side note, I used to be curious about the difference between an API and a web service, but as I now understand it, a web service is just one form of an API.
I don't have any great resources; however, I want to stress that API really does mean Application Programming Interface, and it is simply a mechanism for exposing your application to be consumed by others, be it from a script, a web service (SOAP or REST), Win32-style API calls, and so on.
About 10 years ago, when we talked about APIs it seemed like everyone assumed all APIs were like Win32, and that was it. One of the more interesting ones I've worked on was an API for a PICK-based management system. In that case we wrote an XML processor in PICK and screen-scraped XML back and forth over a telnet session.
The first thing you need to decide is how you want to expose your data. Are you going to expose it over the web, or is your application a desktop application? How I would structure an API for cross-machine communication tends to be different than for an API running in a single process, or even on a single machine.
I would also start by writing a test client. You have to understand how your API will be used first and try to make it as simple as possible. If you dive right in with the implementation, you might lose perspective and make assumptions that a client developer would not.
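One way to force that perspective is to write the client code you wish existed first, and only then grow stubs until it compiles; everything below is invented purely for illustration.

```java
// Write the client code you would like to exist, then grow stub types until
// it compiles -- the stubs become the first draft of the API.
public class ClientFirstSketch {

    // --- The usage you want to support ---
    public static void main(String[] args) {
        NoteService service = NoteService.connect("localhost", 9000);
        Note note = service.load("note-42");
        note.setBody("Hello, API design");
        service.save(note);
        System.out.println("Saved " + note.id());
    }

    // --- First-draft stubs, shaped entirely by the usage above ---
    interface NoteService {
        static NoteService connect(String host, int port) {
            return new InMemoryNoteService();
        }
        Note load(String id);
        void save(Note note);
    }

    static class InMemoryNoteService implements NoteService {
        public Note load(String id) { return new Note(id); }
        public void save(Note note) { /* no-op stand-in for persistence */ }
    }

    static class Note {
        private final String id;
        private String body = "";
        Note(String id) { this.id = id; }
        String id() { return id; }
        void setBody(String body) { this.body = body; }
    }
}
```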
I'm just getting into WCF, but from what I've read so far (like the sample scenarios I found on MSDN and some other sites), I can do all of that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled explain it from a programming point of view. I'm still trying, without much success, to find answers on when it makes business (and of course programming) sense to use the WCF layer as opposed to the traditional application-to-web-services model.
Is there anyone here with experience of both who can advise on how to choose between plain web services and WCF? What are the things that absolutely cannot be done using plain old web services called by applications, where the WCF layer saves the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
Sure - most of this stuff can be done in ASMX, or .NET remoting - but try to convert an ASMX web service to be callable in your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details" - what endpoint to call, what protocol to use to call it, how to handle security and so on - out into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ from using HTTP and allowing anonymous callers to using TCP/IP with Windows credentials required - your service code won't change a bit; it's only configuration of the plumbing.
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks many dependencies, such as hosting (it is not IIS-dependent), transport, security and addressing, into plug-in components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies; however, you can do more with WCF. If you don't need the features now then of course you can continue with legacy technologies, but if you prefer you can opt for a better architecture now with an eye on the future, at the cost of having to switch technologies now.
Take this example: if you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ endpoint? With WCF it's as simple as adding new config settings.
I assume that you are not asking "why not just stick with SOAP/HTTP?". WCF allows you to choose a number of different transports rather than just plain HTTP, but as you observe, the WS-* technologies already allow you to do all that. So I think you're asking why use a powerful but complex framework when the raw technologies are not impossibly complex.
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boilerplate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's, so any new team member has to learn your local framework: a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however. With great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF--adhering closely to a "best practice"--requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.
We are developing a middleware SDK, both in C++ and Java to be used as a library/DLL by, for example, game developers, animation software developers, Avatar developers to enhance their products.
Having created a typical API using specific calls for specific functions I am considering simplifying the API by using a REST type API (GET, PUT, POST, DELETE) or CRUD type (CREATE, READ, UPDATE, DELETE) interface.
This would work in a similar way to a client-server type REST API where there are only 4 possible API calls but these can take flexible parameters.
This seems to have the benefit of making the API stable in that new calls are not being added and old calls are not being removed. So a consumer of this API need not worry about having to recompile and change their code to suit any updates to our middleware.
The overhead is that there is an extra layer of redirection in the middleware controller to route API calls and the developer needs to know what parameters are available for each REST call (supplied of course).
I have not so far seen this approach used outside of web-style client-server applications, so my question is this: is it a feasible idea?
I am thinking in terms of its efficiency, as well as whether, for example, a game developer would find it easy to use.
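For illustration, a rough sketch of such a uniform interface in library form (all names are hypothetical): the four operations take a resource name plus a flexible parameter map, which keeps the compiled interface stable at the cost of compile-time checking of the parameters.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a CRUD-style library interface: four fixed operations whose
// behaviour is driven entirely by the resource name and a parameter map.
// The trade-off is that the compiler can no longer verify the parameters.
interface UniformApi {
    Map<String, Object> create(String resource, Map<String, Object> params);
    Map<String, Object> read(String resource, Map<String, Object> params);
    Map<String, Object> update(String resource, Map<String, Object> params);
    void delete(String resource, Map<String, Object> params);
}

public class UniformApiDemo {
    public static void main(String[] args) {
        UniformApi api = new InMemoryUniformApi();

        Map<String, Object> params = new HashMap<>();
        params.put("name", "hero");
        params.put("height", 1.8);

        // New capabilities mean new resource names and parameters,
        // not new methods -- so the binary interface stays stable.
        Map<String, Object> created = api.create("avatar", params);
        System.out.println(api.read("avatar", created));
    }
}

class InMemoryUniformApi implements UniformApi {
    private final Map<Object, Map<String, Object>> store = new HashMap<>();
    private long nextId = 1;

    public Map<String, Object> create(String resource, Map<String, Object> params) {
        Map<String, Object> record = new HashMap<>(params);
        record.put("id", nextId++);
        store.put(record.get("id"), record);
        return record;
    }
    public Map<String, Object> read(String resource, Map<String, Object> params) {
        return store.get(params.get("id"));
    }
    public Map<String, Object> update(String resource, Map<String, Object> params) {
        store.get(params.get("id")).putAll(params);
        return store.get(params.get("id"));
    }
    public void delete(String resource, Map<String, Object> params) {
        store.remove(params.get("id"));
    }
}
```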
Yes, this is a feasible idea. But I'm not sure the benefits would justify the costs. REST is best applied to a networked application scenario, oriented around requests and responses. While there are definite learning curve advantages to a uniform interface, those advantages can be present in almost any well-designed API which provides reasonably abstract procedures.
You also expressed concern for whether a game developer would find a RESTful API easy to use. I'd be dubious. I've implemented many RESTful web services, and helped many developers get up to speed both building them and using them, and the conceptual leap required to grasp REST can be substantial for someone who has been steeped in procedural APIs for years. I'd think that game developers in particular would be very strongly connected to procedural APIs, to the point that attempting to adopt a different paradigm, whatever its benefits, might prove extremely difficult.
Remember that REST is not specific to HTTP, and does not rely on just the 4 HTTP verbs. The verbs you have and can use depend on what protocol you're using.