What do you call the process that makes use of an API?

At my work the term API is thrown around loosely.
It's often used to describe automated processes composed of batch files, scripts, SQL stored procedures, SQL jobs, Windows tasks, etc.
It confuses my boss and management when I try to talk about an actual API, i.e., the interface itself: a vendor's protocol for which endpoints to use, how to pass keys, call limits, access-token use, the expected JSON structure, where to pass particular parameters, how errors should be interpreted, and so on. I tried to explain that this is the more literal definition, and to refer to the piece I'd develop as the process that interacts with the API. I feel like I'm only confusing them more, though.
Is there a term for the process one develops to interact with an API to automate things? If there's no specific term, how do you refer to it?

At my current position, we tend to call this process “integration”. We are integrating the external or internal API with another backend process or with a front end client application.


What is the difference between performing a procedure on a resource and performing a state transformation?

I'm new to web development and I'm attempting to understand REST. The tutorial I'm watching mentions the difference between "procedures" and "state transformation", stating that REST is based on the notion of "state transformation", but it does not delineate the difference between the two.
This has left me wondering: what is the difference between the two? Why can't an operation that transforms the state of a resource also be considered a procedure? After all, 'procedure' sounds like a generic enough term to encompass an operation that transforms the state of a resource.
So, what is the difference between performing a procedure on a resource and performing a state transformation? Or is it merely a matter of semantics?
I have also tried searching for the answer but can't seem to find anything that will shed light on this.
TL;DR
RPC focuses on sending a payload containing method names and arguments in a predefined format. Clients couple tightly to servers through a shared interface (skeleton classes, WSDL, or other interface definition languages (IDLs)).
REST focuses on decoupling clients from servers and on introducing indirections, like support for multiple different media types to marshal resource state in, and the whole interaction concept summarized by HATEOAS, where hypertext controls are used to drive the application state forward through a domain application protocol / state machine on the server side. Responses usually contain semi-structured data that follows the corresponding media-type definition (e.g., the HTML spec); such data usually doesn't suit simple CRUD applications well. If you will, the state of a resource is transformed into a representation format adhering to the rules of the media-type definition and transferred to the remote side.
In network programming, remote procedure call (RPC)-style invocations, as often used in RMI, CORBA, SOAP, or similar frameworks, usually send a method name that should be invoked at the server along with the parameters to feed that method. The return value is then marshalled into a corresponding response and sent back to the caller. What a client can invoke is usually exposed externally via skeleton classes, WSDL, or other forms of contract. So far, so simple. This is how most networking code works. The drawback, however, is that the client is tightly coupled to the exposed interface (skeleton classes, WSDL, external documentation), and many problems in internet computing arise from changes over time that are not adequately representable in those interfaces.
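To make this concrete, an RPC payload typically names the method and carries its arguments in a fixed envelope. A JSON-RPC 2.0 request, for example, looks like this (the method name and argument are made up for illustration):

{"jsonrpc": "2.0", "method": "getUserById", "params": [42], "id": 1}

The server dispatches to getUserById, marshals the return value into a response envelope with the same id, and sends it back; the client must know the method name and parameter shape in advance, which is exactly the coupling described above.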
If you take a closer look at how the Web has worked for decades, though, change is an inherent part of it. Your browser will just show the most recent state of a resource (Web page) it has. It might have gotten it either from its cache or from a server it asked. If the version available in its cache is older than a predefined threshold, it will ignore the cached value and request a fresh version. If an update happened since the last version, your browser is automatically served the new version. Fielding, who was working on the HTTP 1.0 and 1.1 specifications back then, analyzed how interaction on the Web takes place and generalized his findings into the REST architectural style. So, if you will, REST is just Web surfing for applications.
Unfortunately, a majority of enthusiasts and professionals have not yet understood what REST really is, and there is a lot of false information available regarding REST. Even here at Stack Overflow, most people don't seem to care; posts explaining the true nature of REST are downvoted and wrong information upvoted.
So, what does REST do differently than typical RPC-like method invocations?
First, REST relies on a certain set of uniform interfaces that are the same for every participant in the architecture: HTTP as the transfer protocol and a naming scheme for resources (URIs), so that everyone acts on these fixed principles. This helps reduce the interoperability issues that are all too common in traditional network programming.
Next, it relies on a basic principle: servers teach clients what they need to know. But how does a server know what a client needs to know? Well, as Jim Webber pointed out, the designer of the application develops a state machine (or domain application protocol) that a client will follow. Think of the checkout system of your favorite online shop. At one point it presents you the items in your trolley and offers you the choice to progress to the next "page", where you enter the shipping address; on further progressing through the state machine you are asked for your payment options, and so on, until at some point you have finished the checkout and are served a "Thank you" page that summarizes your order. Under the hood you just progressed through their protocol for placing orders and used hypermedia controls to move your client forward through their state machine. You were served some Web forms and links that you used to fulfill your task. In essence, this is what Hypertext As The Engine Of Application State (HATEOAS for short) is all about.
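As an illustrative sketch of such hypermedia controls (the link-relation names, URIs, and fields here are hypothetical), a trolley resource in a HAL-style JSON representation might look like this:

{
  "items": [ { "name": "Some product", "quantity": 1 } ],
  "_links": {
    "self": { "href": "/checkout/trolley" },
    "next": { "href": "/checkout/shipping-address" }
  }
}

A client advances through the checkout protocol by following the "next" link it is given, rather than by knowing any URI in advance.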
On the Web, HTML forms are used to teach a client which properties a resource supports, which ones are editable, and so on. Besides that, a form also teaches the client the actual URI to send input data to, the HTTP operation to utilize, and, mostly implicitly, the media type to marshal the request into. For instance, a regular HTML form will use application/x-www-form-urlencoded as its default media type to send the data to the server. So a full HTTP request for an input of a first and last name may look like this:
POST /path/to/resource HTTP/1.1
Host: acme.org
Connection: close
Accept: */*
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
firstName=Roman&lastName=Vottner
The same data could be sent using a different representation format, if it were supported by the media type the form was issued for. Unfortunately, HTML itself does not support many (only application/x-www-form-urlencoded, multipart/form-data, and text/plain).
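For illustration, had the form's media type advertised support for JSON (hypothetically), the very same input could be sent as application/json instead:

POST /path/to/resource HTTP/1.1
Host: acme.org
Content-Type: application/json
Content-Length: 45

{"firstName": "Roman", "lastName": "Vottner"}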
Links provided by a server should usually be annotated (or accompanied) by so-called link relation names that put the current resource in relation with the link target. If you will, they are the predicate in a triple of subject (current page), predicate (link relation name), and object (link target resource). Such names, of course, should be standardized or at least follow the Web Linking (RFC 8288) extension mechanism. URIs themselves are opaque, meaning they don't carry meaning on their own and should therefore not be parsed or analyzed at all. A common mistake often seen in so-called "REST APIs" is that they have typed resources, i.e., a user resource or a car resource that is marshalled on the client side into a programming-language-specific object (e.g., a Java object of class User or the like), which is pretty common in traditional RPC-style programming. In a REST architecture, however, the representation format is usually semi-structured data, i.e., a mix of syntax-defining control elements and actual data. As such, a direct mapping from DB entry, to model object, to resource itself, as done by so many CRUD applications, is not possible.
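On the wire, such a relation can be expressed with an HTTP Link header; here "payment" is a registered link relation name, while the URI itself remains opaque to the client:

Link: <https://acme.org/orders/123/payment>; rel="payment"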
Why is this all done in the first place?
In traditional network programming, a client is usually only able to work with that one particular server, and if something on that server changes, the client may break. There is an apparent tight coupling between the two. The REST architecture introduces a couple of indirections, i.e., the use of link relations instead of attempts to analyze "meaningful" URIs, as well as the use of a multitude of possible media types instead of reliance on one fixed message format, which help decouple clients from servers. That is, instead of coupling to the server with regard to the messages exchanged, both client and server couple to media types. Through content negotiation a client simply tells the server about its capabilities, and the server should generate a response the client can process. Instead of focusing on one message format, REST has the freedom of almost infinitely many, as long as both client and server support them. The more media types a peer supports, the more likely it will be able to interact with other peers in that network.
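Content negotiation itself is plain HTTP. A hypothetical exchange could look like this, with the client listing the representations it can process and the server choosing one of them:

GET /orders/123 HTTP/1.1
Host: acme.org
Accept: application/hal+json, application/xml;q=0.8

HTTP/1.1 200 OK
Content-Type: application/hal+json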
All the points mentioned above lead to a strict decoupling of clients and servers, which allows the latter to evolve freely without fear that the changes introduced will break clients, as neither the transport protocol nor the naming scheme has changed and the changes are still within the scope of the media-type definition. So, well-behaved peers in that network will be able to pick up changes on the fly automatically. This is especially handy if you develop an application that should withstand the sands of time and still serve clients years from now.
If you don't need such properties, there is nothing wrong with not being "RESTful" at all; just don't call such services/APIs REST then. Also, developing REST certainly involves more overhead than typical RPC-style interactions.

Testing microservices?

I know this question is a little subjective, but I am lost on what to do here. At the moment I am using Go + Go-kit to write some microservices. I'd like to test the endpoints of these microservices in an integration-test fashion, but I am unsure how to go about it. The only thing I can think of is to have shell scripts that hit the endpoints and check for responses. But this seems like a kludge and not a really smart practice. I feel like there should be a better way to do this. Does anyone have any suggestions?
An alternative approach to end-to-end testing is Consumer-Driven Contract (CDC).
Although it is useful to have some end-to-end tests, they have some disadvantages, such as:
the consumer service must know how to start the provider service. This sounds like unnecessary knowledge, likely difficult to maintain when the number of services starts ramping up;
starting up a service can be slow. Even if we're only talking a few seconds, this adds overhead to build times. If a consumer depends on multiple services, it all starts adding up;
the provider service might depend on a data store or other services to work as expected. This means that now not only the provider needs to be started but also a few other services, maybe a database.
The idea of CDC is described shortly as:
The consumer defines what it expects from a specific request to a service
The provider and the consumer agree on this contract
The provider continuously verifies that the contract is fulfilled
This information is taken from here. Read more in that article; it can be useful even though it is specific to Java.
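To give a flavour of what such a contract can look like, here is a hypothetical Pact-style contract file (service names, paths, and fields are invented for illustration):

{
  "consumer": { "name": "billing-ui" },
  "provider": { "name": "invoice-service" },
  "interactions": [
    {
      "description": "a request for invoice 42",
      "request": { "method": "GET", "path": "/invoices/42" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "total": 99.5 }
      }
    }
  ]
}

The provider's build then replays these interactions against the real service and fails if the responses no longer match the contract.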
You can do this in a standard Go unit test using the httptest package. This allows you to create mock Request and ResponseWriter objects that can be passed to any Handler or HandleFunc. You create the appropriate Request, pass it to your handler, then read the response out of the ResponseRecorder and check it against the expected response.
If you're using the default mux (calling http.Handle() to register handlers) you can test against http.DefaultServeMux. I've used it for microservices in the past with good results. Works for benchmarking handlers, routing, and middleware as well.
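As a minimal sketch (the handler, route, and response body are placeholders, not your actual service code), such a test might look like this:

package service

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// helloHandler stands in for one of your service's endpoint handlers.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"message":"hello"}`))
}

func TestHelloHandler(t *testing.T) {
	// Build an in-memory request and a recorder that captures the response.
	req := httptest.NewRequest(http.MethodGet, "/hello", nil)
	rec := httptest.NewRecorder()

	// Call the handler directly; no network listener is involved.
	// To exercise routing as well, register the handler with http.HandleFunc
	// and call http.DefaultServeMux.ServeHTTP(rec, req) instead.
	helloHandler(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d", rec.Code)
	}
	if !strings.Contains(rec.Body.String(), "hello") {
		t.Errorf("unexpected body: %q", rec.Body.String())
	}
}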
You should always use Go's native testing framework to test each individual service (please, no shell scripts!). httptest seems fine, but I would argue it is helpful to have finer-grained test boundaries: you should really have one _test.go file for each functional block of your code. Smaller tests are easier to maintain.
In terms of overall integration tests that involve multiple microservices, you shouldn't do them at development time. Set up a staging area and run the tests over there.
My 2 cents.

Can you use third party .dlls in SAPUI5?

I am redesigning a Reman service, which currently exists as a thick-client application that receives SAP optimization jobs (from SAP), calculates the best way to optimize product use (the Optimizer), and displays the best optimization on the client. Users can then either edit the optimization or submit it back to SAP.
I am trying to create a SAPUI5 application that either:
Reaches out to an external web server to run a small application (Optimizer) and returns the data back to the UI5 application.
or
Load the third party dll into SAP UI5 and call the Optimizer that way.
Is this possible? Can you use third party dlls in UI5?
SAPUI5, as the name says, is a UI framework. From your description, I understand that you're trying to pull business/processing logic into the UI. This is usually considered a bad idea. You should rather put the business logic (i.e., your optimizer) into a server-side component (anything that would ideally provide OData services) and use UI5 to create a front end for that.
It appears that in both solutions you proposed, the business logic is on the server, which is a good practice.
Although it isn't impossible to call a DLL from JavaScript, it isn't a very good idea, because there is no way to make this browser-independent. There may even be incompatibilities between various versions of the same browser when calling DLLs.
It would by far be the preferred way to call the optimizer webservice from the UI5 application. In fact, UI5 is completely designed to facilitate calling web-services and provides various components that will help you to make the actual call and bind the returned data to user-interface controls.
It is possible as long as you have the DLL registered on the machine that is running the UI5 application and you're using JScript for it.

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API for my application from day one, is it a good idea to use that API as the interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to weigh the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort of building an API. Often that simply means more than one client, as the benefits of having an API become evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code-maintenance overhead, and any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architectural style, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a web service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined service interface on the server that is accessed by a gateway on the client. You would then use POCO DTOs (Plain Old CLR Objects used as Data Transfer Objects) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the client gateway and service interface libraries.
Whether you want to go through the effort of mapping your DTOs to client and server domain models really depends on how big your project/app is. For large applications, the general approach on the client would be to map your DTOs to your UI models and have your UI views bind to those. On the server, you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
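The idea is language-agnostic; as a minimal sketch in Go (the types and fields are invented for illustration), the wire-level DTO stays separate from the domain model it maps to:

package dto

// CustomerDTO is the wire representation, shaped for the service interface.
type CustomerDTO struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// Customer is the server-side domain model, free to evolve independently.
type Customer struct {
	ID           int
	FullName     string
	EmailAddress string
}

// toDTO maps the domain model to its wire representation at the service boundary.
func toDTO(c Customer) CustomerDTO {
	return CustomerDTO{ID: c.ID, Name: c.FullName, Email: c.EmailAddress}
}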
REST is an architectural style which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit compared to RPC / document-centric web services. In short, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred content type indicated by your HTTP client (i.e., in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g., /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach that promotes a loosely coupled architecture and reuse; however, it requires more effort to adhere to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
If you're still evaluating which web service technology to use, you may want to consider my open-source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling every web service you create to be called via SOAP 1.1/1.2, XML, and JSON endpoints automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, e.g., native desktop or web app.
My recent preference, which is based on Java EE 6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
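Concretely, that scalability falls out of standard HTTP caching headers; a response like the following (values are illustrative) can be stored and re-served by any intermediary cache without touching the web application:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: public, max-age=3600
ETag: "abc123"

{"catalog": []}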
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true, it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info, and it's very extensible, so you can add a lot of custom metadata and still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud (Azure) platform to talk between the data producers and consumers. Just a thought.

How to Test an Undocumented Web Service?

I came across this question recently; could anyone please help me with what my approach as a tester should be?
Suppose there is a web service whose functionality has been changed and no documentation of it is available. What would your approach to testing it be?
Update: Does the same answer hold if database functionality changed and there is no documentation?
It seems you might be asking one of two different questions:
1) How to discover the API of a black-box web service.
In this case, the best source would be the source code of the web service itself (given the absence of documentation); alternatively, look at existing clients, or at the ?wsdl of the service.
2) How to discover what are correct and incorrect responses from the web service.
For this you need either requirements, documentation, or a correct client. The most likely to exist in this case is probably a client. Alternatively, the web service might implement some function whose results can be confirmed externally.
You can't test something with no documentation. How would you know what results to expect?
Maybe you're looking for "documentation" in the wrong place. Somebody made these changes. They had some information telling them what changes to make to the database and to the service. There may even be a requirements document, but maybe also some design documents.
Get those, and use them to figure out what changed. Use that information to decide how to change your tests.
If you are using the service in a useful way, then presumably you have some calls which return some known results, even though this may not be documented. If this is the case then I would write tests which validate my expectations of the service as it is currently. Then at least if changes are made you'll have more chance of knowing which bits have changed that affect you.
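One way to capture those expectations is a characterization test: call the service as it behaves today and pin the result, so later changes become visible. A minimal sketch in Go (the URL and expected body are hypothetical):

package service

import (
	"io"
	"net/http"
	"testing"
)

func TestServiceStillBehavesAsObserved(t *testing.T) {
	// Hypothetical endpoint of the undocumented service under test.
	resp, err := http.Get("https://example.org/api/status")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("reading body failed: %v", err)
	}

	// The expected value is whatever the service returned when this test was
	// written; a failure here signals changed behaviour, not necessarily a bug.
	if got, want := string(body), `{"status":"ok"}`; got != want {
		t.Errorf("response changed: got %s, want %s", got, want)
	}
}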
Generally speaking, a web service provides a consistent contract between the providing service and callers. It specifies that whilst the back-end implementation might change, the interface for the service will remain consistent.
If you are interested in discovering what functions the service makes available, it may well provide metadata that documents its available functions and message types. Usually this is accessible by appending "?wsdl" to the web service URL, although other schemes exist.
Once you have a good idea of the available functions, you can begin to invoke them through your testing framework and evaluating the responses in accordance with your usual test processes.