Programmatic interface for CasperJS / PhantomJS

I plan on developing a CasperJS component which will be part of a larger system.
The CasperJS component will need to have a programmatic interface so that I can pass in data to be used and retrieve results.
I see that PhantomJS has a WebServer module. Is it possible to implement a REST API with that module?
I see that the WebServer module only handles something like 10 concurrent requests. Can I increase this, or create some kind of multi-process, load-balanced solution for higher throughput?
EDIT: Apologies for the broad question. More specifically, how can I implement a REST API to communicate with CasperJS?
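For reference, a minimal sketch of what the PhantomJS webserver module looks like when wrapped around a CasperJS script; the port, the /jobs route, and the response payload are placeholders, not an established API:

    // run with: casperjs server.js (the webserver module comes from PhantomJS)
    var webserver = require('webserver');
    var server = webserver.create();

    var ok = server.listen(8585, function (request, response) {
        if (request.method === 'POST' && request.url === '/jobs') {
            var payload = JSON.parse(request.post || '{}'); // data passed in by the caller
            // ...kick off the CasperJS navigation/scraping work here using `payload`...
            response.statusCode = 200;
            response.setHeader('Content-Type', 'application/json');
            response.write(JSON.stringify({ accepted: true, input: payload }));
        } else {
            response.statusCode = 404;
            response.write('Not found');
        }
        response.close();
    });

    if (!ok) {
        console.log('Could not bind to port 8585');
        phantom.exit(1);
    }

Because the module serves requests from a single process, a common alternative for higher throughput is to put an ordinary web server (Node, nginx, etc.) in front and have it spawn casperjs child processes per job.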


Can you use third party .dlls in SAPUI5?

I am redesigning a Reman service, which currently exists as a thick client application that receives SAP Optimization Jobs (from SAP), calculates the best way to optimize product use (the Optimizer), and displays the best optimization on the client. Users can then either edit the optimization or submit it back to SAP.
I am trying to create a SAPUI5 application that either:
Reaches out to an external web server to run a small application (Optimizer) and returns the data back to the UI5 application.
or
Loads the third-party DLL into SAPUI5 and calls the Optimizer that way.
Is this possible? Can you use third-party DLLs in UI5?
SAPUI5 - as the name says - is a UI framework. From your description, I understand that you're trying to pull business/processing logic into the UI. This is usually considered a bad idea. You should rather put the business logic (i.e. your Optimizer) into a server-side component (ideally one that provides OData services) and use UI5 to create a front end for that.
It appears that in both solutions you proposed, the business logic is on the server, which is a good practice.
Although it isn't impossible to call a DLL from JavaScript, it isn't a very good idea, because there is no way to make this browser-independent. There may even be incompatibilities between different versions of the same browser when calling DLLs.
Calling the optimizer web service from the UI5 application would by far be the preferred way. In fact, UI5 is designed around calling web services and provides various components that help you make the actual call and bind the returned data to user-interface controls.
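As a rough illustration of that preferred direction, here is a minimal UI5-side sketch; the /optimizer/run URL and the payload shape are assumptions about your server-side component, not a real API:

    sap.ui.define(["sap/ui/model/json/JSONModel"], function (JSONModel) {
        "use strict";

        var oResultModel = new JSONModel();

        function runOptimization(oJob) {
            // POST the optimization job to the server-side Optimizer and
            // put the result into a model that UI5 controls can bind against.
            return jQuery.ajax({
                url: "/optimizer/run",
                type: "POST",
                contentType: "application/json",
                data: JSON.stringify(oJob)
            }).done(function (oData) {
                oResultModel.setData(oData);
            });
        }

        return { runOptimization: runOptimization, model: oResultModel };
    });

If the server-side component exposes OData instead, the same idea applies with sap.ui.model.odata.v2.ODataModel and binding.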
It is possible as long as you have the DLL registered on the machine that runs the UI5 application and you're using JScript to call it.

Is testing a service a functional test or an integration test

We have a .Net component that provides functionality. We have a Restful Web API service that the world will use to call that functionality. We wrote tests that use OWIN to call into our Web API controllers.
In the past, I had always called these "integration tests", because the service is a separate component. However, another developer, who I respect, told me that this is not an integration test, but is instead a "functional test".
I looked at the definition of functional testing and the definition of integration testing, and neither one was a clear winner to me for what a test of a RESTful service should be called.
Is it integration or functional? Are there any authoritative sources that can be used to definitively answer this question (because I don't want my question to be closed for "soliciting debate")?
Functional testing is a black box exercise, as stated in the Functional testing definition link you posted. This means that the testing occurs without knowledge of the internal workings of the system (such as the code). You would be interested in whether the API works as expected end-to-end; does the user receive the correct response when they use the API? If the REST API is being tested outside of your system then it is a functional test.
If the API is being tested from your code base then this is testing the integration between two or more modules and thus classifies as an integration test. This might involve testing whether a dependent module is sending the correct data, receiving the expected data and in the correct format.
For more information check out anything from ISTQB which is a recognised body for software testing. Here is a link to an ISTQB glossary: http://www.software-tester.ch/PDF-Files/ISTQB%20Glossary%20of%20Testing%20Terms%202.4.pdf
When you call a rest service to verify that the service returns what it was designed to return, that is a functional test. You are testing the functionality of the service.
If you had a second service or UI that depended on this service, and your tests interacted with that second service or UI to verify that it can properly call the REST API and consume its data, that would be an integration test.
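To make the distinction concrete, a black-box functional test only needs the service's public HTTP surface; sketched here in Node.js rather than the OWIN test host from the question, with the base URL and route as placeholders:

    const assert = require('assert');
    const http = require('http');

    // Issue a plain HTTP GET against the deployed service, with no knowledge of its internals.
    function get(path) {
        return new Promise((resolve, reject) => {
            http.get('http://localhost:5000' + path, res => {
                let body = '';
                res.on('data', chunk => (body += chunk));
                res.on('end', () => resolve({ status: res.statusCode, body }));
            }).on('error', reject);
        });
    }

    (async () => {
        const res = await get('/api/widgets/1');
        assert.strictEqual(res.status, 200);            // the service responds
        assert.strictEqual(JSON.parse(res.body).id, 1); // and returns what it was designed to return
        console.log('functional test passed');
    })();

The same assertions run from inside your own code base (for example against an in-memory host) would be exercising the integration between modules rather than the externally observable behaviour.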

Where to put calls to 3rd party APIs in Apigility/ZF2?

I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. process them through a 3rd party API or modify the image quality etc., and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement it in the Resource classes. However, I feel they will become quite messy, obstructing a clear view of the API's interface in the code and creating a dependency on a foreign API. I also feel limited because each method corresponds to an API call.
What if there is a certain processing/computing time, meaning I cannot return the result directly in response to a GET request? I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.
You can use the zf-rest resource events to attach listeners containing your additional custom logic without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use Apigility-Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event here on line 361) to connect your listeners.
You can use a queueing service such as ZendQueue, or a third-party module built on top of ZendQueue, for managing that. You can find different ZF2 queueing systems/modules using Google.
By injecting the queueing service into your listener you can simply push your jobs directly onto your queue.
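The overall flow is: accept the upload, return 202 with a job id, process the document from a worker, then notify the app. Sketched here in plain Node.js rather than ZF2/PHP purely to show the shape; the route, queue, and worker are placeholders, not the zf-rest or ZendQueue APIs:

    const http = require('http');
    const crypto = require('crypto');

    const queue = [];   // stand-in for ZendQueue or any real queue backend

    http.createServer((req, res) => {
        if (req.method === 'POST' && req.url === '/documents') {
            const jobId = crypto.randomUUID();
            queue.push({ jobId });                      // a listener/worker picks this up later
            res.writeHead(202, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify({ jobId, status: 'queued' }));
            return;
        }
        res.writeHead(404);
        res.end();
    }).listen(8080);

    // Worker loop: processes queued jobs and notifies the app when done.
    setInterval(() => {
        const job = queue.shift();
        if (!job) return;
        // ...call the third-party API / adjust image quality here...
        // ...then send a push notification referencing job.jobId...
    }, 1000);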

Ember adapter and serializer

I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage but I want to setup my front-end and back-end in as much standard and clean way as possible before going on with developing further API and ember models.
I could not find a comprehensive guide about serialization and deserialization done by the store, but I read the documentation about DS.ActiveModelSerializer and DS.ActiveModelAdapter (which says the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back end and front end are only for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or the API gets consumed by other parties in the future, so that would suggest you configure your Ember app to talk to your backend. But again, I think this is a subjective question/answer 'cause no one but you and your team can tell what's good for the scenario you are (or might be) in while the app gets created.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers sit behind another path segment like localhost:3000/api/products, set namespace: 'api'; otherwise this isn't necessary), and similarly the host if you're using CORS. If you're using ember-cli, you may also want to set the content security policy in the environment config to allow connections to other domains. Adapters can be defined per model as well. But again, all of this is subjective, as it depends on what you want/need to achieve.
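For example, with ember-cli and an ActiveModelSerializer-based backend, the application adapter is often all you need to touch. The 'api' namespace and the host below are assumptions about how your Rails/Grape API is mounted, and in newer ember-data versions ActiveModelAdapter lives in the separate active-model-adapter addon:

    // app/adapters/application.js
    import DS from 'ember-data';

    export default DS.ActiveModelAdapter.extend({
      namespace: 'api',               // only if the API lives under /api/...
      host: 'http://localhost:3000'   // only needed for a different origin (CORS)
    });

    // app/serializers/application.js
    import DS from 'ember-data';

    export default DS.ActiveModelSerializer.extend({
      // per-attribute or per-relationship tweaks go here if the payload deviates
    });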

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API to my application from day one, is it a good idea to use that API as an interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort of building an API. Often that simply means more than one client, as the benefits of having an API become evident when you come to release changes or bug fixes. Also weigh the added complexity and the extra code-maintenance overhead against any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a web service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined Service Interface on the server, accessed by a Gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid dealing with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications, the general approach on the client would be to map your DTOs to your UI models and have your UI views bind to those. On the server, you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
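A bare-bones sketch of the Gateway idea, written in JavaScript purely for illustration (the base URL, routes, and DTO shapes are made up): the rest of the client only ever sees plain DTO objects, while the gateway owns HTTP and (de)serialization.

    function createGateway(baseUrl) {
        async function send(method, path, dto) {
            const res = await fetch(baseUrl + path, {
                method: method,
                headers: { 'Content-Type': 'application/json' },
                body: dto ? JSON.stringify(dto) : undefined
            });
            return res.json();   // response DTO, ready to map onto UI models
        }
        return {
            getCustomerReport: (id) => send('GET', '/customers/' + id + '/reports'),
            submitOrder: (orderDto) => send('POST', '/orders', orderDto)
        };
    }

    // Usage: const gateway = createGateway('https://api.example.com');
    //        const report = await gateway.getCustomerReport(1);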
REST is an architectural pattern which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit compared to RPC / document-centric web services. In not so many words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred Content-Type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach promoting a loosely coupled architecture and re-use; however, it requires more effort to adhere to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
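To make the resource-oriented URLs and content negotiation concrete, here is a small sketch (Express is used purely as an example stack; the routes and data are invented):

    const express = require('express');
    const app = express();

    // /customers/:id/reports rather than /GetCustomerReports?Id=1
    app.get('/customers/:id/reports', (req, res) => {
        const reports = [{ id: 1, customerId: req.params.id }];
        res.format({
            'application/json': () => res.json(reports),    // Accept: application/json
            'application/xml': () =>
                res.type('application/xml')
                   .send('<reports><report id="1"/></reports>')  // Accept: application/xml
        });
    });

    app.listen(3000);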
If you're still evaluating which web service technology to use, you may want to consider my open-source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called via SOAP 1.1/1.2, XML and JSON automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
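For what it's worth, the shape of that wrapper is simple; the route name and the body of runDailyMaintenance() below are illustrative only:

    const express = require('express');
    const app = express();

    async function runDailyMaintenance() {
        // pull data from the remote source, create/update local rows,
        // delete stale data, then tell the external system to do its part
    }

    // Non-RESTful by design: a POST that triggers a process rather than creating a resource.
    app.post('/maintenance/daily-run', (req, res) => {
        runDailyMaintenance().catch(console.error);   // fire and forget
        res.status(202).json({ accepted: true });
    });

    app.listen(3000);

    // A timer, an external event handler, or a workflow step can all just POST to it:
    //   curl -X POST http://localhost:3000/maintenance/daily-run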
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true: it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it is not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info, and it's very extensible, so you can add a lot of custom metadata and still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud (Azure) platform to talk between data producers and consumers. Just a thought.