In a SeedStack application, can an application service be packaged in the web module?

For our web application we have written most of our code inside resource classes. Now we want to unit test the web module with JUnit, using "simple integration tests".
But with this kind of packaging we can only test our repositories, not the finders and other business logic.
Do we need to move our code from the resource classes into application services that can be injected in unit tests? If so, can we write these services in the web module? Normally we write these services in the app module, but in that case the services don't have access to the finders present in the web module.

Your resource classes should only contain the code necessary to expose your application's REST API: no business logic, no application (use-case) logic, no data access. A resource class should only use other components, such as repositories, services and finders, to achieve the desired result.
In SeedStack projects, especially if you're using the business framework, it is recommended that you respect the DDD layer model:
The domain layer is where the business is expressed, by aggregates containing domain objects (entities and value objects) and by domain services.
The application layer is responsible for driving the workflow of the application, executing the use cases of the system through application services. This layer can also be tested by unit tests and by simple integration tests.
The interface layer handles the interaction with other systems. In your case, this is done with REST resources. These resources can rely on finders when necessary to query the database.
You can test any of these components with unit tests and simple integration tests except for the REST resources since they depend upon a Web runtime environment. These can only be tested in Web integration tests (with Arquillian).
You can find an example of a finder tested by simple integration tests here and here.
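As a framework-free illustration of why this split makes the application layer testable (all names here are invented, not actual SeedStack APIs): an application service that depends only on interfaces can be exercised in a plain test, with a hand-rolled fake standing in for the repository, and no Web runtime anywhere in sight.

```java
// Minimal sketch of the layering described above; names are illustrative.
// The application service depends only on an interface, so tests can
// substitute an in-memory implementation for the real persistence.

import java.util.HashMap;
import java.util.Map;

interface OrderRepository {                 // domain-layer port
    void save(String id, int quantity);
    Integer find(String id);
}

class OrderService {                        // application-layer service
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    // Use-case logic: reject non-positive quantities, otherwise persist.
    boolean placeOrder(String id, int quantity) {
        if (quantity <= 0) {
            return false;
        }
        repository.save(id, quantity);
        return true;
    }
}

// Test double: an in-memory repository used instead of a database.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Integer> store = new HashMap<>();
    public void save(String id, int quantity) { store.put(id, quantity); }
    public Integer find(String id) { return store.get(id); }
}
```

In a real SeedStack project the injection would be handled by the framework's test runner, but the principle is the same: the service never references a Web environment, so it stays testable outside one.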

Related

What does "Swagger server stub" mean?

What does the term Server Stub mean in the context of the Swagger ecosystem? How is it used?
From a Swagger tutorial:
With SwaggerHub, you can easily generate a server stub (an API implementation stub) for Node.js, ASP.NET, JAX-RS, and other servers and frameworks. The server stub is a good starting point for implementing your API – you can run and test it locally, implement the business logic for your API, and then deploy it to your server.
https://app.swaggerhub.com/help/apis/generating-code/server-stub
and a stub is:
A method stub, or simply stub, in software development is a piece of code used to stand in for some other programming functionality. A stub may simulate the behavior of existing code (such as a procedure on a remote machine; such methods are often called mocks) or be a temporary substitute for yet-to-be-developed code. Stubs are therefore most useful in porting, distributed computing, as well as general software development and testing.
https://en.wikipedia.org/wiki/Method_stub
Stubbing the API means creating a mock that serves the examples described in the Swagger file.
This mock can be generated for specific languages/frameworks.
Server stubbing can be quite powerful depending on the backend platform and framework you plan to use for your API.
For example, you may choose Apache (common in Linux environments) or ASP.NET (common for IIS). The server "stubs" being generated will typically be a deployable library for that specific platform. What you typically get is:
Routing to your business logic. The framework will handle the HTTP specification, but actually mapping from a "controller" to your service layer is being handled by the code generator, based on your API specification.
Serialization and Deserialization of your models (applies to strongly-typed languages like Java/C#).
AuthN/AuthZ may be handled, to some degree, based on the framework's support for your API's chosen auth scheme.
tl;dr: A server stub is intended to be a ready-to-deploy application that routes HTTP requests to your actual business logic on the backend.
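Stripped of the routing and serialization glue, a generated server stub boils down to something like the following sketch (plain Java, invented names; real generators such as Swagger Codegen emit much more scaffolding): an API interface derived from the specification, plus a placeholder implementation you later replace with real business logic.

```java
// Illustrative shape of a generated server stub (names are invented,
// not actual Swagger Codegen output).

interface PetApi {                       // derived from the API specification
    String getPetById(long id);
}

// The "stub": runnable and testable immediately, but it only returns
// placeholder data until you fill in the business logic.
class PetApiStub implements PetApi {
    public String getPetById(long id) {
        return "{\"id\": " + id + ", \"name\": \"placeholder\"}";
    }
}
```

The value is that routing, models and a deployable skeleton already exist; implementing the API becomes a matter of replacing the placeholder bodies.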
From my experience and that of my peers, a stub is a mock function or placeholder function whose proper implementation you fill in later.

Is there any way I can enforce an "API contract" when testing my web app's API and UI separately?

I'm developing an Ember.js app with a Phoenix API. I have followed someone's advice to keep the UI and API as separate projects, and I know that I can use ember-cli-mirage to mock my API during development and testing. But this makes me really nervous. I'm used to having a suite of integration tests that tests the UI and API together. I know for a fact that one day me or another developer is going to make a breaking change in one of the projects, and we won't realise it until users start complaining.
On the other hand, I really like the idea of mocking the API in the client where Ember.js is running. It should make development and testing really fast.
Is there a way that I can extract a high-level description of my API end points, and use that as a sanity check to make sure that my mocked API fulfills all of the requirements? For example, if I add or remove a route in my API server, I want my Ember.js tests to fail immediately if the mocked API doesn't match those changes. Because I know that this is going to happen one day. It's especially concerning if I want to use continuous deployment after successful builds.
Or should I just start up a real API server on the CI machine, and run my tests against that?
I kind of like the idea of enforcing an API contract, though. I could also reuse the principle in any future mobile or desktop apps. You get some guarantee of consistency without having to install a ton of dependencies and spin up a real API.
Another idea: Maybe I write a set of API acceptance tests, but run them against both the real and the mocked API. And then I could include the mocked API code (ember-cli-mirage) inside my main API repo, and link it into the Ember.js repo as a submodule.
How are people currently approaching this issue?
Your Ember tests should focus on the behavior of the client application itself, not the implementation details of your API. While it is more trouble to maintain a separate mocked instance of your API logic in ember-cli-mirage, in reality you should only be adding Mirage endpoints and behavior as your Ember tests require it.
If you add a route to your API server, and no Ember code interacts with it, there should be no need to write an Ember test that consumes a mocked Mirage endpoint.
If you remove a route or change behavior in your API, it makes sense that you would want any affected Ember tests to immediately fail, but rewriting those Ember tests and Mirage endpoints to mirror the new behavior is just the cost of doing business.
It's more work in the long run, but I think your tests will be more meaningful if you treat the API and your mocked endpoints in Mirage as separate concerns. Your Ember tests shouldn't test whether your server works - they should only verify that your Ember code works given known constraints.
Perhaps one way to avoid omissions in your test suites is to enforce a (human) policy wherein any change to the API's behavior is specced out in a formal testing DSL like Gherkin, at least to document the requirements, and then use that as the single reference for writing the new tests and code that will implement the changes.
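The asker's idea of running one acceptance suite against both the real and the mocked API can be sketched in any language (Java here; every name is invented for illustration): express the contract as assertions against an interface, then run the same checks against each implementation.

```java
import java.util.ArrayList;
import java.util.List;

// The contract both implementations must honour (names invented).
interface TodoApi {
    List<String> listTodos();
    void addTodo(String title);
}

// Shared contract test: in this scheme you would run it against the
// mock in the UI project's CI, and against a client for the real API
// in a separate CI job, so drift between the two fails a build.
class TodoApiContract {
    static void verify(TodoApi api) {
        int before = api.listTodos().size();
        api.addTodo("write contract tests");
        List<String> after = api.listTodos();
        if (after.size() != before + 1) throw new AssertionError("add not reflected in list");
        if (!after.contains("write contract tests")) throw new AssertionError("title missing");
    }
}

// A mock implementation standing in for ember-cli-mirage's role.
class MockTodoApi implements TodoApi {
    private final List<String> todos = new ArrayList<>();
    public List<String> listTodos() { return new ArrayList<>(todos); }
    public void addTodo(String title) { todos.add(title); }
}
```

If the real API changes a route or a payload, the contract run against it fails; if the mock lags behind, the same suite flags the mismatch.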

Need advice about using the repository pattern in an n-tier application

I have a web application that is developed using ASP.NET MVC.
The application follows an n-tier architecture, and I have divided it into 4 different projects: Model, Core, Framework and the web application.
Model, Core and Framework are DLLs. The Model contains just my POCO classes; the Core contains my DbContext, repositories and Unit of Work implementations; and the Framework project contains classes used directly by my MVC web application, such as action-link extensions, custom view engines, etc.
In addition, in the Framework project I created a service class that makes method calls to the repositories in my Core DLL; the methods of this service class are called by my web application.
My question is: Is it ideal to channel method calls from the web application to the repository through the service class in my Framework DLL, or just make direct calls to the Core DLL?
Don't add an abstraction layer unless you require it. If you don't have a strong reason to add a service layer in the middle, you will end up implementing the Poltergeist anti-pattern: a class whose sole purpose is to pass information to another object.
In general, calling your repository directly is perfectly fine, so analyze whether you foresee any particular restriction that would disallow this scheme.
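A minimal sketch of the trade-off (Java here rather than C#, and every name is invented): a pass-through service adds a layer but no behavior, which is exactly the Poltergeist shape; until the layer earns its keep with validation, transactions or orchestration, the direct call says the same thing with less code.

```java
interface CustomerRepository {
    String findName(long id);
}

// Poltergeist: exists only to forward the call, adding no logic of its own.
class CustomerService {
    private final CustomerRepository repository;
    CustomerService(CustomerRepository repository) { this.repository = repository; }
    String findName(long id) { return repository.findName(id); }  // pure pass-through
}

// The controller calling the repository directly conveys the same intent.
class CustomerController {
    private final CustomerRepository repository;
    CustomerController(CustomerRepository repository) { this.repository = repository; }
    String show(long id) { return repository.findName(id); }
}
```

The moment the service would add real use-case logic, the extra layer stops being a Poltergeist and becomes worth having.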

What is the benefit of service-logic testing and client-logic testing when testing WCF web services?

I develop WCF web services for an enterprise project. I use NUnit to test the service's business-logic layer on the server side, and on the client side I use WCFTestClient to invoke the web service methods.
I have to automate the tests for my project, but I really don't know which approach is better.
I suggest both. Service-logic testing is important from the perspective of the service itself: it must work just as expected, and that's what you test.
Client testing is also important from the perspective (as the name implies) of the service's clients. Client testing helps you design clear and usable service APIs; when you test what your service's users will be doing, you get another perspective on the situation.

How to mock web service call in a WF workflow?

I'm implementing a WCF web service based on WF. This web service consumes other web services which I'm not in charge of. So basically my service workflow contains several Send activities.
I'm following the TDD approach, so the service implementation is to be covered by unit tests. I want to test proper invocation of 3rd party services.
In a non-workflow case I would mock the external services via NMock. But in my case I cannot control the instantiation of the workflow instance, and I have no idea how to trick the Send activities into using the mock objects instead of the real service endpoints.
Although the Unit Testing Workflows And Activities article on MSDN mentions mocks, I couldn't find any complete example of mocking the remote end of a Send activity.
Any idea on how to do that?
Please try the Moles framework: http://research.microsoft.com/en-us/projects/pex/
There are samples showing how to mock the SharePoint service, and I believe the same trick should apply to WF workflows.
I have tried mocking SqlConnection, Entity Framework and web service calls, and it works very neatly. Basically, it can mock almost any .NET object.
Using ServiceAgent wrappers for your web services would be one possible way of doing it.
This is a pattern I have followed in previous projects of mine.
Since the agents are interface-based, you can easily mock out the services.
There are other advantages to this pattern (besides unit testing), including being able to abstract your application from external dependencies to a certain extent. However, it does add the overhead of creating another class layer on top of the services.
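The wrapper idea can be sketched as follows (Java rather than C#, and all names are invented; in WCF the agent would be an interface over the generated proxy): the code under test depends only on the agent interface, so tests substitute a canned double without ever touching a real endpoint.

```java
// Interface-based service agent: the only type the rest of the code sees.
interface WeatherServiceAgent {
    int currentTemperature(String city);
}

// The production implementation would wrap the real web-service proxy;
// tests use a canned double instead.
class FakeWeatherAgent implements WeatherServiceAgent {
    public int currentTemperature(String city) { return 21; }  // canned response
}

// Code under test depends only on the interface, never on the proxy.
class TravelAdvisor {
    private final WeatherServiceAgent weather;
    TravelAdvisor(WeatherServiceAgent weather) { this.weather = weather; }
    String advise(String city) {
        return weather.currentTemperature(city) >= 15 ? "pack light" : "pack warm";
    }
}
```

This is the extra class layer the answer mentions: one interface and one wrapper per external service, in exchange for code that is mockable everywhere.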