Is there any way I can enforce an "API contract" when testing my web app's API and UI separately?

I'm developing an Ember.js app with a Phoenix API. I followed advice to keep the UI and API as separate projects, and I know I can use ember-cli-mirage to mock my API during development and testing. But this makes me really nervous. I'm used to having a suite of integration tests that exercises the UI and API together. I know for a fact that one day I or another developer will make a breaking change in one of the projects, and we won't realise it until users start complaining.
On the other hand, I really like the idea of mocking the API in the client where Ember.js is running. It should make development and testing really fast.
Is there a way I can extract a high-level description of my API endpoints and use it as a sanity check to make sure that my mocked API fulfils all of the requirements? For example, if I add or remove a route in my API server, I want my Ember.js tests to fail immediately if the mocked API doesn't match those changes. Because I know this is going to happen one day. It's especially concerning if I want to use continuous deployment after successful builds.
Or should I just start up a real API server on the CI machine, and run my tests against that?
I kind of like the idea of enforcing an API contract, though. I could also reuse the principle in any future mobile or desktop apps. You get some guarantee of consistency without having to install a ton of dependencies and spin up a real API.
Another idea: Maybe I write a set of API acceptance tests, but run them against both the real and the mocked API. And then I could include the mocked API code (ember-cli-mirage) inside my main API repo, and link it into the Ember.js repo as a submodule.
How are people currently approaching this issue?

Your Ember tests should focus on the behavior of the client application itself, not the implementation details of your API. While it is more trouble to maintain a separate mocked instance of your API logic in ember-cli-mirage, in reality you should only be adding Mirage endpoints and behavior as your Ember tests require it.
If you add a route to your API server, and no Ember code interacts with it, there should be no need to write an Ember test that consumes a mocked Mirage endpoint.
If you remove a route or change behavior in your API, it makes sense that you would want any affected Ember tests to immediately fail, but rewriting those Ember tests and Mirage endpoints to mirror the new behavior is just the cost of doing business.
It's more work in the long run, but I think your tests will be more meaningful if you treat the API and your mocked endpoints in Mirage as separate concerns. Your Ember tests shouldn't test whether your server works - they should only verify that your Ember code works given known constraints.
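For instance (a minimal sketch, assuming ember-cli-mirage's conventional `mirage/config.js`; the specific endpoints are hypothetical), you would only define the routes your Ember tests actually exercise:

```javascript
// mirage/config.js — define only what the Ember tests consume.
// `this.get` and `this.del` are ember-cli-mirage route helpers.
export default function () {
  this.namespace = 'api';

  // Consumed by the posts-list acceptance test
  this.get('/posts');

  // Consumed by the delete-post test. No other server routes are mocked,
  // so a test that hits an unmocked endpoint fails loudly rather than
  // silently passing against stale behavior.
  this.del('/posts/:id');
}
```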
Perhaps one way to avoid omissions in your test suites is to enforce a (human) policy wherein any change to the API's behavior is specced out in a formal testing DSL like Gherkin, at least to document the requirements, and then use that as the single reference for writing the new tests and code that will implement the changes.

Related

Postman Automated Collection Generation

Has anyone done an enterprise integration of their public API with Postman? Looking at the Postman pages, everything seems straightforward; however, I have some concerns:
I don't see a way to automatically install pre-request scripts. Pre-request scripts provide an easy, straightforward way to call the endpoints without going through the authentication step manually.
If you sync with GitHub, you'll need to give Postman full access. I'm not sure how people work around that.
You need to convert the Swagger definition to a Postman one. By default Postman supports limited nesting levels in the API schema, which means your API documentation will need an additional processing step.
So I don't know whether it's worth integrating API releases into Postman from the internal API management system, or whether a simple script on a virtual machine would do.

Should I mock APIs in end-to-end testing?

When you run e2e tests for your application, you want to test the whole application, not just portions of it as with unit or integration tests.
But in some situations, people do mock APIs.
For example, when you have a massive microservice back-end that makes your e2e tests very slow, or when, besides your own API, you rely on third-party APIs that make your e2e tests fail occasionally.
So if you only want to make sure that your front-end application works well, what should you do?
In my company, we have a massive system with a really heavy database, which makes e2e testing very ineffective. Is it right to mock APIs in such a scenario?
My understanding is that if you only want to test your front-end application (which is not E2E testing, in my opinion), you can use unit tests instead. If you still want to test the user interface from the browser, you can mock the APIs' responses, but that is still not E2E testing.
If you want to perform end-to-end testing, then you shouldn't mock any database or API call.
The exception is a third-party API that is not under your control. In that specific case you can mock it to reduce external dependencies in your tests, but if that third party changes and you are not aware of it, you won't notice while it's mocked. That said, if you mock third-party APIs, be sure you have fluent communication with the API provider so you get alerts about changes before your app fails.

How to E2E test a REST API that has some websocket endpoints?

I am looking to automate testing for a REST API that has some websocket endpoints.
The API has a feel similar to an application that can add/list serially connected sensors (typical REST CRUD) and then allow for the direct connection to them (websocket). There is currently no frontend and I would like to perform E2E and integration tests with real and mocked hardware.
Minimally, the solution would:
run via an agent on a testbed that I (or it) created
do the typical CRUD tests (add sensor, edit sensor, etc.)
open a websocket connection (e.g. '/websock?sensorID=10') and then run the typical expect-script behaviour (wait for, read, write, assert). An echo websocket would be an ideal first test.
The ideal solution would have this and:
incorporate the typical browser focused testing in the future
grab data from additional servers to verify behaviour (e.g. after a REST PUT, query the SQL database directly to verify)
integrate into the typical JIRA DevOps flow
I have looked at Selenium and Cypress, but it is not clear whether or how they can satisfy these requirements, as the documentation is almost entirely browser-focused.
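To make the expect-script requirement concrete, here is a framework-free sketch of the wait-for/read/assert part: a small helper that buffers incoming messages and lets a test await the next one with a timeout. The socket itself is abstracted behind an on-message callback, so the same helper would work with any websocket client (the `ws` npm package, or the browser `WebSocket`); the wiring shown at the bottom is an assumption, not a specific library's API.

```javascript
// Expect-script style helper: buffers incoming messages and lets a test
// await the next one with a timeout, then assert on its contents.
class MessageExpector {
  constructor() {
    this.queue = [];   // messages that arrived before anyone waited
    this.waiters = []; // pending waitFor() calls
  }

  // Wire this up as the socket's on-message handler.
  push(message) {
    const waiter = this.waiters.shift();
    if (waiter) waiter.resolve(message);
    else this.queue.push(message);
  }

  // Resolves with the next message, or rejects after `timeoutMs`.
  waitFor(timeoutMs = 1000) {
    if (this.queue.length > 0) return Promise.resolve(this.queue.shift());
    return new Promise((resolve, reject) => {
      const timer = setTimeout(
        () => reject(new Error('timed out waiting for message')),
        timeoutMs
      );
      this.waiters.push({
        resolve: (msg) => { clearTimeout(timer); resolve(msg); },
      });
    });
  }
}

// Sketch of the echo test, assuming a `socket` with send() and an
// on-message hook (e.g. from the `ws` package):
//   const expector = new MessageExpector();
//   socket.on('message', (m) => expector.push(m));
//   socket.send('ping');
//   assert.strictEqual(await expector.waitFor(), 'ping');
```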

Is there a testing framework that will support GraphQL, web app testing, and mobile?

I have started to use Karate to test our mobile app, which uses GraphQL, and it is working well so far, though it has been a bit of a learning curve for me as I am not a programmer by trade. But I need to look further into the future and find a framework that will also support automating tests for our custom web applications. I would think Selenium would be the way to go, so I am looking for a testing framework where I can test not only the GraphQL queries our microservices/APIs are written with, but also our web applications and our mobile app. We are an MS shop, but if need be, as with Karate, we can venture into a different stack. Thanks!
Disclaimer: dev of Karate here.
I don't know about mobile, but teams have been successful mixing Karate and Selenium / WebDriver into a single integrated script, which is possible because of Karate's Java inter-op.
There is a Stack Overflow answer recommending the above approach, and a follow-up answer from the same team reporting success.
One more reason you may want to consider Karate is that it may be the only framework that allows you to re-use functional test scripts as performance tests, via Gatling integration (still in development).
Karate in my opinion is a "safe" choice as a testing framework, because it is technically Gherkin and "language neutral". This may be a surprise for those new to Karate, but it actually leans towards using JavaScript for scripting instead of Java.
Of course, it is worth mentioning that I have yet to find a framework that makes testing GraphQL as easy as how Karate does.
A Selenium and REST Assured integration framework is the answer to your question. I created a test automation framework to test an Ionic application that uses a GraphQL API along with REST web services. I used TestNG and Cucumber in my framework to perform web-app-side validations, and used REST Assured along with GSON to validate the GraphQL and REST services. You can easily integrate Appium into your framework alongside Selenium and REST Assured to cater to the need for mobile testing.

Does Openshift Origin 1.1 REST API allow to create new applications based on templates?

We are developing a custom console to manage development environments. We have several application templates preloaded in OpenShift, and whenever a developer wants to create a new environment, we need to tell OpenShift (via the REST API) to create a new application based on one of those templates (oc new-app template).
I can't find anything in the REST API specification. Is there any alternative way to do this?
Thanks
There is no single API today that creates all of that in one go. The reason is that the create flow is intended to span multiple disjoint API servers (today, Kube and OpenShift resources can be created at once; in the future, individual Kube extensions). We wanted to preserve the possibility that a client is authenticated to each individual API group separately. However, that makes it harder to write simple clients like this, so it is something we plan on adding.
Today the flow from the CLI and WebUI is:
Fetch the template
Invoke the POST /processedtemplates endpoint
For each "object" returned, invoke the right create call.
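The three steps above can be sketched as follows. This is an assumption-laden illustration, not the official client: the `/oapi/v1` paths match the Origin v1 era but should be checked against your cluster, and the path construction in step 3 is oversimplified (real objects may live under different API groups, e.g. plain Kube resources under `/api/v1`). The HTTP layer is injected so the flow can be exercised against a real cluster or a stub.

```javascript
// Sketch of the CLI/WebUI flow: fetch template, process it, create objects.
// `request(method, path, body)` is assumed to return a parsed JSON body.
async function newAppFromTemplate(request, namespace, templateName, params) {
  // 1. Fetch the template
  const template = await request(
    'GET',
    `/oapi/v1/namespaces/${namespace}/templates/${templateName}`
  );

  // 2. Fill in parameter values and invoke the processedtemplates endpoint
  template.parameters = (template.parameters || []).map((p) =>
    params[p.name] ? { ...p, value: params[p.name] } : p
  );
  const processed = await request(
    'POST',
    `/oapi/v1/namespaces/${namespace}/processedtemplates`,
    template
  );

  // 3. For each returned object, invoke its create endpoint
  // (naive pluralization — a real client maps kind -> resource properly)
  for (const obj of processed.objects) {
    await request(
      'POST',
      `/oapi/v1/namespaces/${namespace}/${obj.kind.toLowerCase()}s`,
      obj
    );
  }
  return processed.objects.length;
}
```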