How to E2E test a REST API that has some websocket endpoints?

I am looking to automate testing for a REST API that has some websocket endpoints.
The API feels similar to an application that can add/list serially connected sensors (typical REST CRUD) and then allows direct connections to them (websocket). There is currently no frontend, and I would like to perform E2E and integration tests with both real and mocked hardware.
Minimally, the solution would:
- run via an agent on a testbed that I or the tooling itself created
- do the typical CRUD tests (add sensor, edit sensor, etc.)
- open a websocket connection (e.g. '/websock?sensorID=10') and then drive the typical expect-script behavior (wait for, read, write, assert). An echo websocket would be an ideal first test; a sketch of such a test follows below.
The ideal solution would additionally:
- incorporate typical browser-focused testing in the future
- grab data from additional servers to verify behavior (e.g. after a REST PUT, query the SQL database directly to verify)
- integrate into the typical JIRA DevOps flow
I have looked at Selenium and Cypress, but it is not clear if or how they can satisfy these requirements, as the documentation for both is almost entirely browser focused.
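For concreteness, the kind of echo test I have in mind would look something like the following sketch (TypeScript with Jest and the ws package; the base URL and the echo behavior are assumptions):

    // sensor-socket.test.ts - sketch only; URL and echo behavior assumed
    import WebSocket from "ws";

    const BASE = process.env.API_BASE ?? "ws://localhost:8080";

    test("echo endpoint returns what it is sent", async () => {
      const ws = new WebSocket(`${BASE}/websock?sensorID=10`);

      // wait for: the connection to open
      await new Promise<void>((resolve, reject) => {
        ws.once("open", () => resolve());
        ws.once("error", reject);
      });

      // read: capture the next incoming message
      const reply = new Promise<string>((resolve) =>
        ws.once("message", (data) => resolve(data.toString()))
      );

      // write, then assert
      ws.send("ping");
      expect(await reply).toBe("ping");

      ws.close();
    });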

Related

Should I mock APIs in end-to-end testing?

When you are doing e2e tests for your application, you want to test the whole application, not just portions of it the way unit or integration tests do.
But in some situations, people do mock APIs.
For example, when you have a massive microservice back-end that makes your e2e tests very slow, or when, besides your own API, you rely on third-party APIs that make your e2e tests fail occasionally.
If you only want to make sure that your front-end application works well, what should you do?
In my company, we have a massive system with a really heavy database, which makes e2e testing very ineffective. Is it right to mock APIs in such a scenario?
My understanding here is that if you want to test only your front-end application (which is not E2E testing, in my opinion), you can use unit tests instead. If you still want to test the user interface from the browser, then you can mock the APIs' responses, but that is still not E2E testing.
If you want to perform end-to-end testing, then you shouldn't mock any database or API call.
The exception here is a third-party API that is not under your control. In that specific case you can mock it to reduce the external dependencies in your tests, but if that third party changes and you are not aware of it, you won't notice while it's mocked. That said, if you mock third-party APIs, make sure you have good communication with the API provider so you are alerted to changes before your app fails.
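For illustration, a minimal sketch of stubbing a third-party API with the nock library (the host, path, and payload are invented; this works when the code under test runs in the same Node process as the tests, otherwise a standalone stub server is needed):

    import nock from "nock";

    // Stub a hypothetical third-party exchange-rate API so the test run
    // does not depend on its availability.
    nock("https://rates.example.com")
      .get("/v1/rates")
      .query({ base: "USD" })
      .reply(200, { base: "USD", eur: 0.92 });

    // Requests to https://rates.example.com/v1/rates?base=USD now receive
    // the canned payload; other hosts are untouched.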

Best way to do API shadowing on GCP

We are trying to transition from one microservice to another implementation. In order to properly test the new one in production, we are looking into shadowing production requests to the new service, then comparing the results and logging any discrepancies. I am looking for the best tool that can do this out of the box. Previously we managed to do it with nginx, but I would prefer an out-of-the-box solution like Google Cloud Endpoints or Apigee.
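For reference, what we previously did with nginx was roughly the following (simplified sketch; the upstream names are placeholders). Note that nginx discards the mirror subrequest's response, so comparing results still meant inspecting the shadow service's own logs:

    upstream old_service { server old.internal:8080; }
    upstream new_service { server new.internal:8080; }

    server {
        listen 80;

        location / {
            mirror /shadow;              # duplicate every request
            proxy_pass http://old_service;
        }

        location = /shadow {
            internal;
            proxy_pass http://new_service$request_uri;
        }
    }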

Is there a testing framework that will support GraphQL, web app testing, and mobile?

I have started to use Karate to test our mobile app, which uses GraphQL, and it is working well so far, though there is a bit of a learning curve for me as I am not a programmer by trade. But I need to look further into the future and find a framework that will also support automating tests for our custom web applications. I would think Selenium would be the way to go, so I am looking for a testing framework where I can test not only the GraphQL queries that our microservices/APIs expose, but also our web applications and our mobile app. We are an MS shop, but if need be, as with Karate, we can venture into a different stack. Thanks!
Disclaimer: dev of Karate here.
I don't know about mobile, but teams have been successful mixing Karate and Selenium / WebDriver into a single integrated script, which is possible because of Karate's Java inter-op.
There is a Stack Overflow answer recommending the above approach, and an update from the same team reporting success.
One more reason you may want to consider Karate is that it may be the only framework that allows you to re-use functional test scripts as performance tests, via Gatling integration (still in development).
Karate in my opinion is a "safe" choice as a testing framework, because it is technically Gherkin and "language neutral". This may be a surprise for those new to Karate, but it actually leans towards using JavaScript for scripting instead of Java.
Of course, it is worth mentioning that I have yet to find a framework that makes testing GraphQL as easy as Karate does.
A Selenium and REST Assured integration framework is the answer to your question. I created a test automation framework to test an Ionic application that uses a GraphQL API along with REST web services. I used TestNG and Cucumber in my framework to perform web-app-side validations, and REST Assured along with GSON to validate the GraphQL and REST services. You can easily integrate Appium into your framework alongside Selenium and REST Assured to cover mobile testing.

Why use OWIN TestServer?

This article shows how to host an entire web API stack in memory for testing using OWIN:
http://www.davidwhitney.co.uk/Blog/2015/01/07/testing-an-asp-net-webapi-app-in-memory/
Whereas this article shows using the OWIN TestServer to unit test controllers:
https://blog.jcorioland.io/archives/2014/04/01/using-owin-to-test-your-web-api-controllers.html
The difference I see is between the use of TestServer.Create and WebApp.Start<Startup>
What is the key difference and why would you choose one over the other?
Is it simply the difference between unit testing controller methods as web api calls versus end-to-end integration testing in memory?
When you do TestServer.Create<Startup>(), you start just an in-memory instance using your Startup class. The HttpClient inside TestServer is enough for in-memory integration testing. We start all our test servers inside one process, so this is not a limitation (currently four test servers run together).
When you do WebApp.Start<Startup>(Settings.WebApiUrl), you start a web app on the URL you provide. There is also another overload that accepts options: both URLs and settings.
We use this option only for specific cases, such as:
- Hosting a URL for a SignalR client - it won't work without a URL where it can run.
- Contract-based testing - verification of the contracts on the provider side. This can also only be done through a started WebApp. (We're using Pact.Net.)

Is there any way I can enforce an "API contract" when testing my web app's API and UI separately?

I'm developing an Ember.js app with a Phoenix API. I have followed someone's advice to keep the UI and API as separate projects, and I know that I can use ember-cli-mirage to mock my API during development and testing. But this makes me really nervous. I'm used to having a suite of integration tests that test the UI and API together. I know for a fact that one day I or another developer will make a breaking change in one of the projects, and we won't realise it until users start complaining.
On the other hand, I really like the idea of mocking the API in the client where Ember.js is running. It should make development and testing really fast.
Is there a way that I can extract a high-level description of my API endpoints and use it as a sanity check to make sure that my mocked API fulfills all of the requirements? For example, if I add or remove a route in my API server, I want my Ember.js tests to fail immediately if the mocked API doesn't match those changes, because I know that this is going to happen one day. It's especially concerning if I want to use continuous deployment after successful builds.
Or should I just start up a real API server on the CI machine, and run my tests against that?
I kind of like the idea of enforcing an API contract, though. I could also reuse the principle in any future mobile or desktop apps. You get some guarantee of consistency without having to install a ton of dependencies and spin up a real API.
Another idea: maybe I write a set of API acceptance tests, but run them against both the real and the mocked API (sketched below). Then I could include the mocked API code (ember-cli-mirage) inside my main API repo and link it into the Ember.js repo as a submodule.
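Concretely, I imagine the shared suite being parameterized by its target, something like this sketch (the endpoint and response shape are invented; the mocked run would have to execute where Mirage is active, since Mirage intercepts requests in-process rather than listening on a real port):

    // api-acceptance.test.ts - one suite, pointed at either backend
    // via an environment variable.
    const BASE = process.env.API_BASE ?? "http://localhost:4000";

    test("GET /api/posts returns a JSON list", async () => {
      const res = await fetch(`${BASE}/api/posts`);
      expect(res.status).toBe(200);
      expect(Array.isArray(await res.json())).toBe(true);
    });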
How are people currently approaching this issue?
Your Ember tests should focus on the behavior of the client application itself, not the implementation details of your API. While it is more trouble to maintain a separate mocked instance of your API logic in ember-cli-mirage, in reality you should only be adding Mirage endpoints and behavior as your Ember tests require them.
If you add a route to your API server, and no Ember code interacts with it, there should be no need to write an Ember test that consumes a mocked Mirage endpoint.
If you remove a route or change behavior in your API, it makes sense that you would want any affected Ember tests to immediately fail, but rewriting those Ember tests and Mirage endpoints to mirror the new behavior is just the cost of doing business.
It's more work in the long run, but I think your tests will be more meaningful if you treat the API and your mocked endpoints in Mirage as separate concerns. Your Ember tests shouldn't test whether your server works - they should only verify that your Ember code works given known constraints.
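For example, a Mirage endpoint would only come into existence when an Ember test needs it. A minimal sketch using the standalone miragejs API (the route and payload are invented; the ember-cli-mirage config file is analogous):

    import { createServer } from "miragejs";

    // Added only because an acceptance test renders the posts page;
    // nothing else about the real API is mirrored here.
    createServer({
      routes() {
        this.namespace = "api";
        this.get("/posts", () => [
          { id: "1", title: "Hello" },
          { id: "2", title: "World" },
        ]);
      },
    });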
Perhaps one way to avoid omissions in your test suites is to enforce a (human) policy wherein any change to the API's behavior is specced out in a formal testing DSL like Gherkin, at least to document the requirements, and then use that as the single reference for writing the new tests and code that will implement the changes.