Here is a situation I want to test end-to-end, but I'm not sure of the best way. During a specific workflow, an action requires the backend to make a REST request. This request should never fail, but in the exceptional case that it does (network connectivity or unexpected downtime), I want to at least handle it gracefully, and I want to use Selenium to check that it is handled gracefully in the UI. However, from the UI, the design dictates that there should be no way to reach this error state through normal use.
The question is: should I build into the application some way of creating this exception via frontend actions, just so Selenium can check that it's handled gracefully? Would that make the test too synthetic to be useful? Or should I just not create an automated test for this requirement and pray that it never occurs?
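To make the question concrete, the kind of hook I have in mind would look something like the sketch below: a fault flag in the backend client that makes the REST request, honoured only when a test-environment configuration switch is on (C#, and every name here is invented for illustration).

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// All names are illustrative only.
public record SubmitRequest(string AccountId, bool SimulateDownstreamFailure);

public class DownstreamClient
{
    private readonly HttpClient http;
    private readonly bool faultInjectionEnabled;   // bound from configuration; always false in production

    public DownstreamClient(HttpClient http, bool faultInjectionEnabled)
    {
        this.http = http;
        this.faultInjectionEnabled = faultInjectionEnabled;
    }

    public async Task<HttpResponseMessage> SubmitAsync(SubmitRequest request)
    {
        // Selenium drives the frontend with a flag (e.g. a hidden query parameter the UI forwards)
        // that only has an effect when fault injection is switched on for the test environment.
        if (faultInjectionEnabled && request.SimulateDownstreamFailure)
            throw new HttpRequestException("Injected failure for the e2e error-handling test");

        return await http.PostAsJsonAsync("https://downstream.example/api/submit", request);
    }
}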
Related
I have a series of tests that depend on a step in the middle (such as creating an account). The API I'm using for this is a bit brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the run at that point when it fails, instead of waiting for TestCafe to fail the initial assertions of the next few tests that follow. Is there a way to get the test controller to stop, or to signal to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation; if it fails, this option will end the entire run.
CLI Documentation
API Documentation (under Parameters)
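With the CLI this looks something like the following (browser and test directory are placeholders):

testcafe chrome tests/ --stop-on-first-fail

When using the programmatic API instead, runner.run() accepts an equivalent stopOnFirstFail option, described under Parameters in the API documentation linked above.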
Our application has 35 web servers and around 100 different APIs running on them.
These APIs call each other internally and also execute independently.
We have automated test cases for around 30 APIs, but some of our tests fail because other APIs, on which the API under test depends, fail.
So how can our automated test cases tell us the reason for each test failure?
Example Scenario:
We have a test case that validates the API that fetches a user's bank account balance.
We hit this API through Rest Assured and try to assert the expected output. The request goes first to the ledger server, which internally hits an auth server to validate the request's authenticity, then hits a counter server to log the fetchBalance request, then hits several other servers to get the correct balance of the user, and then responds to our request.
The problem is that this chain may break at any point, and when it does, the ledger server always returns the same error string: "Something failed underhood". Debugging then becomes a challenge: we have to go to each server and search its logs to find the actual cause.
I want to write a solution that can trace the complete lifecycle of this request and report where it actually failed.
For this problem, you should be aware of the most common failure reasons, and then you can implement a strategy based on those reasons.
Example: when you send a request to the server, that API may involve security validations, several processing steps, and integrations with different components.
Identify the likely failure points and implement checkpoints against them:
If the request fails at security validation, there are specific error codes for that; write logic around them.
If it fails at a processing step, there is a limited set of possible reasons; check for them.
If it fails at an integration point, there will also be error codes; you can implement logic around those.
Validate the state of the data before each interaction with the server. For example:
assert expression1 : expression2
Here expression2 is evaluated and used as the error message when expression1 fails. (This is a Groovy example, but you can adapt it as needed.)
An example expression2 message could be something like: "Failure occurred when trying to send 'so-and-so' request!".
Can you have automated regression/integration tests for Azure Logic Apps?
And if you can, how? ... especially in the context of CI/CD builds and deployments
... and if you can't, why not!!
There isn't any out-of-the-box tooling yet to provide automated testing of Azure Logic Apps. We have a few customers who have followed one of the following patterns. There is also this article that goes into detail on how to create a Logic App deployment template:
After deployment (using a release management tool like Visual Studio Release Management), a series of unit tests are run (written in something like C#) to test the Logic App.
Since a logic app could have any kind of trigger (on queue item, on HTTP request), the code usually performs the action and asserts the result.
A logic app in the resource group that can run a series of basic tests in a workflow. This one requires a bit more chewing on, but the idea is that you have a workflow that uses connectors or nested logic app calls to perform basic validation tests (ensure connections are active, etc.).
It's something we have had discussions on from time to time, but we would love to know if you have any thoughts on what types of tooling/configuration you'd want for an app (remember that some apps "trigger" on something like a message in a queue or a file on FTP).
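As a minimal illustration of the first pattern, a post-deployment test (written in something like C# with NUnit) that fires an HTTP-triggered logic app and asserts the result might look like the sketch below; the trigger URL and payload are placeholders.

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class OrderLogicAppTests
{
    // Placeholder: the HTTP trigger callback URL copied from the deployed logic app.
    private const string TriggerUrl =
        "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<sas-signature>";

    [Test]
    public async Task Trigger_returns_success_for_valid_payload()
    {
        using var client = new HttpClient();
        var payload = new StringContent("{ \"orderId\": \"e2e-test-001\" }", Encoding.UTF8, "application/json");

        var response = await client.PostAsync(TriggerUrl, payload);

        // A workflow with a synchronous Response action returns 200; async patterns return 202 Accepted.
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK).Or.EqualTo(HttpStatusCode.Accepted));
    }
}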
I would like to share an approach to Logic App testing that my team has followed.
The first level of validation is the ARM template deployment status (ProvisioningState), which should not report any errors.
After that, we developed test automation using the Logic Apps SDK, which does the following:
Get an auth token.
Execute a specific logic app trigger with a synthetic transaction.
Wait until the execution has completed.
Get the logic app run and its action statuses (Succeeded, Failed, or Skipped) and validate them against the expected scenario.
Get the outputs from each action execution and validate them against the expected scenario.
Repeat the above steps for all the cases the logic app might go through.
Hook this all into CI/CD :)
In short: we deployed the Logic App, ran a synthetic transaction, and validated the results.
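A rough sketch of the trigger/wait/validate steps above, assuming the Microsoft.Azure.Management.Logic management SDK and NUnit (exact type and method names vary by SDK version): obtaining the auth token is assumed to yield the ServiceClientCredentials passed in, and all resource names are placeholders.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Management.Logic;
using Microsoft.Rest;
using NUnit.Framework;

public class LogicAppRunValidator
{
    // Placeholders: supply your own resource group, workflow and trigger names.
    private const string ResourceGroup = "rg-e2e-tests";
    private const string WorkflowName = "wf-process-order";
    private const string TriggerName = "manual";

    public async Task TriggerAndValidateAsync(ServiceClientCredentials credentials, string subscriptionId)
    {
        using var client = new LogicManagementClient(credentials) { SubscriptionId = subscriptionId };

        // Execute the trigger with a synthetic transaction.
        await client.WorkflowTriggers.RunAsync(ResourceGroup, WorkflowName, TriggerName);

        // Wait until the latest run leaves the Running/Waiting state.
        var run = (await client.WorkflowRuns.ListAsync(ResourceGroup, WorkflowName)).First();
        while (run.Status.ToString() == "Running" || run.Status.ToString() == "Waiting")
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
            run = await client.WorkflowRuns.GetAsync(ResourceGroup, WorkflowName, run.Name);
        }

        // Validate the overall run status against the expected scenario.
        Assert.AreEqual("Succeeded", run.Status.ToString());

        // Validate each action's status (and, if needed, its outputs).
        var actions = await client.WorkflowRunActions.ListAsync(ResourceGroup, WorkflowName, run.Name);
        foreach (var action in actions)
        {
            Assert.AreNotEqual("Failed", action.Status.ToString());
        }
    }
}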
Hope this helps.
I was tasked with creating a health check for our production site. It is a .NET MVC web application with a lot of dependencies and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that exercise sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result, but I need to be able to run the tests on a loop and notify IT when any of them fails.
We have a TFS build server, but I'm not sure it is the right tool for the job. I don't want to continuously build the tests, just run them. It also looks like I can't define a build schedule more frequent than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will be useful and transferable.
When you say "I don't want to continuously build the tests, just run them": any CI you use needs to build them in order to run them. "Building" here essentially means "compiling"; for your CI to actually run the tests, it has to compile them first.
As far as running them goes, if the TFS build system is of any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron schedule. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday"
Since you are making smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example:
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
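One of these, written with Selenium WebDriver in C# and NUnit, might look something like the sketch below (the URL and element ids are placeholders):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class SmokeTests
{
    private IWebDriver driver;

    [SetUp]
    public void SetUp() => driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => driver.Quit();

    [Test]
    public void testLogin()
    {
        // Placeholder URL and element ids -- substitute your production sign-in page.
        driver.Navigate().GoToUrl("https://example.com/login");
        driver.FindElement(By.Id("username")).SendKeys("healthcheck-user");
        driver.FindElement(By.Id("password")).SendKeys("********");
        driver.FindElement(By.Id("login-button")).Click();

        // One cheap assertion keeps the smoke test short; FindElement throws (and fails the test) if the element is missing.
        Assert.IsTrue(driver.FindElement(By.Id("dashboard")).Displayed);
    }
}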
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running or not, and they can be automated to run at intervals to notify you that something is wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. Because you are making external calls just as a client would, you cannot see the internals of the application: is the database down, is it a network issue, disk space, a remote endpoint that is not functioning correctly?
Some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing that can tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following:
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and is not accessible externally.
It returns a JSON response containing:
the overall state
the host that returned the result (if behind a load balancer)
the application version
a set of sub-system states (these indicate which component is not performing)
The service should be resilient; any exception thrown while checking should still end with a health check result being returned.
Some sort of aggregate that can present a number of health check endpoints in one view.
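As a rough illustration of that shape (not the HealthNet library linked below, just a minimal hand-rolled ASP.NET Web API sketch with invented names, assuming dependency injection is configured):

using System;
using System.Linq;
using System.Web.Http;

// Hypothetical check abstraction: each sub-system (database, remote API, disk) implements this.
public interface ISystemChecker
{
    string SystemName { get; }
    bool IsHealthy();
}

public class HealthCheckController : ApiController
{
    private readonly ISystemChecker[] checkers;

    public HealthCheckController(ISystemChecker[] checkers) => this.checkers = checkers;

    [HttpGet]
    public IHttpActionResult Get()
    {
        // Resilient: a failing check becomes an "Unhealthy" entry, never an unhandled exception.
        var subSystems = checkers.Select(c =>
        {
            try { return new { system = c.SystemName, state = c.IsHealthy() ? "Healthy" : "Unhealthy" }; }
            catch (Exception) { return new { system = c.SystemName, state = "Unhealthy" }; }
        }).ToList();

        return Ok(new
        {
            state = subSystems.All(s => s.state == "Healthy") ? "Healthy" : "Unhealthy",
            host = Environment.MachineName,
            version = GetType().Assembly.GetName().Version.ToString(),
            subSystems
        });
    }
}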
Here is one I made earlier
After doing this a number of times, I have started a library that takes care of the main wiring-up of the health check and exposes it as a service. Feel free to use it as an example or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy
How can I test error messages on MVVMCross?
I'm using the ReportError method found in this sample, in a Portable Class Library project:
http://slodge.blogspot.com.br/2012/05/one-pattern-for-error-handling-in.html
What are you actually looking to test?
If you are looking to unit test the 'hub' that listens for and republishes errors, then you should be able to do that easily: just create an NUnit test and one or more mock subscribers, then test sending one or more messages.
If you are looking to test some of your services or view models to check that they correctly report errors, then create unit tests for them with mocks that simulate the error conditions, and in the test assert that the errors are correctly reported; a minimal sketch follows.
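A minimal sketch of that kind of test, using NUnit and Moq with invented interface and class names (not the exact types from the blog post):

using System;
using Moq;
using NUnit.Framework;

// Hypothetical stand-ins for the error hub / service used by the view model.
public interface IErrorReporter { void ReportError(string message); }
public interface IBalanceService { decimal GetBalance(); }

public class BalanceViewModel
{
    private readonly IBalanceService service;
    private readonly IErrorReporter errorReporter;

    public BalanceViewModel(IBalanceService service, IErrorReporter errorReporter)
    {
        this.service = service;
        this.errorReporter = errorReporter;
    }

    public decimal? Balance { get; private set; }

    public void Refresh()
    {
        try { Balance = service.GetBalance(); }
        catch (Exception ex) { errorReporter.ReportError(ex.Message); }
    }
}

[TestFixture]
public class BalanceViewModelTests
{
    [Test]
    public void Refresh_reports_error_when_service_throws()
    {
        // Mock simulates the error condition.
        var service = new Mock<IBalanceService>();
        service.Setup(s => s.GetBalance()).Throws(new Exception("network down"));
        var reporter = new Mock<IErrorReporter>();

        new BalanceViewModel(service.Object, reporter.Object).Refresh();

        // Assert that the error was correctly reported.
        reporter.Verify(r => r.ReportError("network down"), Times.Once());
    }
}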
If you are looking to manually trigger the UI as part of development, then either find some way to reliably trigger an error (e.g. providing an incorrect password or switching the phone to flight mode) or consider adding a debug UI to your app with buttons that trigger mock errors.
If you are looking for integration or acceptance level testing, then either write manual test steps to trigger errors, or consider automated solutions like Frank, Calabash, Telerik UI testing, or the excellent Windows Phone Test Framework (http://m.youtube.com/watch?v=2JkJfHZDd2g), although I might be biased on the last one.
As an update on the error reporting mechanism itself, I have now moved in some of my apps to the more general messenger plugin for this type of reporting. But other apps still use the old mechanism.