Automation job with Jenkins is fetching tests from cache but not actually executing them. How to actually execute tests? - selenium

We have automation tests that run on a Jenkins slave. Sometimes the job finishes successfully, showing 100% of tests passed, but what is actually happening is that the results are being fetched from cache and the tests are not actually being executed. We don't want this to happen, as it is not reliable to depend on old cached results. Please suggest ways to avoid this, thank you.
I have tried disabling and re-enabling the job and re-executing it, but to no avail.

Quite strange; Jenkins runs a new suite in each execution. You can check the slave machine and then the browser, clearing its cache. Look closely at the build executions and confirm that the build number (#) is actually incrementing. In short (a clean-browser sketch follows this list):
Check the browser.
Clear the cache.
Clean the project on each run.
Check the slave machine.
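As a concrete defensive step for the "check browser / clear cache" points above, you can have every run start the browser with a brand-new throwaway profile, so no cached state from a previous execution can influence the result. A minimal sketch with selenium-webdriver and Chrome (the use of Chrome, the flags, and the helper name are assumptions, not taken from the question):

```typescript
// fresh-browser.ts - hedged sketch: start each Selenium run with a throwaway
// browser profile so cached pages and state from earlier executions cannot leak in.
import { Builder, WebDriver } from 'selenium-webdriver';
import * as chrome from 'selenium-webdriver/chrome';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

export async function freshDriver(): Promise<WebDriver> {
    // A new, empty profile directory for every execution of the job.
    const profileDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selenium-profile-'));
    const options = new chrome.Options().addArguments(
        `--user-data-dir=${profileDir}`,
        '--disk-cache-size=0' // ask Chrome not to persist an HTTP cache between runs
    );
    return new Builder().forBrowser('chrome').setChromeOptions(options).build();
}
```

Note that this only rules out browser-side caching. If the "cached" results come from the build tool skipping the test step because it considers it up to date, the "clean the project on each run" point above (a full clean build or a wiped workspace in the Jenkins job) is the part that matters.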

Related

Force TestCafé to abort the execution of a test if it gets stuck

I am working with TestCafé 1.8.1 and testcafe-browser-provider-electron 0.0.14. Let's say that an issue in the application forces it to get stuck and hang one test execution, and for that reason the rest of the tests in my suite cannot continue executing. Is there a way to force TestCafé to abort the execution of that test after a timeout and continue running the rest of the tests in the suite?
I have faced this issue several times and it's a problem, because I am not able to see the results of the rest of the "good" tests, just because of one test that hung the whole execution.
TestCafe reloads the browser and restarts the latest test if the application gets stuck. Currently, TestCafe cannot drop the test in such a case.
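Under that constraint, one workaround is to drive the tests through TestCafé's programmatic API and enforce an overall deadline on the run yourself, so a hung application at least cannot block the CI job forever. A minimal sketch, where the 10-minute budget, the test directory, and the browser alias are assumptions; it aborts the whole run, not just the stuck test, consistent with the limitation described above:

```typescript
// run-with-deadline.ts - hedged sketch: fail the whole TestCafe run if it
// exceeds a hard deadline instead of letting a stuck test hang the CI job.
import createTestCafe from 'testcafe';

const HARD_DEADLINE_MS = 10 * 60 * 1000; // assumed 10-minute budget for the suite

async function main(): Promise<void> {
    const testcafe = await createTestCafe('localhost');
    const runner = testcafe.createRunner();

    const run = runner
        .src(['tests/'])      // assumed test directory
        .browsers(['chrome']) // or the testcafe-browser-provider-electron target from the question
        .run({ selectorTimeout: 10000, pageLoadTimeout: 30000 });

    let timer: NodeJS.Timeout | undefined;
    const deadline = new Promise<never>((_, reject) => {
        timer = setTimeout(() => reject(new Error('Run exceeded the hard deadline')), HARD_DEADLINE_MS);
    });

    try {
        const failedCount = await Promise.race([run, deadline]);
        process.exitCode = failedCount ? 1 : 0;
    } finally {
        if (timer) clearTimeout(timer);
        await testcafe.close();
    }
}

main().catch(err => {
    console.error(err);
    process.exitCode = 1;
});
```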

E2E test passed the local run but failed in Jenkins (Protractor and Jasmine 2)

We have an e2e test integrated with the Jenkins system. For a few weeks this test ran successfully both locally and from Jenkins (as part of the build pipeline).
At the end of the sprint, I modified the script to reflect the sprint changes and made sure it passed locally. Then I merged the changes into master. Now, e2e runs from Jenkins fail 100% of the time, while when I connect locally to the QA environments there is no problem.
The error I am getting is "Element is not clickable at point (x, y)", which I cannot reproduce locally.
The server doesn't have a real screen, so I cannot go out there and see what's going on. The resolutions match perfectly. I have other people running this test locally and there is no problem.
What could possibly cause these failures and how do I troubleshoot this problem?
Thanks for your help!
It's a question from 1,000 ft and it is pretty difficult to identify where exactly the issue could be, but I have listed some probable causes/debugging tips that could help you:
1. What's your checkout strategy from the source code repository? Check the job workspace; it should have the most recent code, so verify that it is indeed the latest.
Maybe configure the job to always check out a fresh copy instead of 'update'.
2. Add a reporter based on the test framework you are using, especially one that provides screenshots (a sketch follows this list). Refer to my blog for more details.
3. Check the stack trace of your error in the Jenkins console output and verify the exact triggering point.
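For tip 2, here is a minimal sketch of a failure-screenshot reporter for Protractor with Jasmine 2. The file name, the screenshots/ output directory, and registering it in onPrepare are assumptions for illustration, not part of the original answer:

```typescript
// failure-screenshot-reporter.ts - hedged sketch of tip 2: save a screenshot
// whenever a spec fails, so failures on the headless Jenkins slave can be inspected.
import { browser } from 'protractor';
import * as fs from 'fs';

export const failureScreenshotReporter: jasmine.CustomReporter = {
    specDone: (result: jasmine.CustomReporterResult) => {
        if (result.status !== 'failed') {
            return;
        }
        // Jasmine does not await reporter callbacks, so the write completes asynchronously.
        browser.takeScreenshot().then(png => {
            fs.mkdirSync('screenshots', { recursive: true });
            const fileName = `screenshots/${result.fullName.replace(/\W+/g, '_')}.png`;
            fs.writeFileSync(fileName, png, 'base64'); // takeScreenshot() returns a base64-encoded PNG
        });
    }
};

// In the Protractor config, inside onPrepare:
//   jasmine.getEnv().addReporter(failureScreenshotReporter);
```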

With Continuous Integration, why are tests run after committing instead of before?

While I only have a GitHub repository that I'm pushing to (alone), I often forget to run tests, or forget to commit all relevant files, or rely on objects residing on my local machine. These result in build breaks, but they are only detected by Travis-CI after the erroneous commit. I know TeamCity has a pre-commit testing facility (which relies on the IDE in use), but my question is with regard to the current use of continuous integration as opposed to any one implementation. My question is:
Why aren't changes tested on a clean build machine - such as those which Travis-CI uses for post-commit testing - before those changes are committed?
Such a process would mean that there would never be build breaks, meaning that a fresh environment could pull any commit from the repository and be sure of its success; as such, I don't understand why CI isn't implemented using pre-commit testing.
I preface my answer with the detail that I am running GitHub and Jenkins.
Why should a developer have to run all tests locally before committing? Especially in the Git paradigm, that is not a requirement. What if, for instance, it takes 15-30 minutes to run all of the tests? Do you really want your developers, or you personally, sitting around waiting for the tests to run locally before you commit and push your changes?
Our process usually goes like this:
1. Make changes in a local branch.
2. Run any new tests that you have created.
3. Commit the changes to the local branch.
4. Push the local changes to GitHub and create a pull request.
5. Have the build process pick up the changes and run the unit tests.
6. If tests fail, fix them in the local branch and push the fixes.
7. Get the changes code reviewed in the pull request.
8. After approval and all checks have passed, push to master.
9. Rerun all unit tests.
10. Push the artifact to a repository.
11. Push the changes to an environment (e.g. DEV, QA) and run any integration/functional tests that rely on a full environment.
12. If you have a cloud, you can push your changes to a new node and reroute the VIP to the new node(s) only after all environment tests pass.
13. Repeat step 11 until you have pushed through all pre-prod environments.
14. If you are practicing continuous deployment, push your changes all the way to PROD once all testing, checks, etc. pass.
My point is that it is not a good use of a developer's time to run tests locally, impeding their progress, when you can off-load that work onto a Continuous Integration server and be notified of issues that you need to fix later. Also, some tests simply can't be run until you commit them and deploy the artifact to an environment. If an environment is broken because you don't have a cloud and maybe you only have one server, then fix it locally and push the changes quickly to stabilize the environment.
You can run tests locally if you have to, but this should not be the norm.
As to the multiple developer issue, open source projects have been dealing with that for a long time now. They use forks in GitHub to allow contributors the chance to suggest new fixes and functionality, but this is not really that different from a developer on the team creating a local branch, pushing it remotely, and getting team buy-in via code review before pushing. If someone pushes changes that break your changes then you try to fix them yourself first and then ask for their help. You should be following the principle of "merging early and often" as well as merging in updates from master to your branch periodically.
The assumption that if you write code, it compiles, and the tests pass locally, then no build can be broken, is wrong. That is only true if you are the only developer working on that code.
But let's say I change the interface you are using: my code will compile and pass tests as long as I don't pull your updated code that uses my interface, and your code will compile and pass tests as long as you don't pull my update to the interface. And when we both check in our code, the build machine explodes...
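To make that concrete, here is a tiny hypothetical TypeScript illustration (the interface and names are made up for the example). Each working copy compiles and passes its tests on its own, but once both commits meet on the build machine, the combination no longer compiles:

```typescript
// payment.ts - developer A renames a method on the shared interface and
// updates only the code in A's own working copy.
export interface PaymentGateway {
    authorize(amountCents: number): boolean; // previously named `charge`
}

// checkout.ts - developer B's working copy was written against the old interface.
// B's local build still passes because B has not pulled A's change yet; once both
// commits land and the CI server builds them together, this call no longer compiles.
import { PaymentGateway } from './payment';

export function checkout(gateway: PaymentGateway, amountCents: number): boolean {
    return gateway.charge(amountCents); // breaks once A's rename reaches the build machine
}
```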
So CI is a process which basically says: put your changes in as soon as possible and test them on the CI server (the code should, of course, be compiled and tested locally first). If all developers follow those rules, the build will still break sometimes, but we will know about it sooner rather than later.
The CI server is not the same as the version control system. The CI server, too, checks the code out of the repository. And therefore the code has already been committed when it gets tested on the CI server.
More extensive tests may be run periodically, rather than at time of checking in, on whatever is the current version of the code at the time of testing. Think of multi-platform tests or load tests.
Generally, of course, you'll unit test your code on your development machine before checking it in.

Re-running failed and not-run tests in IntelliJ IDEA

Let me describe a simple use-case:
Running all tests in our project may take up to 10 minutes.
Sometimes I see an obvious bug in my code after the first failed test, so I want to stop running all tests, fix the bug and re-run them. Unfortunately, I can either re-run all tests from the beginning, or re-run failed tests only.
Is there a plugin for IDEA which allows me to re-run failed tests AND tests, which weren't yet executed when I pressed "STOP"?
Atlassian has the solution for your problem: Clover. But it is commercial.
This goes against the idea of a test suite. Normally you want to run all your tests specifically so you know you haven't broken anything somewhere unexpected. If you change the code and then run only a subset of the tests, the possibility exists that you broke something and one of the skipped tests would have failed. This is a case of trying to have your cake and eat it too.
If you find a bug in an early test, by all means stop the suite. Fix the bug but then run the suite from the beginning.

Firebug and Selenium: Performance

I'm a big fan of Firebug - I use it all the time for my web development needs. That said, one of the things I noticed with Firebug is that it significantly slows down the page. In particular, if Firebug is on when a (local) Selenium script is running, the script takes 2-3 times as long to execute, and I sometimes even see timeout errors. Their per-site activation model doesn't help here at all - I'm developing and testing that same site.
I'd like to be able to turn Firebug OFF right before my Selenium script starts, and turn it back on when Selenium is done (or, in the worst case, just keep it off - the biggest annoyance is launching Selenium only to find out that some tests failed for no apparent reason).
My favored solution for this is to make a new, separate Firefox profile (run firefox -ProfileManager), and launch your Selenium scripts using that profile instead. It'll be clean of everything except what you put into it. That way, as little as possible from your personal environment will taint your development environment and you'll maintain a clean separation.
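If the tests go through the WebDriver API directly, one way to point them at that dedicated profile is via the Firefox options. A minimal sketch with selenium-webdriver, where the profile path and the URL are assumptions; the profile is the one created beforehand with firefox -ProfileManager:

```typescript
// clean-profile-driver.ts - hedged sketch: launch Firefox with a dedicated,
// Firebug-free profile so the personal profile and its add-ons stay out of the run.
import { Builder, WebDriver } from 'selenium-webdriver';
import * as firefox from 'selenium-webdriver/firefox';

const SELENIUM_PROFILE = '/path/to/selenium-profile'; // assumed: created via firefox -ProfileManager

async function buildDriver(): Promise<WebDriver> {
    const options = new firefox.Options().setProfile(SELENIUM_PROFILE);
    return new Builder()
        .forBrowser('firefox')
        .setFirefoxOptions(options)
        .build();
}

async function main(): Promise<void> {
    const driver = await buildDriver();
    try {
        await driver.get('http://localhost:8080/'); // assumed application under test
        console.log(await driver.getTitle());
    } finally {
        await driver.quit();
    }
}

main().catch(err => {
    console.error(err);
    process.exitCode = 1;
});
```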
I typically don't run tests from the same machine I develop on. If you can set up a separate test machine where you deploy and run the tests, you can keep Firefox, IE, etc. free of plugins/add-ons like Firebug that might get in the way of your tests, and avoid this problem completely.
Running your tests on a separate machine also frees your dev machine so that you can continue working while your tests are running. I'm not sure about your situation specifically, but think about when you have hundreds or thousands of test cases running: you don't want to be sitting there waiting for them to finish. You want to be able to work while it runs, view the report it generates, and investigate if necessary.
You could try the alpha builds of Firebug 1.4. The activation/suspend model has changed in this version to a simpler one: Firebug is activated when you see the panel, otherwise it is in suspended mode. See http://blog.getfirebug.com/?p=124 for more information.