We're working on a Rails 3 project and testing with Capybara/RSpec. The problem is that the staging and production environments differ somewhat: sometimes the tests pass and everything is fine on staging, but things break in production.
An example is when we added a middleware that uses Rack::File to send files. The application sent the 'X-Sendfile' header, which works under Apache, but Nginx expects 'X-Accel-Redirect'.
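For context, the header name is environment-specific by design. In a stock Rails 3 app it is selected per environment with config.action_dispatch.x_sendfile_header; whether that setting reaches a custom Rack::File middleware depends on how the middleware is wired, so treat this as an illustration of the kind of difference involved, not our actual fix:

    # config/environments/production.rb (illustrative; the app name is hypothetical)
    MyApp::Application.configure do
      # For Apache with mod_xsendfile:
      # config.action_dispatch.x_sendfile_header = 'X-Sendfile'

      # For Nginx:
      config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
    end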
I'm looking for the best way to run a battery of tests when we push to production. Has anyone done this? Ideally the tests should not be run on the production server itself.
The tests would basically cover the core features of our product and would be different from the tests we are currently running.
Thanks a lot
What I ended up doing is keeping another set of RSpec tests in a production_test environment that has read-only access to the database. I use the capybara-webkit driver, and each test starts by visiting the complete URL for that test.
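A rough sketch of what one of these specs might look like; the driver settings are real Capybara options, but the host variable, paths, and page content below are placeholders rather than the project's actual code:

    # spec/production_test/smoke_spec.rb (sketch; names are hypothetical)
    require 'capybara/rspec'
    require 'capybara/webkit'

    Capybara.default_driver = :webkit        # capybara-webkit, as mentioned above
    Capybara.run_server     = false          # don't boot a local app; hit the deployed one
    Capybara.app_host       = ENV.fetch('TARGET_HOST', 'https://staging.example.com')

    describe 'Core features', type: :feature do
      it 'renders the home page' do
        visit '/'                            # resolved against app_host, i.e. a complete URL
        expect(page).to have_content('Welcome')
      end

      it 'lists products' do
        visit '/products'
        expect(page).to have_css('.product')
      end
    end

Because the environment only has read-only database access, the specs stick to navigation and assertions and never create or modify records.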
Related
Apologies if this is open-ended.
Currently my team and I are working on our end-to-end (E2E) testing strategy, and we are unsure whether we should be executing our E2E tests against our staging site or our production site. We have gathered that there are pros and cons to both.
Pro Staging Tests
Won't corrupt analytics data on production.
Can detect failure before hitting production.
Pro Production Tests
Will use actual components of the system, including the database and other configuration, and may catch issues with prod configs.
I am sometimes not sure whether we are conflating E2E testing with monitoring services (if such a thing exists). Does anyone have an opinion on the matter?
In addition, when are E2E tests run? Since every part of the system is being exercised, there doesn't seem to be a single owner of the test suite, which makes it hard to determine when the E2E tests should run. We were hoping we could run them in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather just run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is best run on a dedicated test server on a schedule, and new code is deployed only after it has passed several test sessions in a row.
Local test runs are for test automation development and debugging.
The test server is for scheduled runs: no matter how good you are at writing tests, at some point the suite will take many hours to run, and you need reliable statistics over time, using fake data that won't break the production server.
I disagree with #MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness. You can always reduce the run time through parallelization.
Running in production - I have seen many situations where a test leaves the official site in a funny state and the company's reputation takes a hit. Other dangers are:
Breaking your database
Making purchases from non-existent users and losing money
Creating unnecessary strain on the official site's API, which degrades the client experience during the run or can even bring the server down completely.
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, and curate it to cover real-world scenarios. If your prod configs and setup are different from those of your testing environment, do the hard work to ensure your testing environment configuration matches prod as much as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run is going to depend on some things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.
I have to test the project code using Jenkins, where I am running automated tests through Selenium. My question: I have test scripts on my laptop, and using SSH to a Google Cloud virtual machine, I have to set things up so the tests run on every push to Git. Here are my two requirements:
1. On demand: whenever we want a full test run, we should just be able to trigger it.
2. Whenever we deploy something to staging, run all the test cases.
Regarding (1), you are always able to trigger a job manually through the Jenkins UI. No special configurations there.
Regarding (2), you can install a plugin that will integrate webhooks functionality into Jenkins. In my case, I like to use Generic Webhook Trigger for this purpose, as it has the flexibility that I need on my setups.
In order to trigger the job on every deploy to staging, and assuming that your deploys are automated, you will need to add a final step to the deploy script that makes an HTTP request to the webhook URL (e.g. JENKINS_URL/generic-webhook-trigger/invoke?token=<your-token>).
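If it helps, here is a minimal sketch of that final step written in Ruby (any HTTP client, including a plain curl call, works the same way); the environment variable names are placeholders:

    # deploy_notify.rb (sketch; variable names are hypothetical)
    require 'net/http'
    require 'uri'

    jenkins_url = ENV.fetch('JENKINS_URL')     # e.g. https://jenkins.example.com
    token       = ENV.fetch('WEBHOOK_TOKEN')   # the token configured in the Jenkins job

    uri      = URI("#{jenkins_url}/generic-webhook-trigger/invoke?token=#{token}")
    response = Net::HTTP.get_response(uri)

    abort "Failed to trigger Jenkins job: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
    puts 'Jenkins test job triggered'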
I don't fully understand your setup with your machine and the VM on GCloud; in any case, the test code should be available to the machine running the tests, not stored in a location that might be unavailable when the tests need to run (as your laptop might be).
I am aiming to use automated testing to ensure that specific pages of a website are loading, rather than look/feel of UI elements or performance testing.
I have set up a number of Selenium scripts, using Ruby, which are executable locally for each test. My aim is to host them somewhere and add some form of text/email notification if one of the tests fails.
What is the best way to go about this?
Presumably some sort of Linux server setup with Selenium running headless on it could work. Would it be best to run this from some sort of Rails or Sinatra app with scheduling?
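To make the idea concrete, here is a minimal sketch of the kind of script I have in mind (the URLs are placeholders, and the headless Chrome options assume a reasonably recent selenium-webdriver). Run from cron on the Linux box, any output and a non-zero exit can be turned into an email via cron's MAILTO, so a Rails or Sinatra app would only be needed for a dashboard on top:

    # page_check.rb (sketch; URLs and the delivery mechanism are placeholders)
    require 'selenium-webdriver'

    PAGES = %w[
      https://www.example.com/
      https://www.example.com/pricing
    ].freeze

    options = Selenium::WebDriver::Chrome::Options.new
    options.add_argument('--headless')             # no display needed on the server
    driver = Selenium::WebDriver.for(:chrome, options: options)

    failures = []
    PAGES.each do |url|
      begin
        driver.navigate.to(url)
        failures << url if driver.title.to_s.strip.empty?   # crude "did the page load" check
      rescue Selenium::WebDriver::Error::WebDriverError => e
        failures << "#{url} (#{e.class})"
      end
    end
    driver.quit

    unless failures.empty?
      warn "Page check failed for:\n#{failures.join("\n")}"
      exit 1    # cron's MAILTO can turn this into an email/text notification
    end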
I've been working on a WebDriver framework for a while now; I guess it is keyword-driven at this point. We would like there to be a central place for users to store tests, preferably on a wiki, but when the tests are run they should open the browser on the user's local machine.
I originally started out using Fitnesse, which works great for storing the tests. However, once we hosted it on a server, when a user tries to run a test it opens the browser on the server, which the user can't see. Does anyone know a way I could force Fitnesse to open the user's local browser, or display the browser to the user? Or do you know another framework/way to store tests in a central place but run them locally?
I've been looking at passing the local user's IP through a fixture to start up the framework; I was hoping that Fitnesse would already know the IP.
Thanks,
James
You can either find a framework that does what you want, or, at the bare minimum, create a thin wrapper that copies the test DLLs and executable to a machine and uses psexec to run the tests on that remote machine. You could probably write the entire thing in maybe 20 lines of code.
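A rough sketch of what that thin wrapper could look like (written in Ruby here only because it's short; the machine name, share, paths, and any psexec credentials you would normally pass are all hypothetical):

    # run_remote_tests.rb (sketch; all names and paths are hypothetical)
    require 'fileutils'

    remote_machine = 'TESTBOX01'
    remote_share   = "\\\\#{remote_machine}\\tests"   # writable share on the test machine
    remote_path    = 'C:\\tests'                      # the same folder as seen on that machine

    # 1. Copy the freshly built test binaries over.
    FileUtils.cp_r(Dir.glob('bin/Release/*'), remote_share)

    # 2. Run the tests remotely via psexec (Sysinternals PsTools); add -u/-p if credentials are needed.
    ok = system('psexec', "\\\\#{remote_machine}",
                "#{remote_path}\\nunit-console.exe",
                "#{remote_path}\\MyTests.dll")
    abort 'Remote test run failed' unless ok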
I'm very new to C.I. but I have recently inherited a project where Team City has just been implemented and I'm slowly getting my head around it. One thing we would like to do is run some Selenium Tests as part of the build process. I've created the selenium tests and can run them successfully via nunit-console on my development machine. The build server builds the project and then deploys it (A web forms application as it happens) to a staging server.
Before each Selenium test we set the database to a known state, i.e. only certain records in place, so that each test is independent of the others. The problem is that the staging server will be used by real "human" testers, so the database continually being reset (records removed, etc.) would cause them problems. The question is: should I also deploy the application to a virtual directory on the build server, run the Selenium tests against that, and only deploy to the staging server if those tests pass?
Or have I got this stuff completely wrong? If so how do you do it in your organisation?
I suggest that you do not mix your automated and manual testing by allowing your testers to access the server that is staged for your automated tests. This can cause false negatives in both your automated and your manual tests. These 'bugs' are nondeterministic and more than likely never reproducible (very bad news). This will cause you a lot of unnecessary 'bug reports' and build failures.
So here is what you can do...
In addition to your current setup, you can create an extra staged server for your manual testers. This is the least you should do. You should probably create several of them, one for each tester.
And here comes the rant...
In my current project we recently found out that our testers (we had ~10 of them) reused one server. They claimed that, since our app is going to have multiple concurrent users, it was a good idea that while testing the individual functionalities they were also testing how those functionalities behave while multiple users work on the same server. WRONG!
If multiple users are a concern, there should be test cases for the specific concerns. If functionality#1 can interfere with functionality#2, it should be specifically tested and not just be 'tested-by-luck'.
Before this was explained to our manual testers, we had many false bug reports simply because one tester was stepping on another tester's toes (e.g. tester1 deleted a record that tester2 had introduced into the system). This created a lot of unnecessary bug reports, and these bugs were never reproducible.
Sorry about the rant, I hope this still helps :)