Currently, here's our workflow: I push code, CI/CD builds it and deploys it to an environment, and testers test it there. The problem is that each build takes 20 minutes, and we can go back and forth several times if the developer makes a mistake. How can we avoid that back and forth? I'm thinking that if testers could check my local build, without pushing or running CI/CD, it would be faster. But I have no clue how to do that without interrupting local development.
Apologies if this is open-ended.
Currently my team and I are working on our end-to-end (E2E) testing strategy, and we're unsure whether we should execute our E2E tests against our staging site or our production site. We've gathered that there are pros and cons to both.
Pros of staging tests:
Won't corrupt analytics data on production.
Can detect failures before they hit production.
Pros of production tests:
Will exercise the actual components of the system, including the database and other configurations, and may catch issues with prod configs.
I'm sometimes not sure whether we're conflating E2E testing with monitoring services (if such a thing exists). Does anyone have an opinion on the matter?
In addition, when should E2E tests be run? Since every part of the system is being tested, there doesn't seem to be an owner of the test suite, which makes it hard to determine when the E2E suite should run. We were hoping to run E2E in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the backend changes? Or would you rather run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is better done on a dedicated test server, periodically, and that new code should be deployed only after being tested successfully several sessions in a row.
Local test runs are for test automation development and debugging.
A test server is for scheduled runs, because no matter how good you are at writing tests, at some point they will take many hours in a row to run, and you need reliable statistics on them over time, using fake data that won't break the production server.
I disagree with @MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness; you can always reduce the run time through parallelization.
Running in production: I have seen many situations where a test run leaves the official site in a funny state and the company's reputation suffers. Other dangers are:
Breaking your database
Making purchases from non-existent users and losing money
Creating unnecessary strain on the official site's API, which degrades the client experience during the run or even brings the server down completely.
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, curated to cover real-world scenarios. If your prod configs and setup differ from your testing environment, do the hard work to ensure your testing environment's configuration matches prod as closely as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run will depend on a few things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests take too long. Strive to create quick tests, to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.
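For instance, here's a minimal sketch of that pattern; the /api/login and /api/reports endpoints, the token handling, and the reports.json fixture are made up for illustration, so adapt them to your app:

TypeScript
// Log in via the API and stub a slow service, instead of driving the UI for both.
describe('dashboard', () => {
  beforeEach(() => {
    // cy.request() skips the login form entirely (endpoint and token shape assumed).
    cy.request('POST', '/api/login', { username: 'tester', password: 'secret' })
      .its('body.token')
      .then((token: string) => window.localStorage.setItem('token', token));
    // cy.intercept() stubs the response so the test never waits on the real service.
    cy.intercept('GET', '/api/reports', { fixture: 'reports.json' }).as('reports');
  });

  it('renders the stubbed reports', () => {
    cy.visit('/dashboard');
    cy.wait('@reports');
    cy.contains('Quarterly report'); // value assumed to exist in the fixture
  });
});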
Unfortunately, I am not really familiar with these two terms, and I have a feeling I need to learn more about them as I approach an app release.
So, if I am running the app in development mode, am I not using exactly the same code as production? What does it actually change, and what's the purpose of it?
If it's in the sense of a server, that's understandable: I don't want to mess with the server being used by the users, so I guess I need to connect to a second, development server. However, I'm interested to know what it changes in my code. I'm still going to use the same locally stored project, right?
Sorry for being so naive!
The development build is used, as the name suggests, for development. You have source maps, debugging, and often hot reloading in those builds.
React Native includes some very useful tools for development: remote JavaScript debugging in Chrome, live reload, hot reloading, and an element inspector similar to the beloved inspector that you use in Chrome.
The production build, on the other hand, runs in production mode, which means this is the code running on your clients' side. The production build runs Uglify and bundles your source files into one or more minified files. There are also no source maps or hot reloading included.
Furthermore, production mode is most useful for two things: testing your app's performance, since development mode slows your app down considerably, and catching bugs that only show up in production.
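As a small illustration, React Native exposes a global __DEV__ flag that is true in development builds and false in production builds; a hypothetical helper could gate debug-only work on it:

TypeScript
// __DEV__ is provided by React Native; debugLog is a made-up helper.
declare const __DEV__: boolean;

function debugLog(message: string): void {
  if (__DEV__) {
    // Runs only in development builds; production builds skip this branch.
    console.log(`[debug] ${message}`);
  }
}

debugLog('component mounted'); // prints in dev mode, silent in production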
Hope this helps.
https://docs.expo.io/versions/latest/workflow/development-mode/
I have to test the project code using Jenkins. In Jenkins, I am doing automated testing through Selenium. My question: I have test scripts on my laptop, and using a Google Cloud virtual machine over SSH I have to set things up and test on every push to Git. Here are my two requirements:
1. On demand: whenever we want all the tests to run, we should just be able to trigger the test run.
2. Whenever we deploy something to staging, run all the test cases.
Regarding (1), you can always trigger a job manually through the Jenkins UI. No special configuration there.
Regarding (2), you can install a plugin that integrates webhook functionality into Jenkins. In my case, I like to use Generic Webhook Trigger for this purpose, as it has the flexibility I need for my setups.
In order to trigger the job on every deploy to staging, and assuming that your deploys are automated, you will need to add a final step to the deploy script that makes an HTTP request to the webhook URL (e.g. JENKINS_URL/generic-webhook-trigger/invoke?token=<your-token>).
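For example, that final step could be a small Node/TypeScript script. This is only a sketch: the JENKINS_URL and JENKINS_TOKEN environment variable names are assumptions, and the global fetch requires Node 18+.

TypeScript
// Last step of a hypothetical deploy script: poke the Jenkins webhook.
const base = process.env.JENKINS_URL;    // e.g. https://jenkins.example.com (assumption)
const token = process.env.JENKINS_TOKEN; // the token configured in the plugin (assumption)

async function triggerTestJob(): Promise<void> {
  const res = await fetch(`${base}/generic-webhook-trigger/invoke?token=${token}`);
  if (!res.ok) {
    throw new Error(`Jenkins webhook failed with HTTP ${res.status}`);
  }
  console.log('Staging deploy finished; test job triggered.');
}

triggerTestJob();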
I don't fully understand your setup with your laptop and the VM on GCloud. In any case, I believe the test code should be available to the machine running the tests, and not stored in a location that might be unavailable when the tests need to run (as your laptop might be).
Probably my use case is specific, but I'm sure I'm not the only one.
I have a fairly big Rails application, full of RSpec/Cucumber tests. Usually it takes 30-40 minutes to run everything from scratch on an Intel i5. Yes, we are using Guard, so it doesn't start from the very beginning every time. But it's annoying anyway, and I want to distribute the load somehow.
I also have another development workstation with an i7, and my idea is to run the Guard loop on it. I need something that automates running the RSpec/Cucumber tests via Guard on the remote machine, while the general behaviour stays the same: I change something, and Guard runs the tests for the changed part on the remote workstation without any additional action on my side. I don't want to push to the repo during development; of course we are using CI, and a local CI would not be very reasonable. And of course we are using parallel_tests, so my question is not about sharing load between CPU cores.
Ideas and suggestions are very welcome.
You could share the files with the faster computer (via SMB, for example), run the tests on the remote computer, and check them via SSH.
You could mount your project's working directory on the remote machine and start Guard there, preferably over SSH so you see the console output. In addition, you could use the GNTP notifier and send notifications from the remote machine to your development machine:
Ruby
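# In the Guardfile on the remote machine; 'development.local' is your dev box, which receives the notifications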
notification :gntp, :host => 'development.local', :password => 'secret'
I'm very new to CI, but I have recently inherited a project where TeamCity has just been implemented, and I'm slowly getting my head around it. One thing we would like to do is run some Selenium tests as part of the build process. I've created the Selenium tests and can run them successfully via nunit-console on my development machine. The build server builds the project and then deploys it (a Web Forms application, as it happens) to a staging server.
Before each Selenium test we set the database to a known state, i.e. we ensure only certain records are in place; that way each test is independent of the others. The problem is that the staging server will be used by real, human testers, so the database continually being reset (records being removed, etc.) would cause them problems. The question is: should I also deploy the application to a virtual directory on the build server, run the Selenium tests against that, and only deploy to the staging server if those tests pass?
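For reference, the reset-then-test pattern I'm describing looks roughly like this. My real suite is NUnit/C#; this is just a rough TypeScript sketch with selenium-webdriver under a Mocha/Jest-style runner, where resetDatabase() and the URL are placeholders:

TypeScript
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

// Placeholder: restore the known record set, e.g. by running a seed script
// or calling a test-only endpoint (assumption).
async function resetDatabase(): Promise<void> { /* ... */ }

describe('order form', () => {
  let driver: WebDriver;

  beforeEach(async () => {
    await resetDatabase(); // every test starts from the same known records
    driver = await new Builder().forBrowser('chrome').build();
  });

  afterEach(async () => {
    await driver.quit();
  });

  it('submits an order', async () => {
    await driver.get('http://build-server/app/orders'); // placeholder URL
    await driver.findElement(By.id('submit-order')).click();
    await driver.wait(until.titleContains('Confirmation'), 5000);
  });
});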
Or have I got this stuff completely wrong? If so, how do you do it in your organisation?
I suggest that you do not mix your automated and manual testing by allowing your testers to access the server that is staged for your automated tests. This can cause false negatives in both your automated and your manual tests. These 'bugs' are non-deterministic and more than likely never reproducible (very bad news). This will cause you a lot of unnecessary 'bug reports' and build failures.
So here is what you can do...
In addition to your current setup, you can create an extra staged server for your manual testers. This is the least you should do. You should probably create several of them, one for each tester.
And here comes the rant...
In my current project we recently found out that our testers (we had ~10 of them) were all reusing one server. They claimed that since our app is going to have multiple concurrent users, it was a good idea that while they were testing the individual functionalities, they were also testing how these functionalities behave while multiple users work on the same server. WRONG!
If multiple users are a concern, there should be test cases for the specific concerns. If functionality #1 can interfere with functionality #2, that should be specifically tested, not just 'tested by luck'.
Before this was explained to our manual testers, we had many false bug reports due to one tester simply stepping on another tester's toes (e.g. tester 1 deleted a record that tester 2 had introduced to the system, etc.). This created a lot of unnecessary bug reports, and these bugs were never reproducible.
Sorry about the rant; I hope this still helps :)