Should Cypress automation tests be written for the local environment or for staging?

I am a beginner in Cypress automation testing and have one point of confusion. When we add our automation scripts to a GitHub workflow that triggers on every pushed commit, which environment should we write the tests against: the local environment at localhost, or the staging site of the project?
Could anyone please clear up my confusion about how these automation tests should be written, and how we can add Cypress automation tests to GitHub CI/CD?
Thanks.

Ok, let me give this a shot. Of course, I do not know the exact setup of the project that you are working on, but let me give you some pointers, so you can decide for yourself what works best in your setting.
My answer is based on the assumption that you are building an automated regression test set in Cypress with the primary goal of preventing production incidents. In addition, it aims to save you tons of 'manual testing' for each release to production, because you want to make sure everything is still working properly.
First of all, you want your automated tests to run on a stable environment(*). If the environment is not stable, many tests will fail for many reasons, and those are usually not the right ones. You'll spend more time figuring out why your tests are failing than actually catching issues with them. This makes a local dev environment poorly suited for the task, so I would not pick a localhost environment for this - especially not when you have multiple developers working in your team, each with their own localhost.
A test environment is already way more stable. You want your tests to fail only when you have an actual issue on your hands. As a rule of thumb, the 'higher' you go, the more stable.
Second, you want to catch the issues early in the game, so I would definitely make sure that the tests can run on the environment where all code comes together for the first time (in other words, the environment that has the master branch or whatever your team calls that branch). This is usually the test environment. In my projects, I initially build the set for this environment, and ideally, I run it daily. Your tests won't always pass here (bonus if they do), and that is OK... as long as you understand why they don't ;-)
Some things to keep in mind are integrations or connecting systems, and whether you need those for your tests to pass. In general, you don't want to be (too) dependent on (third-party) integrations for your test cases to go green. Sometimes, when those integrations are vital to the process that you need to test, it is inevitable. However, integrations are often not (fully) set up on test/lower environments. There are workarounds for this, like stubs, but let's not get into that now - that's a whole different topic.
Third, you want your tests to run on a production-like environment on the code exactly in the state that it goes to production. This is usually the acceptance, staging or pre-production environment, i.e. the last one before production. These environments often have all integrations in place and are often very similar to production. If you find an issue here, it's almost guaranteed that it is also an issue in production. This is IMO where you want to integrate your tests into your CI/CD pipeline. Ideally, your full automated set is in the pipeline, but in practice, you should only add the tests that are stable and robust, otherwise your production deployments will be blocked very often.
So, long story short, my advice: write your tests for your test environment, where you do your 'manual testing' (I hate that term BTW, all testing is manual... as if there is such a thing as 'manual coding') and run it early and often. Then put the stable ones in the pipeline of the production deployment. If you only have local, staging and production, it should be staging.
If your developers want to run the set on their local environments, they can still do that - you can share the tests with them or even better, they can take it from the repository and run it locally - but I don't think you should make it part of the deployment process always and everywhere. It will slow down your process massively.
You can work with environment variables to easily switch for the environment where you want to run your tests: https://docs.cypress.io/guides/guides/environment-variables#Setting
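For example, here is a minimal sketch of such a switch, assuming Cypress 10+ with a cypress.config.js; the staging URL and the local port are placeholders, not values from the original question:

```js
// cypress.config.js - minimal sketch; URLs are placeholders.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    // Default target: the staging site.
    baseUrl: 'https://staging.example.com',
  },
});
```

A developer can then point the same tests at their local environment without touching the code, for example with `npx cypress run --config baseUrl=http://localhost:3000` or by exporting `CYPRESS_BASE_URL` before the run; inside the specs, `cy.visit('/')` resolves against whichever baseUrl is active.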
I hope this helps. I'm looking forward to reading what others have to say about this, too.
Happy Testing!
Jackie
PS. I see that you also asked about how to add Cypress to your CI/CD pipeline. I think that should be a completely separate topic. It is also way too high level to answer. Maybe it's best to start here: https://docs.cypress.io/guides/continuous-integration/introduction#What-you-ll-learn
(*) I'm talking about a stable environment here, but this also includes stable code and even a stable application. If your application and code are at a very early stage, really ask yourself whether you already want to start automating your functional UI tests in Cypress - chances are that many things will change (many times) and you'll spend hours updating your tests. Maybe it is better to only think about the scenarios that you want to automate at that stage of the project.

Related

How can multiple automation testers work in the same Selenium project?

We are three testers preparing an automation project with Selenium and Java. What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite?
So there are a few things we have to use in order to allow multiple engineers to work on the same framework.
Step 1) Creating the framework. If you already know how to do this and have working tests, you can skip this stage; if not, please follow the tutorial I link below.
http://toolsqa.com/selenium-webdriver/
Step 2) Creating a repo. My preference is GitHub; you can use any Git host, but I will post the guide for setting one up with GitHub, and the process is similar for all of them. This will allow you to merge code properly without causing conflicts.
https://help.github.com/articles/create-a-repo/
Step 3) A source control program to push, pull, and fetch from your GitHub repo. You can do this from the command prompt, but I find cloning the repo into a program like SourceTree really easy, so I've posted that guide below.
https://confluence.atlassian.com/get-started-with-sourcetree
If you follow these 3 guides, you will be able to have your automation test scripts on GitHub by the end of the day.
If you have any more questions please do not hesitate to ask.
All the best, Jack
The easiest and most logical way to do this would be to create one branch in your VCS (Git or SVN, etc.) and have each person set up the dev environment in the same way. Work exactly like developers: pull code before you check in/commit (this will ensure that one small error does not break your framework) and swear to resolve conflicts during merges (to ensure you don't step on each other's toes).
Also, before you kick off, agree on a coding standard (including package naming, design-pattern usage, and file and method naming), and if this is in sync with the dev coding standards in your company, even better.
There will be a few hiccups along the way, but experience is the best way to create a process for your development and check-in practices.
Good luck with your new project and happy coding!
You have asked two questions; in my opinion, the answers are as follows.
How can multiple automation testers work in the same Selenium project? - You can use any version control system; GitHub is the best option and gives you a lot of facilities. All three of you can work on the same project at the same time. Alternatively, you could go for a centralized version control system like Subversion (e.g. with TortoiseSVN), which is not used much nowadays. I would suggest GitHub for this.
What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite? - It depends on various factors, such as the application and the kind of framework you want to use. There are many frameworks widely used for automation testing, such as modular, data-driven, keyword-driven, and BDD frameworks (e.g. Cucumber), along with runners like TestNG; or, if you have the bandwidth and time, you can design your own custom framework to suit your needs.
I hope this sheds some light on your queries.
Thanks

Frontend testing: what and how to test, and what tool to use?

I have been writing tests for my Ruby code for a while, but as a frontend developer I am obviously interested in bringing this into the code I write for the frontend. There are quite a few different options which I have been playing around with:
CasperJS
Capybara & Rspec
Jasmine
Cucumber or just Rspec
What are people using for testing? And beyond that, what do people test? Just JavaScript? Links? Forms? Hardcoded content?
Any thoughts would be greatly appreciated.
I had the same questions a few months ago and, after talking to many developers and doing a lot of research, this is what I found out. You should unit test your JavaScript, write a small set of UI integration tests and avoid record and playback testing tools. Let me explain that in more detail.
First, consider the test pyramid. This is an interesting analogy created by Mike Cohn that will help you decide which kinds of testing you should be doing. At the bottom of the pyramid are the unit tests, which are solid and provide fast feedback. These should be the foundation of your test strategy and thus occupy the largest part of the pyramid. At the top, you have the UI tests. Those are the tests that interact with your UI directly, as Selenium does, for example. Although these tests might help you find bugs, they are more expensive and provide very slow feedback. Also, depending on the tool you use, they become very brittle, and you will end up spending more time maintaining these tests than writing actual production code. The service layer, in the middle, includes integration tests that do not require a UI. In Rails, for instance, you would test your REST interface directly instead of interacting with DOM elements.
Now, back to your question. I found out that I could greatly reduce the number of bugs in my project, which is a web application written in Spring Roo (Java) with tons of JavaScript, simply by writing enough unit tests for the JS. In my application, there is a lot of logic written in JS, and that is the kind of thing that I am testing here. I am not concerned about how the page will actually look or whether the animations play as they should. I test whether the modules I write in JS execute the expected logic, whether element classes are correctly assigned, and whether error conditions are handled well. For these tests, I've been using Jasmine. This is a great tool. It is very easy to learn and has nice mocking capabilities, which are called spies. Jasmine-jQuery adds more great functionality if you are using jQuery. In particular, it allows you to specify fixtures, which are snippets of HTML code, so you don't have to manually mock the DOM. I have integrated this tool with Maven, and these tests are part of my CI strategy.
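To make the spy idea concrete, here is a minimal Jasmine sketch; cartTotal and priceService are hypothetical stand-ins for the kind of logic and collaborator you would normally import from your own modules:

```js
// Hypothetical module under test: sums item prices via a collaborator.
function cartTotal(items, priceService) {
  return items.reduce((sum, item) => sum + priceService.priceOf(item), 0);
}

describe('cartTotal', () => {
  it('asks the price service once per item and sums the results', () => {
    const priceService = { priceOf: () => 0 };
    // A Jasmine spy replaces the real call and records how it was used.
    spyOn(priceService, 'priceOf').and.returnValue(5);

    expect(cartTotal(['apple', 'pear'], priceService)).toBe(10);
    expect(priceService.priceOf).toHaveBeenCalledTimes(2);
  });
});
```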
You have to be careful with UI tests, especially if you rely on record/playback tools like Selenium. Since the UI changes often, these tests keep breaking, and you will spend a lot of time finding out whether the tests really failed or are just outdated. Also, they don't add as much value as unit tests. Since they need an integrated environment to run, you will most likely run them only after you have finished developing, when the cost of fixing things is higher.
For smoke/regression tests, however, UI tests are very useful. If you need to automate these, then you should watch out for some dangers (a sketch of the first point follows below):
Write your tests, don't record them. Recorded tests usually rely on automatically generated XPaths that break with every little change you make to your code. I believe Cucumber is a good framework for writing these tests, and you can use it along with WebDriver to automate the browser interaction.
Code with tests in mind. In UI tests, you will have to make elements easier to find so you don't have to rely on complex XPaths. You will frequently find yourself adding class and id attributes where you usually wouldn't.
Don't write tests for every small corner case. These tests are expensive to write and take too long to run. You should focus on the cases that exercise most of your functionality. If you write too many tests at this level, you will probably test the same functionality that you have already covered in your unit tests (supposing you have written them).
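As an illustration of the "stable locators instead of recorded XPaths" point, here is a small sketch using the selenium-webdriver package for JavaScript (the original answer's project is in Java; the URL and element ids here are hypothetical):

```js
// Sketch: locate elements by dedicated ids instead of brittle, recorder-generated XPaths.
const { Builder, By } = require('selenium-webdriver');

async function submitSignUpForm() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://staging.example.com/signup');
    // An id like 'signup-email' survives layout changes; a recorded XPath
    // like //div[3]/form/div[2]/input breaks as soon as the markup shifts.
    await driver.findElement(By.id('signup-email')).sendKeys('test@example.com');
    await driver.findElement(By.id('signup-submit')).click();
  } finally {
    await driver.quit();
  }
}

submitSignUpForm().catch(console.error);
```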
In my current project I am using Spock and Geb to write the UI tests. I find these tools amazing. They are written in Groovy, which suits my Java project better.
There are lots of options and tools for that. But the choice depends on whether you have a web UI or a desktop app.
Judging from the tools you've mentioned, it's a web UI. I would suggest Selenium (aka WebDriver): http://seleniumhq.org/docs/
It supports a variety of languages (Ruby is on the list). It can be run against a variety of browsers, and it's quite easy to use, with lots of tutorials and tips available.
Oh, and it's free, of course :)
I thought, since this post gets a lot of likes, I would post an answer to my own question, as I now write lots of tests and how you test the frontend has moved on a lot.
In terms of frontend testing, I spent a lot of time using Karma with Jasmine, although Karma works nicely with other test frameworks like Mocha and QUnit. These are great, and Karma lets you interface directly with browsers to run your tests. The downside is that as your test suite gets large, it can become quite slow.
More recently I have moved to Jest, which is much faster, and if you're writing a React app, using Enzyme with snapshot testing gives you really good coverage. Talking of coverage, Jest has Istanbul coverage built in and already set up, and mocking is really simple to use. The downside is that it doesn't test in a browser; it uses something called jsdom, which is fast but has a few quirks. Personally I don't find this a big deal, particularly when I compile my code through webpack/Babel, which means cross-browser bugs are fairly few and far between, so it generally isn't an issue if you manually test anyway (and IMO you should).
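For a flavour of what that looks like, here is a minimal Jest sketch; buildGreeting is a hypothetical stand-in for whatever produces your UI output (with React you would typically snapshot a rendered component instead):

```js
// Hypothetical function standing in for code that builds UI output.
function buildGreeting(name) {
  return { tag: 'h1', className: 'greeting', text: `Hello, ${name}!` };
}

test('greeting structure stays stable', () => {
  // The first run writes a snapshot file; later runs fail if the output drifts.
  expect(buildGreeting('Ada')).toMatchSnapshot();
});

test('mocking is built in', () => {
  const fetchUser = jest.fn().mockReturnValue({ name: 'Ada' });
  expect(buildGreeting(fetchUser().name).text).toBe('Hello, Ada!');
  expect(fetchUser).toHaveBeenCalledTimes(1);
});
```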
In terms of working within the Rails stack, this is much easier now that the webpacker gem is available, and using npm and Node is generally much more accepted. I would recommend using nvm to manage your Node versions.
While this isn't strictly testing, I would also recommend using linting, as it also picks up a lot of issues in your code. For JS I use ESLint with Prettier, and for SCSS/CSS I use stylelint.
In terms of what to test, I think the test pyramid that Carlos talks about is still relevant; after all, the theory doesn't change, just the tools. I would also add: be practical about tests. I would always test, but the level and coverage will depend on the project. It is important to manage your time and avoid spending hours or days testing a short-lifecycle project. For larger, longer-term projects, the benefits of a larger test suite are obviously greater.
Anyway I hope that helps people that look at the question.

Software testing with version control

So, as a developer, you probably write a small amount of code and then test it to see if it works before you move on to something else. This is because you don't want to write thousands of lines of code and then find that they don't work. Stating the obvious here. So myself and a few others (soon) are working on a PHP application where I want to implement some form of version control, most likely Subversion since we all know how to use it, somewhat. My question is how to fit the write-then-test process described above into that setup.
My idea was to set each developer up with their own workstation including a web server, PHP/MySQL, etc., so they can check out the repo and then test on their own computer as they are writing. I'm really looking for some direction here. Currently we aren't using version control, as there are only two developers; we simply use a shared directory that's located on the web server. When we make changes, we can view them immediately on the web server. Any input on this? What's the best way to handle multiple developers during the development of an application?
There are a number of different ways to approach this:
1) Each developer has a whole web server stack on their machine, deploys to it, and tests there, then checks in working code.
2) There's a separate test/integration machine. Developers take turns deploying to that machine, do their testing, then check in working code.
3) You use branches in Subversion. Development happens on a branch, and it's OK to check in broken code on a branch. There may be a branch for each developer, or a branch for each feature, or whatever. The developer checks in code onto the branch, checks it out on the separate test machine, tests, fixes, then checks the working code onto the trunk.
Which one is right depends on how big your team is and how complex your server setup is. Choose one that makes sense for your team.
You'd need to start thinking about a build server, using a piece of software like CruiseControl, which monitors for source code changes and is then able to build, run tests, and deploy your code (in a manner as close to live as possible!).
I'd highly recommend integrating a build server as soon as possible, otherwise you'll find out down the line that automating something that somebody has been doing manually for the last 5 years is somewhat difficult :)
You might also find that each dev ends up with their own methods of deployment and a custom environment; it'd be far better to centralise this in one place and then have other devs use the scripts from that deployment if they want to run the same process locally.
Configuration management is something you want to get right to begin with!
One important thing with CI: You only want to push working code to the central repository. This requires a private repository for each developer, but has the advantage that you never break trunk.
Git and Mercurial are the most obvious tools, and can work with svn as a central repository.
To prevent merge conflicts and avoid pushing broken code to the central repository, there's one trick: always pull/merge from central first, and do so frequently, prior to pushing:
http://martinfowler.com/bliki/FeatureBranch.html
And have a look at http://hginit.com for examples of workflows with multiple developers.

Unit testing and iPhone development

I'm currently using OCUnit, which ships with Xcode 3.2.4, for unit testing my application. My workflow is often to set some breakpoints in a failing unit test in order to quickly inspect the state. I'm using Apple's OCUnit setup:
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/iphone_development/135-Unit_Testing_Applications/unit_testing_applications.html
but the setup above gives me some headaches. Apple distinguishes between application tests and logic tests. As I see it:
You cannot debug logic tests. It's as if they're invisibly run when you build your project.
You can debug application tests, but you have to run these on the device and not the simulator (what is the reason for this?)
This means that everything moves kind of slowly with my current workflow. Any hints on getting app tests to run on the simulator? Or any pin pointers to another test-framework?
Would, e.g., google-toolbox-for-mac work better in general or for my specific needs?
Also, general comments on using breakpoints in the unit tests are welcome! :-)
I have used the Google Toolbox testing rig in the past and it worked fine, it ran both on the Simulator and the device and I could debug my tests if I wanted to. Recently I got fed up with bundling so much code with each of my projects and tried the Apple way.
I also find the logic/app tests split weird, especially as I can’t find any way to debug the logic tests now. (And if you’re using some parts of AVFoundation that won’t build for Simulator, you are apparently out of luck with logic tests completely, which seems strange.) One of the pros is that I can run the logic tests quickly during build.
I guess this does not help you that much – the point is that you can debug the tests under GTM. And you might also want to check out this related question.
I know this isn't really a good answer, nor is it completely helpful to your cause. But I've been using NSLog to run my unit tests (if nothing outputs to the console, then success). When I comment out my test methods, those tests don't run. I found this much more predictable and reliable than OCUnit. I'd much rather use a real, true unit testing framework, but it was too frustrating to deal with the often strange errors that could occur with OCUnit, as well as the other shortfalls and missing features you describe above.

Is Selenium a good piece of testing software to use? [closed]

On my last project, I created some test cases with Selenium, then automated them so they would run on every build launched from Hudson. It worked fantastically and was consistent for about a month.
Then the tests started failing. Most of the time, timing issues caused the failures. After about two weeks of effort put in over the course of the next two months, it was decided to drop the Selenium tests. They should have been passing, but the responses and timing of the web application varied to the extent that tests would fail when they should have passed.
Did you have a similar experience? Is Selenium still a good tool to use for Web Application testing?
Selenium is a great tool for web testing, although it's important to make sure your tests are reliable. Timing issues are common, so I would suggest the following:
Make sure you set a sensible timeout value. I find between 1-2 minutes works well.
Don't have pauses in your tests - they are the main cause of timing issues. Instead, use the waitFor* commands; waitForCondition is very useful (see the sketch after these tips).
Identify external calls that can cause timeouts and block that traffic from the machine running tests. You can do this on a firewall level or simply redirect the domain to localhost in your hosts file.
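The waitFor*/waitForCondition commands belong to the older Selenium RC/IDE API; as a rough illustration of the same "wait on a condition, don't pause" idea with the current selenium-webdriver package for JavaScript (the URL, element id, and timeout values are illustrative, not from the original post):

```js
// Sketch: wait for an explicit condition instead of a fixed pause.
const { Builder, By, until } = require('selenium-webdriver');

async function waitForResults() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://staging.example.com/search?q=widgets');
    // Fails the test if the element does not appear within the timeout,
    // but continues immediately once it does - no wasted sleep time.
    const results = await driver.wait(
      until.elementLocated(By.id('search-results')),
      30000
    );
    await driver.wait(until.elementIsVisible(results), 30000);
  } finally {
    await driver.quit();
  }
}

waitForResults().catch(console.error);
```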
Update:
You should also consider using Selenium Grid. It won't directly help with your timeouts, but it can provide a quicker feedback loop for your failures. If you're using TestNG to run your tests, you can get it to automatically rerun failures - this gives the tests failing due to timeouts a second chance.
At my previous job we investigated using it as a test tool but found it too fussy to bother integrating into our process. Pretty much the same experience as you.
This was two or three years ago in version 0.8 or so though, I would have expected it to get better since then.
I've had a similar experience. We created a project that would bootstrap a selenium proxy and run an automated suite of tests, but unfortunately it clashed with our build server in a huge way. There were too many browser inconsistencies and third party dependencies for us to reliably add it to our build. It was also too slow for us, and added too much time to our builds.
Most of the errors we would run into would be timeouts.
We ended up keeping the project and use it for integration tests on major releases. The bootstrapping code that we used has proved invaluable in other areas as well.
It's probably best run after a nightly build, when there's time for it. It, or WatiN, could be integrated with your build scripts.
Very much depends on your team, but if you've a small testing team this can be priceless for picking up some very obvious runtime issues.
I'd keep the scope modest and really use them for some sanity testing that at least each page can load.
I did have a similar experience with Selenium. We had a legacy system which we built a sort of testing framework around so that we could test the changes we were making. This worked great at the start but eventually some of the earlier tests began to fail (or take too long to run) so we started to turn off more and more of the tests.
To fix some of the issues we stopped selenium from opening and closing a browser for each test i.e. the tests were broken up into blocks and for each block of tests the browser would only be opened once. This reduced the time taken to run the tests from several hours to 30 minutes.
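The post doesn't say which test runner was used, but as a rough sketch of the "one browser per block of tests" idea, here is how it can look with Mocha hooks and the selenium-webdriver JavaScript package (URLs and element ids are placeholders):

```js
// Sketch: share one browser across a block of tests instead of opening one per test.
const { Builder, By } = require('selenium-webdriver');

describe('checkout flow', function () {
  this.timeout(60000); // UI tests need generous timeouts
  let driver;

  before(async () => {
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(async () => {
    await driver.quit();
  });

  it('loads the basket page', async () => {
    await driver.get('https://staging.example.com/basket');
    await driver.findElement(By.id('basket-items'));
  });

  it('loads the payment page', async () => {
    await driver.get('https://staging.example.com/payment');
    await driver.findElement(By.id('payment-form'));
  });
});
```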
Despite the issues I think Selenium is a great tool for testing web-based applications. Many of the problems we experienced centered on the fact that the system we were testing was a legacy system. If you like test-driven development then Selenium fits in very well with that development practice.
EDIT:
Another good thing about Selenium is the ability to track which developer introduced an error, as well as where the error is (the source file). This makes life so much easier when it comes to fixing the error.
We initially tried to use Selenium on our build machine, but the tests were very brittle, and we found we spent a lot of time trying to keep old tests running when changes occurred to unrelated functionality accessed through the same set of pages. We were automating the tests through NUnit.
I would use selenium more as a customer acceptance and integration testing tool. I'd agree with using it for a nightly build on functionality that is stable.
At a first glance, Selenium looks great. Unfortunately, as sometimes happens with open source projects, they rush to implement new features instead of making it more stable.