Logging in automation testing

Is there any reason to use logging in automated tests? I'm asking because my understanding is that a test must be readable and you shouldn't bloat the code with logging. Logging is also used to understand what is going on in the app, so if a test fails I know why (from the assert message), and if it doesn't fail I don't care what happens inside the test.
Thank you in advance.

You are right about this statement:
test must be readable
since test code is production code, and its quality should be just as high, if not the same. But IMHO
shouldn't use any logging to bloat the code.
is not correct. If you run your automation tests on a remote (physical or virtual) server, you need some way to understand what happened at each step, and where the errors and warnings are.
How do you troubleshoot or reproduce a failed test using only the stack trace and the latest assertion message? Logging helps you avoid a common cause of the Obscure Test smell: the Mystery Guest. You should be able to see the cause and effect between the fixture and the verification logic without too much effort. Let's take a look at the Log4j home page:
Logging equips the developer with detailed context for application failures. On the other hand, testing provides quality assurance and confidence in the application. Logging and testing should not be confused. They are complementary. When logging is wisely used, it can prove to be an essential tool.
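In practice, that can be as small as the sketch below, using Log4j 2 and JUnit 5 (the discount method and the numbers are made up for illustration). When this runs on a remote agent, the log shows which step ran with which data, so a failure gives you more than a bare assertion message:
    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountTest {
        private static final Logger log = LogManager.getLogger(DiscountTest.class);

        // Made-up method standing in for the production code under test.
        static double applyGoldDiscount(double listPrice) {
            return listPrice * 0.9;
        }

        @Test
        void goldCustomersGetTenPercentOff() {
            double listPrice = 200.0;
            log.info("Fixture: list price = {}", listPrice);     // visible in the remote run's log

            double total = applyGoldDiscount(listPrice);
            log.info("Exercise: discounted total = {}", total);  // cause and effect, step by step

            assertEquals(180.0, total, 0.001, "Gold discount was not applied");
        }
    }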
I hope that by now I've managed to convince at least one person that logging is a fundamental part of automation tests.

Should Cypress Automation testing be written for local environment or staging?

I am a beginner in Cypress automation testing and I have one point of confusion. When we add our automation scripts to GitHub workflows so that they trigger when we push a commit, for which environment should we write the tests: the local environment at localhost, or the staging site of the project?
Could anyone please clear up this confusion about how the automation testing should be written, and how we can add Cypress automation tests to GitHub CI/CD?
Thanks.
Ok, let me give this a shot. Of course, I do not know the exact setup of the project that you are working on, but let me give you some pointers, so you can decide for yourself what works best in your setting.
My answer is based on the assumption that you are building an automated regression test set in Cypress, with the primary goal of preventing production incidents. In addition, it aims to save you tons of 'manual testing' for each release to production, because you want to make sure everything is still working properly.
First of all, you want your automated tests to run on a stable environment(*). If the environment is not stable, many tests will fail for many reasons, and those are usually not the right ones. You'll spend more time figuring out why your tests are failing than actually catching issues with them. This makes a local, dev environment not really suited for the task, so I would not pick a localhost environment for this. Especially not when you have multiple developers working in your team, each with their own localhost.
A test environment is already a way more stable environment. You want your tests to only fail when you have an actual issue on your hands. As a rule of thumb, the 'higher' you go, the more stable.
Second, you want to catch the issues early in the game, so I would definitely make sure that the tests can run on the environment where all code comes together for the first time (in other words, the environment that has the master branch or whatever your team calls that branch). This is usually the test environment. In my projects, I initially build the set for this environment, and ideally, I run it daily. Your tests won't always pass here (bonus if they do), and that is OK... as long as you understand why they don't ;-)
Some things to keep in mind are integrations or connecting systems, and whether you need those for your tests to pass. In general, you don't want to be (too) dependent on (third-party) integrations for your test cases to go green. Sometimes, when those integrations are vital to the process that you need to test, it is inevitable. However, integrations are often not (fully) set up on test/lower environments. There are workarounds for this, like stubs, but let's not get into that now - that's a whole different topic.
Third, you want your tests to run on a production-like environment on the code exactly in the state that it goes to production. This is usually the acceptance, staging or pre-production environment, i.e. the last one before production. These environments often have all integrations in place and are often very similar to production. If you find an issue here, it's almost guaranteed that it is also an issue in production. This is IMO where you want to integrate your tests into your CI/CD pipeline. Ideally, your full automated set is in the pipeline, but in practice, you should only add the tests that are stable and robust, otherwise your production deployments will be blocked very often.
So, long story short, my advice: write your tests for your test environment, where you do your 'manual testing' (I hate that term BTW, all testing is manual... as if there is such a thing as 'manual coding') and run it early and often. Then put the stable ones in the pipeline of the production deployment. If you only have local, staging and production, it should be staging.
If your developers want to run the set on their local environments, they can still do that - you can share the tests with them or even better, they can take it from the repository and run it locally - but I don't think you should make it part of the deployment process always and everywhere. It will slow down your process massively.
You can work with environment variables to easily switch the environment where you want to run your tests: https://docs.cypress.io/guides/guides/environment-variables#Setting
I hope this helps. I'm looking forward to reading what others have to say about this, too.
Happy Testing!
Jackie
PS. I see that you also asked about how to add Cypress to your CI/CD pipeline. I think that should be a completely separate topic. It is also way too high level to answer. Maybe it's best to start here: https://docs.cypress.io/guides/continuous-integration/introduction#What-you-ll-learn
(*) I'm talking about a stable environment here, but this also includes stable code and even a stable application. If your application and code are at a very early stage, really ask yourself whether you already want to start automating your functional UI tests in Cypress - chances are that many things will change (many times) and you'll spend hours updating your tests. Maybe it is better to only think about the scenarios that you want to automate at that stage of the project.

Understand Heap-dump and Thread-dump for Large-scale application

I went through some tutorials on Java profilers (JVisualVM, JProfiler, YourKit) on YouTube as well as Pluralsight. I got a little bit of an idea of how to inspect a heap dump and how to find a memory leak, but all of these tutorials are elementary.
My query is: when I analyse a heap dump, I see only three types of objects, char[], java.lang.String and java.lang.Object[], which cover almost all of the memory (always more than 70%), but none of them are from my application.
The same goes for the thread dump: I see the HTTP-8080 threads (the port I am using), which lead me to a Runnable's run method or to the java.util.concurrent package, and again not to any code specific to my project.
I also discussed the problem with some of my friends and analysed their applications as well (which don't have any memory-leak or performance issues), and their results are almost the same.
Would you help me understand how to analyse a heap dump and a thread dump in JVisualVM for a large-scale application? Any video, blog, anything would be helpful.
I am using OpenJDK 11, AWS ECS (Docker) and Tomcat as the web server.
Check out the Eclipse Memory Analyzer (https://www.eclipse.org/mat/). I have used it several times in the past to successfully find memory leaks, but it takes some time to get familiar with it.
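MAT (like JVisualVM) works on an .hprof file, so the first step is capturing one, e.g. with jmap -dump:live,format=b,file=heap.hprof <pid>, plus jstack <pid> for the threads. As a sketch, you can also trigger dumps from inside the JVM on a HotSpot-based JDK such as OpenJDK 11 (the file name is up to you):
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class DumpHelper {

        // Writes an .hprof heap dump that MAT or JVisualVM can open ("live" = only reachable objects).
        public static void dumpHeap(String file) throws Exception {
            HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            diagnostics.dumpHeap(file, true);
        }

        // Prints a quick thread dump with stack traces and lock info
        // (note: ThreadInfo.toString() truncates very deep stacks).
        public static void dumpThreads() {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }
    }
Once the dump is open in MAT, start with the histogram and the dominator tree: char[] and String dominating the histogram is normal, because they are the leaf objects. Right-click them and merge the shortest paths to the GC roots; the retaining objects that show up along those paths are usually your own classes.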
Another piece of advice: create benchmark tests with Apache JMeter (https://jmeter.apache.org/) or another tool that lets you reproduce the performance/memory issue and identify the execution path that causes your problems.
Be aware that AWS doesn't like it when someone executes performance/penetration tests against their services (https://aws.amazon.com/aup/).

What is a smoke test? [duplicate]

What is a smoke test and why is it useful?
"Smoke test" is a term that comes from electrical engineering. It refers to a very basic, very simple test, where you just plug the device in and see if smoke comes out.
It doesn't tell you anything about whether or not the device actually works. The only thing it tells you is that it is not completely broken.
It is usually used as a time-saving step before the more thorough regression/integration/acceptance tests, since there is no point in running the full test suite if the thing catches fire anyway.
In programming, the term is used in the same sense: it is a very shallow, simple test that checks for some simple properties like: does the program even start? Can it print its help message? If those simple things don't work, there is no point in even attempting to run the full test suite, which can sometimes take minutes or even hours.
A smoke test is basically just a sanity check to see if the software functions on the most basic level.
If your smoke test fails, it means there is no point in running your other functional tests.
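As a rough sketch, a smoke check for a web application can be as small as one request against a health URL (the URL here is hypothetical); if this single test fails, there is no point running the rest:
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class SmokeTest {

        // One shallow check: does the deployed application respond at all?
        @Test
        void applicationRespondsAtAll() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://staging.example.com/health")) // hypothetical URL
                    .GET()
                    .build();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            assertEquals(200, response.statusCode(), "App is not even up; skip the full suite");
        }
    }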
A smoke test is a quick, lightweight test or set of tests, usually automated, that confirms that the basic functionality of the system is working correctly. It tends to emphasize broad rather than deep tests, and is usually done before launching a more extensive set of tests.
From Wikipedia:
It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.
In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.
A smoke test is a quick test to see if the application will "catch on fire" when you run it for the first time. In other words, it checks that the application isn't fundamentally broken.

Unit testing and iPhone development

I'm currently using OCUnit, which ships with Xcode 3.2.4, for unit testing my application. My workflow is often to set some breakpoints in a failing unit test in order to quickly inspect the state. I'm using Apple's OCUnit setup:
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/iphone_development/135-Unit_Testing_Applications/unit_testing_applications.html
but the setup above gives me some headaches. Apple distinguishes between application tests and logic tests. As I see it:
You cannot debug logic tests. It's as if they're invisibly run when you build your project.
You can debug application tests, but you have to run these on the device and not the simulator (what is the reason for this?)
This means that everything moves kind of slowly with my current workflow. Any hints on getting app tests to run on the simulator? Or any pointers to another test framework?
Would e.g. google-toolbox-for-mac work better, in general or for my specific needs?
Also, general comments on using breakpoints in the unit tests are welcome! :-)
I have used the Google Toolbox testing rig in the past and it worked fine, it ran both on the Simulator and the device and I could debug my tests if I wanted to. Recently I got fed up with bundling so much code with each of my projects and tried the Apple way.
I also find the logic/app tests split weird, especially as I can’t find any way to debug the logic tests now. (And if you’re using some parts of AVFoundation that won’t build for Simulator, you are apparently out of luck with logic tests completely, which seems strange.) One of the pros is that I can run the logic tests quickly during build.
I guess this does not help you that much – the point is that you can debug the tests under GTM. And you might also want to check out this related question.
I know this isn't really a good answer, nor is it completely helpful to your cause. But I've been using NSLog to run my unit tests (if nothing is output to the console, then success). When I comment out my test methods, the tests don't run. I found this much more predictable and reliable than OCUnit. I'd much rather use a real, true unit tester, but it was too frustrating to deal with the often strange errors that could occur with OCUnit, and also the other shortfalls/lack of features you describe above.

Is Selenium a good piece of testing software to use? [closed]

On my last project, I created some test cases through Selenium, then automated them so they would run on every build launched from Hudson. It worked fantastically, and was consistent for about a month.
Then the tests started failing. Most of the time it was timing issues that caused the failures. After about two weeks of effort put in over the course of the next two months, it was decided to drop the Selenium tests. They should have been passing, but the responses and timing of the web application varied to the extent that tests would fail when they should have passed.
Did you have a similar experience? Is Selenium still a good tool to use for Web Application testing?
Selenium is a great tool for web testing, although it's important to make sure your tests are reliable. Timing issues are common, so I would suggest the following:
Make sure you set a sensible timeout value. I find between 1 and 2 minutes works well.
Don't put fixed pauses in your tests; they are the main cause of timing issues. Instead, use the waitFor* commands. waitForCondition is very useful (see the sketch below).
Identify external calls that can cause timeouts and block that traffic from the machine running the tests. You can do this at the firewall level or simply redirect the domain to localhost in your hosts file.
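For reference, waitForCondition and the other waitFor* commands come from the old Selenium RC API; in today's Selenium WebDriver the same idea is an explicit wait. A minimal sketch in Java, with the timeout value and locator as placeholders:
    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class Waits {

        // Explicit wait: poll for a condition instead of sleeping for a fixed time.
        public static void waitForResults(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(90));      // timeout is a guess
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("results"))); // hypothetical locator
        }
    }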
Update:
You should also consider using Selenium Grid. It won't directly help with your timeouts, but it can provide a quicker feedback loop for your failures. If you're using TestNG to run your tests, you can get it to automatically rerun failures - this gives tests failing due to timeouts a second chance.
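A minimal sketch of that TestNG retry hook (the class and test names are made up):
    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;
    import org.testng.annotations.Test;

    // Retries a failed test once, so a one-off timeout does not fail the whole run.
    public class RetryOnce implements IRetryAnalyzer {
        private int attempts = 0;

        @Override
        public boolean retry(ITestResult result) {
            return attempts++ < 1;
        }
    }

    class SearchTest {
        @Test(retryAnalyzer = RetryOnce.class)
        public void searchReturnsResults() {
            // ... the actual Selenium steps would go here
        }
    }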
At my previous job we investigated using it as a test tool but found it too fussy to bother integrating into our process. Pretty much the same experience as you.
This was two or three years ago, in version 0.8 or so, though; I would expect it to have gotten better since then.
I've had a similar experience. We created a project that would bootstrap a selenium proxy and run an automated suite of tests, but unfortunately it clashed with our build server in a huge way. There were too many browser inconsistencies and third party dependencies for us to reliably add it to our build. It was also too slow for us, and added too much time to our builds.
Most of the errors we would run into would be timeouts.
We ended up keeping the project and use it for integration tests on major releases. The bootstrapping code that we used has proved invaluable in other areas as well.
Probably best run after a nightly build, when there's time for it. It, or WatiN, could be integrated with your build scripts.
Very much depends on your team, but if you've a small testing team this can be priceless for picking up some very obvious runtime issues.
I'd keep the scope modest and really use them for some sanity testing that at least each page can load.
I did have a similar experience with Selenium. We had a legacy system which we built a sort of testing framework around so that we could test the changes we were making. This worked great at the start but eventually some of the earlier tests began to fail (or take too long to run) so we started to turn off more and more of the tests.
To fix some of the issues, we stopped Selenium from opening and closing a browser for each test, i.e. the tests were broken up into blocks, and for each block of tests the browser was only opened once. This reduced the time taken to run the tests from several hours to 30 minutes.
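With the current WebDriver API, the equivalent pattern is roughly one browser per test class (block) instead of one per test; a sketch with JUnit 5, where ChromeDriver and the URLs are assumptions:
    import org.junit.jupiter.api.AfterAll;
    import org.junit.jupiter.api.BeforeAll;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class CheckoutFlowTest {

        static WebDriver driver;

        @BeforeAll
        static void openBrowserOncePerBlock() {
            driver = new ChromeDriver();   // one browser for the whole class
        }

        @AfterAll
        static void closeBrowser() {
            driver.quit();
        }

        @Test
        void homePageLoads() {
            driver.get("https://staging.example.com");        // hypothetical URL
        }

        @Test
        void searchPageLoads() {
            driver.get("https://staging.example.com/search"); // hypothetical URL
        }
    }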
Despite the issues I think Selenium is a great tool for testing web-based applications. Many of the problems we experienced centered on the fact that the system we were testing was a legacy system. If you like test-driven development then Selenium fits in very well with that development practice.
EDIT:
Another good thing about Selenium is the ability to track which developer introduced the error, as well as where the error is (source file). This makes life so much easier when it comes to fixing the error.
We initially tried to use Selenium on our build machine, but the tests were very brittle and we found we spent a lot of time trying to keep old tests running when changes occurred to unrelated functionality accessed through the same set of pages. We were automating the tests through NUnit.
I would use selenium more as a customer acceptance and integration testing tool. I'd agree with using it for a nightly build on functionality that is stable.
At first glance, Selenium looks great. Unfortunately, as sometimes happens with open source projects, they rush to implement new features instead of making it more stable.