Test Automation architecture [closed]

My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, and mobile.
What would you experienced folks recommend starting from? I mean resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!

Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Also, don't try to test EVERYTHING. This is what I see fail over and over. Most automated test suites die under their own weight. Someone decides that EVERYTHING must be tested, so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises what it wants because the tests cost so much to update. In the first case the investment in tests was a complete waste; the second case is even more dangerous, because it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.

As Rob Conklin said:
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you have all of these in place, you can enhance the architecture later as you add new tests.

In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should run after each commit to the master branch.
Identify the areas of your system where test automation brings the biggest value.
Start with integration tests and leave end-to-end tests for later.
Keep every automated test small and checking only one function.
Prefer low-level test interfaces such as an API or CLI over the GUI (see the sketch below).
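For illustration, a single check through an API is usually faster and far less brittle than the same check through the GUI. A minimal pytest sketch, assuming a hypothetical /api/login endpoint (the base URL, path, and field names are placeholders, not a real API):

import requests

BASE_URL = "https://example.test"  # placeholder test-environment URL

def test_login_returns_token():
    """Exercise the login function through the API instead of the GUI."""
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"username": "demo_user", "password": "demo_pass"},
        timeout=10,
    )
    # One small check per test: the call succeeds and returns a token.
    assert response.status_code == 200
    assert "token" in response.json()

The equivalent GUI test would need a browser session, element locators, and explicit waits, all of which add maintenance cost.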

I'm curious which path you chose. We run automated UI tests for mobile, desktop applications, and web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
created a mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an API for each system:
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams (a rough sketch follows below).
This gives us a framework for writing tests that span the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as the user and check their balance
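To make the layered idea concrete, here is a condensed Python sketch of what such business-function APIs can look like. The class names mirror the ones above; the method names, locators, URLs, and the way driver sessions are created are hypothetical placeholders rather than the actual framework:

from selenium.webdriver.common.by import By

class WebApi:
    def __init__(self, driver):
        self.driver = driver  # a selenium.webdriver instance created elsewhere

    def login(self, username, password):
        """Business function: log in through the web UI."""
        self.driver.get("https://example.test/login")  # placeholder URL
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def get_balance(self):
        return self.driver.find_element(By.ID, "balance").text

class MobileApi:
    def __init__(self, driver):
        self.driver = driver  # an Appium driver instance created elsewhere

    def create_user(self, username, password):
        """Business function: register a user in the mobile app."""
        ...  # Appium interactions would go here

class DesktopApi:
    def __init__(self, driver):
        self.driver = driver  # a WinAppDriver session created elsewhere

    def enter_case(self, username, case_data):
        """Business function: enter a case in the desktop client."""
        ...  # WinAppDriver interactions would go here

# A cross-system test then reads as a sequence of business steps:
def test_balance_after_case(mobile_api, desktop_api, web_api):
    mobile_api.create_user("jdoe", "s3cret")
    desktop_api.enter_case("jdoe", {"type": "refund", "amount": 50})
    web_api.login("jdoe", "s3cret")
    assert web_api.get_balance() == "50.00"

Because the tests only call business functions, a UI change touches one API class instead of every test.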
Before getting started on the framework, it is always best to learn from others' test automation mistakes.

Start by prioritizing which tests should be automated: business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it is highly useful to automate tests early that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex tests. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report test results, schedule tests, and handle bulk test runs.
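One common way to encode that prioritization, if you happen to use pytest, is to tag tests with markers so the business-critical set runs on every build while the broader cross-browser set runs on a schedule. A small sketch (the marker names are a convention you define yourself, e.g. in pytest.ini; they are not built into pytest):

import pytest

@pytest.mark.smoke
def test_application_starts_and_renders_home_page():
    ...  # business-critical check that runs on every build

@pytest.mark.regression
@pytest.mark.parametrize("browser", ["chrome", "firefox", "edge"])
def test_pages_render_correctly(browser):
    ...  # broader cross-browser rendering check, run on a schedule

You would then run "pytest -m smoke" on every build and "pytest -m regression" nightly; registering the markers in pytest.ini keeps pytest from warning about unknown marks.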

Related

How to fit automation (System or E2E) tests in agile development lifecycle? [closed]

I am an automation test engineer and have never found a good answer on how to fit system integration (E2E) tests into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for verification & validation of user stories once they have been implemented.
The current process we follow is a mixture of static reviews and manual/automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, ensure everybody is on the same page, estimate, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer. The implementation includes the necessary unit and integration tests as well.
4. The story then goes for a code review.
5. Once the code review passes, it is deployed and released into production.
6. If something goes wrong in production, the code is reverted.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
In such situations, we are compromising on quality and releasing the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
Any good suggestions around this process are highly appreciated.
Here are some suggestions.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Some possible approaches include:
Creating mock micro-services
Creating a test environment which runs versions of the micro-services
Both of these approaches will be challenging, but when solved they will typically pay off in the medium to long term (see the sketch below).
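As a rough illustration of the first approach, a mock micro-service can be as small as a canned HTTP responder. A Python sketch using only the standard library, where the /accounts/ endpoint and payload are made-up placeholders for whatever your real services expose:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_ACCOUNT = {"id": "42", "status": "active", "balance": 100.0}

class MockAccountsService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer a hypothetical /accounts/<id> lookup with canned data.
        if self.path.startswith("/accounts/"):
            body = json.dumps(CANNED_ACCOUNT).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the service under test at http://localhost:8081 instead of the
    # real accounts micro-service.
    HTTPServer(("localhost", 8081), MockAccountsService).serve_forever()

The service under test is then pointed at the mock's address instead of the real dependency, which makes the scenario repeatable.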
Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
The value from automated regression tests comes when they have reasonable levels of coverage (say 50-70% of important features are covered). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration) which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing automation tests? This would allow you to balance the creation of tests with the coding of new requirements. It may give the appearance of reducing the team's output, but the team will become increasingly efficient as the coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.
For our team, we lose time, but after a development story is done the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code via HP UFT and, if successful, set it up for deployment to production.
This probably isn't the best way to get things done currently, but it has been a way for us to make sure everything gets automated and tested before heading to Production.

What is smoke testing, and in what circumstances can we use it in our project? [closed]

I don't have a clear idea about smoke testing and sanity testing. Some books say they are the same, but some testers on some projects call it smoke testing and others call it sanity testing. Please give me a clear-cut answer to my question.
Sorry, but there is no clear cut. As you explain in your question, there is no consensus on the definition, or at least on the difference between sanity and smoke testing.
Now, about smoke tests (or sanity tests!): those are the tests that you can run quickly to get a general idea of how your System Under Test (SUT) behaves. For software testing, this will typically include some kind of installation, setup, playing around with the features, and shutdown. If nothing goes wrong, then you know you can go on with your testing. This provides quick feedback to the team and avoids starting a longer test campaign only to realise that some major features are broken and the SUT is not really usable.
This definition stands for both manual and automated tests. For example, if you use Jenkins (for CI) and Robot Framework (for test automation), you could create two jobs in Jenkins: smoke tests and full tests (using tags, this is straightforward). The smoke test job could last a couple of minutes (say 15 minutes at most) and the full test job could last as long as needed. Thus the smoke test job gives you quick feedback on the SUT build (if your smoke test job is a child project of the SUT build, of course).
Smoke testing is also known as build verification testing.
Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Sanity testing is a type of testing that checks whether a new software version performs well enough to accept it for a major testing effort.
Think of the analogy of testing a new electronic device. The first thing you do is turn it on to see if it starts smoking. If it does, there's something fundamentally wrong, so no additional testing either can be done or is worth doing.
For a website, the simplest smoke test is to go to the website and see if the http response is 200. If not, there's no point in testing further. A more useful smoke test might be to hit every page once.
Smoke tests should run as fast as possible. The goal is quick feedback so that you can make a decision.
As for the difference between smoke tests and sanity tests... There is no significant difference. What you call them is irrelevant, as long as everyone in your organization has the same basic understanding. What's important is a quick verification that the system under test is running and has no blatantly obvious flaws.
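As a concrete illustration of the HTTP 200 check mentioned above, here is a minimal Python sketch using requests and pytest; the base URL and pages listed are placeholders for your own key URLs:

import requests

BASE_URL = "https://example.test"      # placeholder site under test
PAGES = ["/", "/login", "/search"]     # a handful of key pages

def test_smoke_key_pages_respond():
    """Quick smoke check: every key page answers with HTTP 200."""
    for page in PAGES:
        response = requests.get(f"{BASE_URL}{page}", timeout=5)
        assert response.status_code == 200, f"{page} returned {response.status_code}"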
The smoke test is designed to see if the device seems to work at all. It determines whether we can go on with more extensive testing or whether something fundamental is broken.
The sanity tests are designed to test the most frequent use cases.
Example:
You are testing a cellphone.
Smoke test - does it start up without crashing/starting to smoke, etc.? Does it seem to work well enough to perform more extensive testing?
Sanity test - can you place/receive calls/messages - the most basic and most used features?
These are both done often and should be quick to run through; they are NOT extensive tests.
Smoke Testing is testing the basic and critical features of an application, before going ahead and doing thorough testing of that application.
Note: Only if the smoke testing passes can we carry on with the other stages of testing; otherwise the product is not fit to be tested and should be sent back to the development team.
Sanity Testing: There is no clear definition as such, but this one I picked up from the Internet:
Check the entire application at a basic level, focusing on breadth rather than depth.

Manual vs. automated testing on large project with a small team (and little time) [closed]

I work in a small development team consisting of 5 programmers, none of whom have any real testing experience. The product we develop is a complex VMS, basically consisting of a (separate) video server and a client for viewing live and recorded video. Since video processing requires a lot of hardware power, the software is typically deployed on multiple servers.
We use a slimmed down version of feature driven development. Over the past few months a lot of features were implemented, leaving almost no time for the luxury of QA.
I'm currently researching a way for us to test our software as (time-)efficiently as possible. I'm aware of software methodologies built around testing, such as TDD. However, since many features are built around the distributed architecture, it is hard to write individual tests for individual features, given that many of them require some of the endless scenarios in which the system can be deployed to be replicated in order to be tested properly.
For example, we recently developed a failover feature, in which one or more idle servers monitor other servers and take their place in case of failure. Likely scenarios include failover servers in a remote location or a different subnet, or multiple servers failing at a time.
Manually setting up these scenarios takes a lot of valuable time. Even though I'm aware that manual initialization will always be required in this case, I cannot seem to find a way to automate these kinds of tests (preferably defining them before implementing the feature) without having to invest an equal or greater amount of time in actually creating the automated tests.
Does anyone have any experience in a similar environment, or can tell me more about (automated) testing methodologies or techniques which are fit for such an environment? We are willing to overthrow our current development process if it enhances testing in a significant way.
Thanks in advance for any input. And excuse my grammar, as English is not my first language :)
I approach test strategy by thinking of layers in a pyramid.
The first layer in the pyramid are your unit tests. I define unit tests as tests that exercise a single method of a class. Each and every class in your system should have a suite of tests associated with it. And each and every method should have a set of tests included in that suite. These tests can and should exist in a mocked environment.
This is the foundation of testing and quality strategy. If you have solid test coverage here, a lot of issues will be nipped in the bud. These are the cheapest and easiest of all the tests you will be creating. You can get a tremendous bang for your buck here.
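As an illustration of this layer, here is a small Python sketch of a unit test that exercises a single method with its collaborator mocked out. The InvoiceService and repository names are hypothetical examples, not code from any particular system:

from unittest.mock import Mock

class InvoiceService:
    """Hypothetical class under test, defined inline to keep the sketch self-contained."""
    def __init__(self, repository):
        self.repository = repository

    def total_for_customer(self, customer_id):
        invoices = self.repository.find_by_customer(customer_id)
        return sum(invoice["amount"] for invoice in invoices)

def test_total_for_customer_sums_amounts():
    # The repository collaborator is mocked, so only one method is exercised.
    repository = Mock()
    repository.find_by_customer.return_value = [{"amount": 10}, {"amount": 15}]
    service = InvoiceService(repository)

    assert service.total_for_customer("c-1") == 25
    repository.find_by_customer.assert_called_once_with("c-1")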
The next layer in the pyramid are your functional tests. I define functional tests as tests that exercise the classes in a module. This is where you are testing how various classes interact with one another. These tests can and should exist in a mocked environment.
The next layer up are your integration tests. I define integration tests as tests that exercise the interaction between modules. This is where you are testing how various modules interact with one another. These tests can and should exist in a mocked environment.
The next layer up is what I call behavioral or workflow tests. These are tests which exercise the system as would a customer. These are the most expensive and hardest tests to build and maintain, but they are critical. They confirm that the system works as a customer would expect it to work.
The top of your pyramid is exploratory testing. This is by definition a manual activity. This is where you have someone who knows how to use the system take it through its paces and work to identify issues. This is to a degree an art and requires a special personality. But it is invaluable to your overall success.
What I have described above, is just a part of what you will need to do. The next piece is setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline.
Whenever code is committed to one of your repos (and I do hope that you have a project as big as this broken up into separate repos), that component should undergo static analysis (i.e. lint it), be built, have tests executed against it, and have code coverage data gathered.
Just the act of building each component of your system regularly, will help to flush out issues. Combine that with running unit/functional/integration tests against it and you are going to be identifying a lot of issues.
Once you have built a component, you should deploy it into a test or staging environment. This process must be automated and able to run unattended. I highly recommend you consider using Chef from Opscode for this process.
Once you have it deployed in a staging or test environment, you can start hitting it with workflow and behavioral tests.
I approach testing first by:
choosing P0/P1 test cases for functional and automated testing
choosing what framework I will use and why
getting the tools and framework set up while doing manual testing for releases
building an MVP, at least automating the high-priority test cases
afterwards, building a test suite of regression test cases that runs on a daily basis.
The main thing is that you have to start with an MVP.

How should we automate system testing? [closed]

We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as Workflows and Validation Rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences on System testing with the SalesForce.com platform and how should we approach this?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in what areas. This may be tricky.
Maybe you should have an overall master test plan specifying only the test strategy, main tool set, risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start to work on a given piece of functionality or iteration (I hope you are doing this in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools/estimates/test coverage based on experience from previous parts.
This way you can state your general approach and priorities at the beginning, but let yourself adapt later as the project progresses.
The question of how much testing you need to put into testing COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests and cover almost the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing based on extensive risk analysis.
If your application is not concerned with any of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we created the Recruiting application from the SalesForce handbook, built a suite of tests, and described our path step by step on our blog. It may help you get started if you decide to write Selenium code to test your app.
I believe the problem with SalesForce is you have Unit and UI testing, but no Service-level testing. The SpecFlow I've seen which drives Selenium UI is brittle and doesn't encapsulate what I'm after in engineering a service-level test solution:
When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And then I click the 'proceed to checkout' button
That is not the spirit or intent of Specflow.
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
In order to test that with selenium, you essentially have to translate that to clicks and typing, whereas in the .NET realm, you can instantiate objects, etc., in the middle-tier, and perform hundreds of instances and derivations against the same BACKGROUND (mock setup).
I'm told that you can expose SF through an API at some security risk. I'd love to learn more about THAT.

What is the difference between integration testing and functional testing? [closed]

Are functional testing and integration testing the same?
You begin your testing with unit testing; then, after completing unit testing, you go for integration testing, where you test the system as a whole. Is functional testing the same as integration testing? You are still taking the system as a whole and testing it for conformance to its functionality.
Integration testing is when you test more than one component and how they function together. For instance, how another system interacts with your system, or the database interacts with your data abstraction layer. Usually, this requires a fully installed system, although in its purest forms it does not.
Functional testing is when you test the system against the functional requirements of the product. Product/Project management usually writes these up and QA formalizes the process of what a user should see and experience, and what the end result of those processes should be. Depending on the product, this can be automated or not.
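To make the contrast concrete, here is a rough Python sketch under the definitions above. The UserRepository and Application classes are hypothetical stand-ins defined inline so the example is self-contained; the integration test exercises the repository together with a real (in-memory SQLite) database, while the functional test checks a requirement through the application's public interface:

import sqlite3

class UserRepository:
    """Hypothetical data-abstraction layer."""
    def __init__(self, connection):
        self.connection = connection

    def add(self, name, password):
        self.connection.execute(
            "INSERT INTO users (name, password) VALUES (?, ?)", (name, password))

    def find(self, name):
        cursor = self.connection.execute(
            "SELECT name, password FROM users WHERE name = ?", (name,))
        return cursor.fetchone()

class Application:
    """Hypothetical system built on top of the repository."""
    def __init__(self, repository):
        self.repository = repository

    def register(self, name, password):
        self.repository.add(name, password)

    def login(self, name, password):
        row = self.repository.find(name)
        return row is not None and row[1] == password

def make_app():
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (name TEXT, password TEXT)")
    return Application(UserRepository(connection))

# Integration test: how two components (the repository and a real database)
# interact with one another.
def test_integration_repository_round_trip():
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (name TEXT, password TEXT)")
    repository = UserRepository(connection)
    repository.add("alice", "s3cret")
    assert repository.find("alice") == ("alice", "s3cret")

# Functional test: a requirement ("a registered user can log in") checked
# against the system as the user would experience it.
def test_functional_registered_user_can_log_in():
    app = make_app()
    app.register("alice", "s3cret")
    assert app.login("alice", "s3cret") is True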
Functional Testing:
Yes, we are testing the product or software as a whole to check whether it is functionally working properly or not (testing buttons, links, etc.).
For example: a login page.
You provide the username and password and test whether it takes you to the home page or not.
Integration Testing:
Yes, you test the integrated software only, but you test where the data flow is happening and whether any changes are happening in the database.
For example: sending e-mail.
You send a mail to someone; there is a data flow and also a change in the database (the sent table increases by 1).
Remember - clicking links and images is not integration testing. I hope you understand why: there is no change in the database from just clicking a link.
Hope this helped you.
Functional Testing: It is a process of testing where each and every component of the module is tested. E.g., if a web page contains a text field, radio buttons, buttons, and a drop-down, those components need to be checked.
Integration Testing: A process where the data flow between two modules is checked.
This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define these from their own point of view. It's very similar to the debate over Pluto. (If it were closer to the Sun, would it be a planet?)
Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else. (Well, as little else as possible.) That means mocks, fakes, and fixtures.
At the other end of the spectrum there is what many people call system integration testing. That's testing as much as possible, but still looking for bugs in your own CUT.
But what about the vast expanse between?
For example, what if you test just a little bit more than the CUT? What if you include a Fibonacci function, instead of using a fixture which you had injected? I would call that functional testing, but the world disagrees with me.
What if you include time() or rand()? Or what if you call http://google.com? I would call that system testing, but again, I am alone.
Why does this matter? Because system-tests are unreliable. They are necessary, but they will sometimes fail for reasons beyond your control. On the other hand, functional tests should always pass, not fail randomly; if they are fast, they might as well be used from the start in order to use Test-Driven Development without writing too many tests for your internal implementation. In other words, I think that unit-tests can be more trouble than they are worth, and I have good company.
I put tests on 3 axes, with all their zeroes at unit-testing:
Functional-testing: using real code deeper and deeper down your call-stack.
Integration-testing: higher and higher up your call-stack; in other words, testing your CUT by running the code which would use it.
System-testing: more and more unrepeatable operations (O/S scheduler, clock, network, etc.)
A test can easily be all 3, to varying degrees.
I would say that the two are tightly linked to each other and very tough to distinguish.
In my view, integration testing is a subset of functional testing.
Functional testing is based on the initial requirements you receive. You test that the application behaviour is as expected with respect to the requirements.
When it comes to integration testing, it is about the interaction between modules: if module A sends an input, is module B able to process it or not?
Integration testing - Integration testing is nothing but testing different modules together; you have to test the relationships between modules. For example, you open Facebook and see the login page; after entering your login ID and password you see the Facebook home page. The login page is one module and the home page is another module, and you have to check only the relationship between them: when you log in, the home page must open, not a message box or anything else. There are two main approaches to integration testing: the top-down approach and the bottom-up approach.
Functional testing - In functional testing you only have to think about input and output. You have to think like an actual user: testing what input you gave and what output you got is functional testing. You only have to observe the output; in functional testing you don't need to test the code of the application or software.
In functional testing the tester focuses only on the functionality and sub-functionality of the application: whether the functionality of the app is working properly or not.
In integration testing the tester has to check the dependencies between modules or sub-modules. For example, records from one module should be fetched and displayed correctly in another module.
Integration Testing:
When unit testing is done and issues in the related components are resolved, all the required components are integrated into one system so that it can perform an operation. Testing whether the system works properly after combining its components is called integration testing.
Functional Testing:
Testing is mainly divided into two categories:
1. Functional testing - testing whether the software works according to the requirements of the user or not.
2. Non-functional testing - testing whether the software satisfies quality criteria such as stress tests, security tests, etc.
Usually the customer provides requirements only for the functional tests; for non-functional tests the requirements may not be stated explicitly, but the application still has to perform those activities.
Integration testing: It can be seen as testing how the different modules of the system work together. It mostly refers to the integrated functionality of the different modules, rather than to the individual components of the system. For any system or software product to work efficiently, every component has to be in sync with the others. Most of the time, the tool we use for integration testing is the same one we used for unit testing. It is used in complex situations, when unit testing proves to be insufficient to test the system.
Functional testing: It can be defined as testing the individual functionality of modules. It refers to testing the software product at an individual level, to check its functionality. Test cases are developed to check the software for expected and unexpected results. This type of testing is carried out more from a user perspective; that is to say, it considers the expectation of the user for a given type of input. It is also referred to as black-box testing or closed-box testing.
Checking the functionality of the application is generally known as functional testing, whereas integration testing checks the flow of data from one module to another.
Let's take the example of a money transfer app. Suppose we have a page in which we enter all the credentials, press the transfer button, and check that we get a success message; that is functional testing. In the same example, if we verify that the amount was actually transferred, then it is integration testing.
Authors diverge a lot on this. I don't believe there is "the" correct interpretation for this. It really depends.
For example: most Rails developers consider unit tests as model tests, functional tests as controller tests and integration tests as those using something like Capybara to explore the application from a final user's perspective - that is, navigating through the page's generated HTML, using the DOM to check for expectations.
There are also acceptance tests, which in turn are a "live" documentation of the system (usually they use Gherkin to make it possible to write them in natural language), describing all of the application's features through multiple scenarios, which are in turn automated by a developer. Those, IMHO, could be considered both functional tests and integration tests.
Once you understand the key concept behind each of those, you get to be more flexible regarding right or wrong. So, again IMHO, a functional test could also be considered an integration test. An integration test, depending on the kind of integration it is exercising, may not be considered a functional test - but you generally have some requirements in mind when you are writing an integration test, so most of the time it can also be considered a functional test.