How should we automate system testing? [closed] - testing

We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as Workflows and Validation Rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-the-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences with system testing on the SalesForce.com platform and how we should approach it?
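For the custom parts, I'm picturing something along these lines - a rough sketch using Python's behave as a stand-in for Cucumber/SpecFlow; the URL, page, and locators are made up for illustration:

```python
# features/lead_capture.feature (hypothetical custom feature)
#
#   Feature: Lead capture form
#     Scenario: Submitting a valid lead
#       Given I am on the lead capture page
#       When I submit a lead named "Jane Doe"
#       Then I should see a confirmation message

# features/steps/lead_steps.py -- step definitions driving Selenium
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.my.site.example.com"  # placeholder, not a real org URL

@given("I am on the lead capture page")
def step_open_page(context):
    context.driver = webdriver.Chrome()
    context.driver.get(f"{BASE_URL}/lead-capture")

@when('I submit a lead named "{name}"')
def step_submit_lead(context, name):
    context.driver.find_element(By.ID, "leadName").send_keys(name)   # locator assumed
    context.driver.find_element(By.ID, "submitBtn").click()          # locator assumed

@then("I should see a confirmation message")
def step_check_confirmation(context):
    message = context.driver.find_element(By.CSS_SELECTOR, ".confirmation").text
    assert "thank you" in message.lower()
    context.driver.quit()
```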

That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in which areas. That can be tricky.
Maybe you should have an overall Master Test Plan specifying only the test strategy, the main tool set, the risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start to work on a given piece of functionality or iteration (I hope you are working in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools, estimates, and test coverage based on experience from the previous parts.
This way you can say at the beginning what your general approach and priorities are, but you let yourself adapt as the project progresses.
The question of how much testing to put into COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests, almost testing the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing based on extensive risk analysis.
If your application is not concerned with any of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least for the happy paths) for the workflows you will be using in your business processes.

When we started learning Selenium testing in 2008, we built the Recruiting application from the SalesForce handbook, created a suite of tests, and described our path step by step on our blog. It may help you get started if you decide to write Selenium code to test your app.

I believe the problem with SalesForce is that you have unit and UI testing, but no service-level testing. The SpecFlow I've seen driving a Selenium UI is brittle and doesn't capture what I'm after in engineering a service-level test solution:
When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And then I click the 'proceed to checkout' button
That is not the spirit or intent of SpecFlow. Its spirit is more like:
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
In order to test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
I'm told that you can expose SF through an API, at some security risk. I'd love to find out more about that.
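For reference, here is a sketch of what a service-level check could look like if the org's REST API is enabled, using the simple_salesforce library. The credentials, objects, and the validation behaviour are assumptions about your particular org:

```python
# Service-level checks against Salesforce data, bypassing the UI entirely.
# Requires: pip install simple-salesforce
# Credentials, object names and validation behaviour below are placeholders --
# adjust them to your own org and its Validation Rules.
from simple_salesforce import Salesforce
from simple_salesforce.exceptions import SalesforceMalformedRequest

sf = Salesforce(
    username="qa.user@example.com",   # placeholder credentials
    password="secret",
    security_token="token",
)

def test_lead_requires_company():
    """A validation on Lead (assumed) should reject a record without a Company."""
    try:
        sf.Lead.create({"LastName": "Smith"})   # missing Company on purpose
        raised = False
    except SalesforceMalformedRequest:
        raised = True
    assert raised, "Expected the org to reject a Lead with no Company"

def test_lead_roundtrip():
    """Create a Lead through the API and read it back."""
    result = sf.Lead.create({"LastName": "Smith", "Company": "Acme"})
    lead = sf.Lead.get(result["id"])
    assert lead["Company"] == "Acme"
    sf.Lead.delete(result["id"])   # clean up test data
```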

Related

How to fit automation (System or E2E) tests in agile development lifecycle? [closed]

I am an automation test engineer and have never found a good answer on how to fit system integration (E2E) tests into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for verification & validation of user stories once they have been implemented.
The current process we are following is a mixture of both static reviews and manual / Automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, ensure everybody is on the same page, estimate, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer. The implementation includes necessary unit and integration tests as well.
4. The story will then go for a code review
5. Once the code review is passed, it will be deployed & released into production.
6. If something goes wrong in production, the code will be reverted.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not mature enough for us to write the automation tests quickly enough before the developers implement their code.
In such situations, we are compromising on quality and releasing the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
Any good suggestions around this process are highly appreciated.
Here are some suggestions.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Some possible approaches include:
Creating mock micro-services (see the sketch after this list)
Creating a test environment which runs versions of the micro-services
Both of these approaches will be challenging, but once solved they will typically pay off in the medium to long term.
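As a rough illustration of the first approach, here is a minimal mock micro-service using Flask. The endpoints and payloads are invented; in practice you would shape them after the contract of the real service:

```python
# A minimal stand-in for a downstream micro-service, so QA can exercise the
# system without the real dependency. Endpoint and payload are made up.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Canned data the mock serves instead of hitting a real database.
ACCOUNTS = {"42": {"id": "42", "balance": 100.0}}

@app.route("/accounts/<account_id>", methods=["GET"])
def get_account(account_id):
    account = ACCOUNTS.get(account_id)
    if account is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(account)

@app.route("/accounts/<account_id>/debit", methods=["POST"])
def debit(account_id):
    # Deliberately simple: just enough behaviour for the system under test.
    amount = request.get_json().get("amount", 0)
    ACCOUNTS[account_id]["balance"] -= amount
    return jsonify(ACCOUNTS[account_id])

if __name__ == "__main__":
    app.run(port=5001)   # point the system under test at http://localhost:5001
```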
Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
The value from automated regression tests comes when they have reasonable levels of coverage (say 50-70% of important features are covered). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration) which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not mature enough for us to write the automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing automation tests? This would allow you to balance the creation of tests with the coding of new requirements. It may give the appearance of reducing the team's output, but the team will become increasingly efficient as the coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.
For our team, we lose time, but after a development story is done the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code through HP UFT and, if successful, set up for deployment to production.
This probably isn't the best way to get things done, but it has been a way for us to make sure everything gets automated and tested before heading to production.

How can a change be brought about in the testing process that follows waterfall? [closed]

We are a small company and I am a test coordinator appointed to put a testing process in place for the company.
We don't have a testing process at the moment. Development, deployment, and testing happen almost daily, and communication is handled over Skype or e-mail.
How do I start to put a testing process in place?
We have operations running in 8 different countries and we don't have a dedicated testing team; the business users are the only testers we have.
It is crucial for me to get them all testing when required.
So how do I bring about that change in the way they work?
Any suggestions or help are kindly appreciated.
I think the best approach to this change is to show the value of testing to your managers.
I suppose that without a well-organized test process, bugs are only found eventually. A single crucial issue found by your customer, but not by you, can have a huge impact on the company's business. You can wait for that to happen, or start building the test group now.
It is also a common observation that finding bugs as early as possible saves the organization a lot of money, mostly because fixing an issue close to the time it was introduced requires much less effort.
I would recommend Jira as a tool that lets you organize bug tracking and also supports an agile development process.
I would suggest considering Comindware Tracker - workflow automation software. It executes the processes you create automatically by assigning tasks to the right team member only after the previous step in the workflow is completed. Furthermore, you can create forms visually, set your own workflow rules, and have your data processed automatically. You can configure Comindware Tracker to send e-mail notifications to users when a particular event occurs on a task or document, or to send scheduled e-mail reports. Discussion threads are available within every task. You can share a document with a team and it will be stored within the task; document versioning is supported.
Perhaps the key reason a small company just starting to optimize workflows should consider Comindware Tracker is its ability to change workflows in real time during process execution, without the need to interrupt it. As you are likely to make plenty of changes during your starting phase, this solution is worth attention. This product review might be useful - http://www.brighthubpm.com/software-reviews-tips/127913-comindware-tracker-review/
Disclaimer - I work at Comindware. We use Comindware Tracker to manage workflows within our company. I will be glad to answer any questions about the solution, should any arise.
If you are looking to release frequently then you should consider using automated regression testing.
This would involve having an automated test for every bit of significant functionality in your applications. In addition, when new functionality is being developed, an automated regression test would be written at the same time.
The benefit of the automated regression test approach is that you can get the regression tests running in continuous integration. This allows you to continuously regression test and uncover any regression bugs soon after the code is written.
Manual regression testing is very difficult to sustain. As you add more and more functionality to the applications, manual regression testing takes longer and makes it very difficult to release frequently. It also means the time spent testing will continually increase.
If your organisation decides not to go with test automation, then I would suggest creating a delivery pipeline that includes a manual regression-testing phase. You might want to consider using an agile framework such as Kanban for this (which typically works well with frequent releases).

Test Automation architecture [closed]

My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, and mobile.
What would you experienced folks recommend starting with?
I mean resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Also, don't try to test EVERYTHING. This is what I see fail over and over: most automated test suites die under their own weight. Someone makes the decision that EVERYTHING must be tested, and so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises on what it wants because the tests cost so much to update. In the first case the investment in tests was a complete waste; the second case is even more dangerous, as it implies that the test suite is actually impeding progress rather than assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you ensure all these, you can enhance later as you add new tests
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should be executed after each commit to the master branch.
Identify in which areas of your system test automation brings the biggest value.
Start with integration tests and leave end-to-end tests for later.
Try to keep every automated test very small, checking only one function.
Prefer low-level test interfaces such as an API or CLI over the GUI.
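To illustrate the last point, here is a sketch of the same check at both levels; the URL, payload, and locators are assumptions for illustration:

```python
# The same behaviour checked at two levels. The API-level test is typically
# faster and less brittle than the GUI one.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE = "https://app.example.com"

def test_login_via_api():
    # Low-level interface: one HTTP call, cheap to run on every commit.
    resp = requests.post(f"{BASE}/api/login",
                         json={"user": "qa", "password": "secret"})
    assert resp.status_code == 200
    assert "token" in resp.json()

def test_login_via_gui():
    # Same check through the GUI: slower, and it breaks whenever the page changes.
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE}/login")
        driver.find_element(By.ID, "user").send_keys("qa")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()
```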
I'm curious which path you chose. We run automated UI tests for mobile, desktop applications, and the web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
created a mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an api for each system
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams.
This builds our framework to now write tests that go across the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as that user and check their balance
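As a rough sketch, one of those per-system APIs could look like the following. The class names, URLs, and locators are purely illustrative; the Desktop and Mobile counterparts would wrap WinAppDriver and Appium in the same way:

```python
# A layer of business functions that wraps the raw driver so tests read at
# the workflow level and can be shared across teams.
from selenium import webdriver
from selenium.webdriver.common.by import By

class WebApi:
    """Business functions for the web application."""

    def __init__(self, base_url="https://web.example.com"):
        self.driver = webdriver.Chrome()
        self.base_url = base_url

    def login(self, username, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login").click()

    def get_balance(self):
        return self.driver.find_element(By.ID, "balance").text

    def quit(self):
        self.driver.quit()

# A cross-system test then only talks to business functions:
#   MobileApi().create_user("jane")     # hypothetical Appium-backed API
#   DesktopApi().enter_case("jane")     # hypothetical WinAppDriver-backed API
#   web = WebApi(); web.login("jane", "pw"); assert web.get_balance() == "0.00"
```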
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate tests early that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex tests. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and run tests in bulk.

What is the difference between integration testing and functional testing? [closed]

Are functional testing and integration testing the same?
You begin your testing with unit testing; then, after completing unit testing, you go for integration testing, where you test the system as a whole. Is functional testing the same as integration testing? You are still taking the system as a whole and testing it for conformance to its functionality.
Integration testing is when you test more than one component and how they function together. For instance, how another system interacts with your system, or how the database interacts with your data abstraction layer. Usually this requires a fully installed system, although in its purest forms it does not.
Functional testing is when you test the system against the functional requirements of the product. Product/Project management usually writes these up and QA formalizes the process of what a user should see and experience, and what the end result of those processes should be. Depending on the product, this can be automated or not.
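A small sketch of the distinction; the names are invented for illustration:

```python
# Integration test: two components exercised together -- here a small data
# abstraction layer and a real (SQLite) database.
import sqlite3

class UserRepository:
    """Data abstraction layer under test."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_repository_talks_to_database():
    # Integration: the repository and the database working together.
    repo = UserRepository(sqlite3.connect(":memory:"))
    repo.add("alice")
    assert repo.count() == 1

# A functional test, by contrast, starts from a written requirement
# ("an administrator can register a new user and see them in the user list")
# and exercises the installed system through its public interface -- UI or
# API -- checking the end result the requirement describes, not how the
# pieces are wired together internally.
```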
Functional Testing:
Yes, we are testing the product or software as a whole, functionally: whether it is functionally working properly or not (testing buttons, links, etc.).
For example: a login page.
You provide the username and password, and you test whether it takes you to the home page or not.
Integration Testing:
Yes, you test only the integrated software, but you test where the data flow happens and whether any changes happen in the database.
For example: sending an e-mail.
You send a mail to someone; there is a data flow and also a change in the database (the value in the sent table increases by 1).
Remember - clicking links and images is not integration testing. I hope you understand why: there is no change in the database just from clicking a link.
Hope this helped you.
Functional Testing: It is a process of testing where each and every component of the module is tested. E.g. if a web page contains a text field, radio buttons, buttons, and a drop-down, all of these components need to be checked.
Integration Testing: The process where the data flow between 2 modules is checked.
This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define these from their own point of view. It's very similar to the debate over Pluto. (If it were closer to the Sun, would it be a planet?)
Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else. (Well, as little else as possible.) That means mocks, fakes, and fixtures.
At the other end of the spectrum there is what many people call system integration testing. That's testing as much as possible, but still looking for bugs in your own CUT.
But what about the vast expanse between?
For example, what if you test just a little bit more than the CUT? What if you include a Fibonacci function, instead of using a fixture which you had injected? I would call that functional testing, but the world disagrees with me.
What if you include time() or rand()? Or what if you call http://google.com? I would call that system testing, but again, I am alone.
Why does this matter? Because system-tests are unreliable. They are necessary, but they will sometimes fail for reasons beyond your control. On the other hand, functional tests should always pass, not fail randomly; if they are fast, they might as well be used from the start in order to use Test-Driven Development without writing too many tests for your internal implementation. In other words, I think that unit-tests can be more trouble than they are worth, and I have good company.
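To illustrate, here is a hypothetical fib_report as the CUT, which takes the Fibonacci function as a parameter so a test can inject a stand-in:

```python
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_report(n, fib=fibonacci):
    """The CUT: formats a result produced by the injected fib function."""
    return f"fib({n}) = {fib(n)}"

def test_fib_report_unit():
    # Unit test: only the CUT runs; Fibonacci is replaced by a canned fake.
    fake_fib = lambda n: 99
    assert fib_report(7, fib=fake_fib) == "fib(7) = 99"

def test_fib_report_functional():
    # "Functional" in the sense above: real code deeper down the call stack.
    assert fib_report(7) == "fib(7) = 13"

# A system test (again in the sense above) would pull in unrepeatable things --
# time(), rand(), the network -- and so may fail for reasons outside the CUT.
```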
I put tests on 3 axes, with all their zeroes at unit-testing:
Functional-testing: using real code deeper and deeper down your call-stack.
Integration-testing: higher and higher up your call-stack; in other words, testing your CUT by running the code which would use it.
System-testing: more and more unrepeatable operations (O/S scheduler, clock, network, etc.)
A test can easily be all 3, to varying degrees.
I would say that the two are tightly linked to each other and very tough to distinguish.
In my view, integration testing is a subset of functional testing.
Functional testing is based on the initial requirements you receive. You test that the application's behaviour is as expected given the requirements.
When it comes to integration testing, it is about the interaction between modules: if module A sends an input, is module B able to process it?
Integration testing - Integration testing is nothing but the testing of different modules; you have to test the relationships between modules. For example, you open Facebook and see the login page; after entering your login ID and password you see the Facebook home page. The login page is one module and the home page is another module. You have to check only the relationship between them: when you log in, the home page must open, not a message box or anything else. There are two main approaches to integration testing: top-down and bottom-up.
Functional testing - In functional testing you only have to think about input and output. In this case you have to think like an actual user: testing what input you gave and what output you got is functional testing. You only have to observe the output. In functional testing you don't need to test the code of the application or software.
In functional testing the tester focuses only on the functionality and sub-functionality of the application: whether the functionality of the app is working properly or not.
In integration testing the tester has to check the dependencies between modules or sub-modules. For example, records from one module should be fetched and displayed correctly in another module.
Integration Test:-
When unit testing is done and issues in the related components are resolved, all the required components need to be integrated into one system so that it can perform an operation.
After combining the components of the system, testing whether the system works properly or not is called integration testing.
Functional Testing:-
Testing is mainly divided into two categories:
1. Functional Testing
2. Non-Functional Testing
Functional Testing:-
Testing whether the software works according to the requirements of the user or not.
Non-Functional Testing:-
Testing whether the software satisfies quality criteria such as stress tests, security tests, etc.
Usually the customer provides requirements only for functional tests; for non-functional tests the requirements are not spelled out, but the application necessarily has to perform those activities.
Integration testing
It can be seen as how the different modules of the system work together.
It mostly refers to the integrated functionality of the different modules, rather than the different components of the system.
For any system or software product to work efficiently, every component has to be in sync with the others.
Most of the time, the tool we use for integration testing will be the same one we used for unit testing.
It is used in complex situations, when unit testing proves insufficient to test the system.
Functional Testing
It can be defined as testing the individual functionality of modules.
It refers to testing the software product at an individual level, to check its functionality.
Test cases are developed to check the software for expected and unexpected results.
This type of testing is carried out more from a user perspective; that is to say, it considers the expectation of the user for a given type of input.
It is also referred to as black-box testing or closed-box testing.
Checking the functionality of the application is generally known as functional testing, whereas integration testing checks the flow of data from one module to another.
Let's take the example of a money transfer app. Suppose we have a page in which we enter all the credentials, we press the transfer button, and we check that we get a success message - that is functional testing. But in the same example, if we verify that the amount was actually transferred, that is integration testing.
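A sketch of the two checks; the tiny in-memory "app" below stands in for the real system, where the page object would drive the UI and the database helper would query the real database:

```python
class AccountsDb:
    def __init__(self):
        self.balances = {"A-001": 200.0, "B-123": 10.0}
    def balance(self, acct):
        return self.balances[acct]

class TransferPage:
    """Stand-in for the money-transfer page of the app."""
    def __init__(self, db):
        self.db = db
        self.message = ""
    def transfer(self, from_account, to_account, amount):
        self.db.balances[from_account] -= amount
        self.db.balances[to_account] += amount
        self.message = "Transfer complete"

def test_transfer_shows_success_message():
    # Functional: perform the action and check only the visible outcome.
    page = TransferPage(AccountsDb())
    page.transfer("A-001", "B-123", 50)
    assert page.message == "Transfer complete"

def test_transfer_moves_the_money():
    # Integration: the same action, verified against the data flow --
    # the balance on the receiving account actually changed.
    db = AccountsDb()
    page = TransferPage(db)
    before = db.balance("B-123")
    page.transfer("A-001", "B-123", 50)
    assert db.balance("B-123") == before + 50
```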
Authors diverge a lot on this. I don't believe there is "the" correct interpretation for this. It really depends.
For example: most Rails developers consider unit tests as model tests, functional tests as controller tests and integration tests as those using something like Capybara to explore the application from a final user's perspective - that is, navigating through the page's generated HTML, using the DOM to check for expectations.
There are also acceptance tests, which in turn are a "live" documentation of the system (usually they use Gherkin to make it possible to write them in natural language), describing all of the application's features through multiple scenarios, which are in turn automated by a developer. Those, IMHO, could be considered both functional tests and integration tests.
Once you understand the key concept behind each of these, you get to be more flexible regarding right or wrong. So, again IMHO, a functional test could also be considered an integration test. An integration test, depending on the kind of integration it is exercising, may not be considered a functional test - but you generally have some requirements in mind when you are writing an integration test, so most of the time it can also be considered a functional test.

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment; however, there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking about making this list the first column in a spreadsheet which also says when the feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information on what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. It would also give me something to start automating the testing process with (currently we don't have much test automation apart from unit tests).
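For example, features 1.1 and 1.2 above might eventually be automated with something like this; the URL, locators, and error indicator are guesses to be replaced with the real ones:

```python
# A starting point for automating the registration test case with Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

WEAK_PASSWORDS = ["a", "bob", "password", "password123"]

def test_registration_form():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/register")          # Load registration form

        # Feature 1.1: country dropdown is populated
        countries = Select(driver.find_element(By.ID, "country"))
        assert len(countries.options) > 1, "Country dropdown should be populated"

        # Feature 1.2: weak passwords are rejected, the strong one is accepted
        for pwd in WEAK_PASSWORDS:
            field = driver.find_element(By.ID, "password")
            field.clear()
            field.send_keys(pwd)
            driver.find_element(By.ID, "ok").click()
            assert driver.find_elements(By.CSS_SELECTOR, ".password-error"), \
                f"Expected {pwd!r} to be rejected"

        field = driver.find_element(By.ID, "password")
        field.clear()
        field.send_keys("password123#")
        driver.find_element(By.ID, "ok").click()

        # Feature 2: thank-you note is shown
        assert "thank you" in driver.page_source.lower()
    finally:
        driver.quit()
```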
I'm looking for some examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to get the client to agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report this to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features, and customer tickets in other ways (JIRA); this would basically be the testing documentation. This is the lifecycle I had in mind:
PM makes list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for your more "standard" websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, the expected results of the test, and edge/boundary cases, you should have a pretty solid requirements spec and testing spec in one.
For the steps, don't forget to include an evaluation step, where you, the testers, the developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec, both from a requirements perspective and a testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion web-site has a very good page on technique for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation. Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even if they must be checked by a human, for example cross browser testing to make sure things look right, the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard to automate bugs and pass that on to QA rather than taking the time consuming, and often flawed, task of writing down instructions. Save them as part of the project. Give them good descriptions as to the intent of the test. Link them to a ticket. Should the GUI change so the test doesn't work any more, and it will happen, you can rewrite the test to cover its intention.
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI, HTML, or formatting) from functionality (what it does) and automate testing the functionality. Have a function which generates the country list, and test that thoroughly. Then a function which uses it to generate HTML or AJAX or whatever, and you only have to check that the output looks about right, because the function doing the actual work is well tested. User login. Password checks. Emails. These can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
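A tiny sketch of that separation; the function names and data are illustrative:

```python
# The logic that produces the country list is a plain function, tested
# directly; the rendering layer stays thin.
COUNTRIES = {"de": "Germany", "fr": "France", "es": "Spain"}

def country_list(locale="en"):
    """Return (code, localized name) pairs, sorted by the localized name."""
    # Localization is stubbed out here; a real implementation would look up
    # translations for the given locale.
    return sorted(COUNTRIES.items(), key=lambda item: item[1])

def render_country_options(locale="en"):
    # Thin presentation layer -- barely worth GUI-testing on its own.
    return "".join(f'<option value="{code}">{name}</option>'
                   for code, name in country_list(locale))

def test_country_list_is_sorted():
    names = [name for _, name in country_list()]
    assert names == sorted(names)

def test_country_list_is_complete():
    assert len(country_list()) == len(COUNTRIES)
```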