How to create test data for automating acceptance tests? [closed] - selenium

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I'm working on a legacy banking web application that uses an Oracle database with many stored procedures.
I have to write an automated acceptance test suite for this application.
Most of the acceptance tests require customer information to be entered into the system, which then applies some business rules and changes the customer's credit rating.
The problem is that the entered information goes into the database and fires a sequence of stored procedures.
How should I go about creating the test data for this application so that my automation suite can run?
At this moment I have a few things in my mind:
Create a separate database sandbox to run my acceptance tests against; but given all the stored procedures, I'm worried whether the database can be replicated faithfully.
Identify the tables involved and mock the DAOs that query them so they return test data instead.
Since this seems to be a common scenario for applications whose acceptance tests need to be automated, I'd like to know what approach is followed in projects with similar constraints.
The web application's tech stack is Spring 3.1, Hibernate, and Java 6.

You absolutely MUST create a separate database sandbox. It's the only way to be sure of the state of your application while you're testing it. Creating this sandbox DB should be part of your build process and should be fully scripted.
Have a look here for a more in-depth guide: http://thedailywtf.com/Articles/Database-Changes-Done-Right.aspx
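A scripted sandbox pairs well with deterministic test-data builders, so each acceptance test states only the fields it cares about and everything else gets sensible defaults. A minimal sketch in plain Java (the `CUSTOMER` columns and field names here are assumptions, not your actual schema):

```java
// Hypothetical Customer shape; adapt the fields to the real schema.
class CustomerBuilder {
    private String name = "Default Customer";
    private double income = 50000;
    private int creditRating = 600;

    CustomerBuilder withName(String name) { this.name = name; return this; }
    CustomerBuilder withIncome(double income) { this.income = income; return this; }
    CustomerBuilder withCreditRating(int rating) { this.creditRating = rating; return this; }

    // In a real suite you would execute this insert via JDBC/Hibernate
    // against the sandbox schema, so the stored procedures fire exactly
    // as they do in production.
    String toInsertSql() {
        return String.format(java.util.Locale.ROOT,
            "INSERT INTO CUSTOMER (NAME, INCOME, CREDIT_RATING) VALUES ('%s', %.2f, %d)",
            name, income, creditRating);
    }
}
```

A test can then build exactly the customer it needs, e.g. `new CustomerBuilder().withIncome(20000).toInsertSql()`, instead of sharing one brittle, hand-maintained data set.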

Related

How to decide which type of testing (manual or automated) is required for a new web application? [closed]

Closed 3 years ago.
How do you decide which type of testing (manual or automated) is required for a project or application?
What parameters should we consider when choosing between manual and automated testing for a brand-new application?
It depends on:
Size of the project - if the project is large and consists of many features, automated testing is suggested.
How many times you want to test a particular feature - if it must be tested regularly, automated testing is best.
Font sizes and images - these cannot be checked by an automation tool, so testing them requires manual testing.
Finding bugs - if the goal is to find a lot of bugs, manual testing is suggested.
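The "how many times you want to test a feature" criterion can be made concrete with a rough break-even calculation: automation pays off once its up-front cost is recovered by the per-run savings. A sketch, where all numbers are illustrative assumptions rather than benchmarks:

```java
public class AutomationBreakEven {
    // Returns the number of runs after which automating a test becomes
    // cheaper than repeating it manually, or -1 if it never pays off.
    // All costs are in the same unit (e.g. minutes of effort);
    // maintenancePerRun covers ongoing script upkeep.
    static int breakEvenRuns(double buildCost, double manualCostPerRun,
                             double maintenancePerRun) {
        double savingPerRun = manualCostPerRun - maintenancePerRun;
        if (savingPerRun <= 0) return -1; // automation never recoups its cost
        return (int) Math.ceil(buildCost / savingPerRun);
    }

    public static void main(String[] args) {
        // Illustrative: 480 min to automate, 30 min per manual run,
        // 5 min of upkeep per automated run.
        System.out.println(breakEvenRuns(480, 30, 5)); // 20 runs
    }
}
```

If a regression suite runs on every build, the break-even point is typically reached within weeks; for a feature tested once or twice, manual testing wins.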
You shouldn't have to choose between automated and manual testing the way you're asking. The way the question is phrased suggests the product is already built and waiting to be tested; in that case you'll need to resort to manual testing first.
Ideally you would want both, with more emphasis on automation. Some of the questions you need to ask are:
Is this a new project or an existing one? For a new project it's easier to plan for automation from the start, and you could begin implementing automated tests immediately. For an existing project you'll need time to set up automation, write scripts, etc., so you'll have to resort to manual testing initially.
Is there an existing team? If yes, what are they doing? You want to continue their process rather than suddenly disrupt it for anyone.
How many resources (money and people) do you have? Do you have manual testers? Are they busy, or do they have bandwidth? How many automation engineers do you have?
What kind of project is it? Who is it for? Do human lives depend on it? Does it need some kind of legal certification of its testing?
There are just too many questions given how your question is currently stated. I hope this answers it in general terms; if you're looking for a specific answer, please consider adding more context.

Custom non-trivial test fixture -- Do we create user stories for it? [closed]

Closed 5 years ago.
A rather complicated library/subsystem has to be integration tested and smoke tested, and for that purpose we need to develop a non-trivial test fixture/runner.
The details are not important, but assume that the test fixture we need will be generating complicated, interacting, state-dependent input test vectors, and will be looking for complex result sequences.
The test fixture itself will require some significant development effort (though less effort than the subsystem itself). The question is:
Should this non-trivial test fixture be included in the project plan as a part of the iterations?
Should a set of user stories be created for this test fixture?
If so, how would the user stories be structured? And who would be the actors here: the test engineer running the tests, the subsystem, or the fixture itself?
If your "non-trivial test fixture/runner" is estimated to take more than a day to implement, it's work that should be tracked properly and should go into your backlog.
If you think it may take a week or longer, then I'd do a prototype first.
The "non-trivial test fixture/runner" probably doesn't bring any business value by itself; I'd assume you're addressing technical debt. Writing user stories for technical tasks/debt always feels wrong to me. Put them in your backlog as technical tasks instead.
You should know your business and your actors.

Best practices for designing a Gherkin-based web test automation framework using Selenium? [closed]

Closed 9 years ago.
What are the best practices for designing a Gherkin-based UI automation framework using Selenium, with respect to:
Browser instances
Step definitions per feature or per page
Exception handling
Logging functionality
Execution by feature or by scenario using MSTest
Integration with a continuous integration tool like Jenkins
Have you invested any time in looking at what's already possible?
Browser instances - doesn't that depend on which browsers you want Selenium to automate? For example, would you want to run the same actions on different browsers to verify the application works on each one?
Feature-wise or page-wise steps - SpecFlow doesn't care; it treats all bindings as global, so it really is a personal choice. The only issue comes when you mix bindings from different classes and expect them to share data, but even then SpecFlow has some pretty neat DI-like instantiation to make that easier.
Exception handling - this isn't relevant during testing. You simply want something that stays out of the way and lets you see the failure when it's expected.
Logging - during testing you don't care; just pick something with a null logger.
Execution of specific tests - see ReSharper or the built-in test runner in VS2012+, or even better NCrunch.
CI integration - since SpecFlow tests are just NUnit or MSTest tests, any CI system should handle them. I'd pick TeamCity, as it's probably the standard for .NET CI.
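On the shared-data point: SpecFlow's context injection boils down to constructor-injecting one per-scenario state object into every binding class that needs it. A framework-free sketch of the pattern (class and key names are illustrative, not SpecFlow's API):

```java
import java.util.HashMap;
import java.util.Map;

// One instance per scenario, shared by every step class that needs it.
class ScenarioContext {
    private final Map<String, Object> data = new HashMap<>();
    void set(String key, Object value) { data.put(key, value); }
    @SuppressWarnings("unchecked")
    <T> T get(String key) { return (T) data.get(key); }
}

// Two binding classes sharing state through the injected context,
// the way SpecFlow/Cucumber DI wires them up for each scenario.
class LoginSteps {
    private final ScenarioContext context;
    LoginSteps(ScenarioContext context) { this.context = context; }
    void givenUserLogsIn(String user) { context.set("currentUser", user); }
}

class AccountSteps {
    private final ScenarioContext context;
    AccountSteps(ScenarioContext context) { this.context = context; }
    String whoIsLoggedIn() { return context.get("currentUser"); }
}
```

Because each scenario gets a fresh context, steps from different binding classes can cooperate without static state leaking between scenarios.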

Making test cases maintainable [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
How do you make test cases maintainable or generic in an agile environment, where requirements change frequently?
This is a question my friend was asked in an interview.
Write tests at higher level of abstraction
Write intent revealing tests rather than tests that mimic the user clicks on UI
Use BDD frameworks like Spock, Cucumber etc.
Re-use: identify reusable features and share them. For example, the login steps can be written once and re-used across other features
Write more tests at service level than from the end-to-end
Use formal techniques to reduce the number of regression tests:
Equivalence Class Partitioning
Combinatorial Testing
Boundary Values
Create a test strategy for the entire team
Move white-box testing to unit and integration tests
Clearly identify what will be automated by testers and what should be automated by developers. For example, most white-box tests can be realized as unit tests. The Testing Quadrants model is what I use heavily.
And most importantly, ditch tools from vendors like Mercury and IBM.
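To illustrate the "intent-revealing tests rather than click-mimicking" point: push the UI mechanics behind a small application driver so the test reads at the business level. A sketch with a hypothetical in-memory driver standing in for Selenium (the interface, the toy rating rule, and all names are assumptions for illustration):

```java
// Business-level facade; in a real suite a Selenium-backed
// implementation would sit behind this same interface.
interface BankingDriver {
    void registerCustomer(String name, double income);
    int creditRatingOf(String name);
}

// Hypothetical in-memory stand-in so the example is self-contained.
class FakeBankingDriver implements BankingDriver {
    private final java.util.Map<String, Integer> ratings = new java.util.HashMap<>();
    public void registerCustomer(String name, double income) {
        // Toy business rule standing in for the real back end.
        ratings.put(name, income >= 40000 ? 700 : 550);
    }
    public int creditRatingOf(String name) { return ratings.get(name); }
}

class CreditRatingTest {
    // Reads as intent ("a high-income customer gets a good rating"),
    // not as a sequence of clicks and element locators.
    static boolean highIncomeCustomerGetsGoodRating(BankingDriver app) {
        app.registerCustomer("Ann", 80000);
        return app.creditRatingOf("Ann") >= 700;
    }
}
```

When the UI changes, only the driver implementation changes; the intent-level tests stay stable, which is most of what "maintainable" means here.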
My short answer: treat your test suite with the same respect you treat the rest of your code base.
Automated tests are code, and important code at that. Pay as much attention to keeping them well factored and clean as you do everything else, and you can't go far wrong.

How to diagram automated testing? [closed]

Closed 6 years ago.
I have a large legacy .NET application that has evolved and grown over the years to include many components and moving parts. I want to develop a strategy for developing automated unit and integration tests for this application and to that end I think a graphical representation would be key.
What I am picturing is some sort of diagram I could use to guide the process of writing up the test cases, help achieve better coverage, and eventually refer back to once a specific test fails. Does anyone have any thoughts on what type of diagram could fulfill this goal? My guess is this would be a variant of the classic functional block diagram, but I have not found examples that specifically relate to the design of an automated testing strategy.
Could this be what you are looking for?
The UTP provides extensions to UML to support the design, visualization, specification, analysis, construction, and documentation of the artifacts involved in testing. It is independent of implementation languages and technologies, and can be applied in a variety of domains of development.
UML Testing Profile: http://utp.omg.org/