How detailed should my manual test case be? [closed]

I'm testing a business app and my boss insists that my test cases are too detailed and bring no value to the company. For UI and functionality testing I was just testing every text box, menu, etc., and making a proper test case in MTM.
How much detail should I include in my test cases?

It's hard to suggest something without seeing what you've got and what's being criticized.
Just a tiny idea on how to make your test cases more general: try to make use of some kind of repository.
This could be, for example, a UserRepository (holding GoodUser and BadUser; GoodUser.Admin, GoodUser.Customer, etc.).
Such strategies are commonly applied in test automation.
This way, instead of having
1. Enter "Login1" into 'login field';
2. Enter "Password1" into 'password field';
3. Press 'Sign in' button...
You'll simply have
1. Sign in as GoodUser.Customer;
And if some other field is added to the login process later, you won't have to edit dozens of test cases.
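A minimal sketch of what such a repository might look like in Java with Selenium; the user names, passwords, and element IDs are made up for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical test-data repository: one place to define reusable users.
final class UserRepository {
    static final class GoodUser {
        static final Credentials ADMIN    = new Credentials("admin",    "Adm1nPass!");
        static final Credentials CUSTOMER = new Credentials("customer", "Cust0merPass!");
    }
    static final class BadUser {
        static final Credentials WRONG_PASSWORD = new Credentials("customer", "wrong");
    }
}

record Credentials(String login, String password) {}

// One shared sign-in action; if a field is added to the login form later,
// only this method changes, not dozens of test cases.
class SignInSteps {
    private final WebDriver driver;
    SignInSteps(WebDriver driver) { this.driver = driver; }

    void signInAs(Credentials user) {
        driver.findElement(By.id("login")).sendKeys(user.login());
        driver.findElement(By.id("password")).sendKeys(user.password());
        driver.findElement(By.id("sign-in")).click();
    }
}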
Good luck!

You have to write test cases in a proper format. If the application is big and you are testing it for the first time, then you have to create test cases for every field and every menu.
If you have knowledge of the application's functionality and have also worked on it, then create test cases for the functionality and include validations.
I hope this is helpful for you.

It is very easy; just follow these points:
1) Understand the app.
2) Understand the app's functionality.
3) Note the important points in the app, or write down its full functionality line by line on paper.
4) Now start writing test cases based on that functionality.
5) You are testing a business app, so it is related to the server side as well.
6) Write some server-side test cases too.
7) Write test cases clearly and step by step, because in testing every point is important to the developers.
I hope this is helpful for you.

If you are a new tester, try to understand the application first, then ask for the requirements; you can write test cases against those requirements.
100% testing of any application is not possible; try to cover all the functionality, and also keep a bug sheet, which helps to improve the application's UI and functionality.
Don't add redundant test cases for menus, text boxes, buttons, etc.
I hope this is helpful for you.

In an application you can write a single test case for an action and record your results.
For example: for the login page you can write a single case, provide different inputs while validating all the fields, and record the results based on your requirements.
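If you automate such a case, a parameterized test expresses the idea directly: one action, many inputs. A minimal sketch in JUnit 5, where LoginPage and the expected messages are hypothetical stand-ins for your app:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// One test case, many inputs: each row exercises the same login action.
class LoginTest {
    @ParameterizedTest
    @CsvSource({
        "validUser, validPass, Welcome",           // happy path
        "validUser, wrongPass, Invalid password",  // bad password
        "'',        validPass, Login required",    // empty login field
    })
    void login(String user, String password, String expectedMessage) {
        assertEquals(expectedMessage, new LoginPage().signIn(user, password));
    }
}

// Hypothetical page object wrapping the login form under test.
class LoginPage {
    String signIn(String user, String password) {
        // drive the real login form here and return the message shown
        return "";
    }
}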


Testing streaming service (like selenium) [closed]

Good afternoon.
I'm testing my company's streaming service, which works like twitch.
The task is as follows:
Log in to an account and simulate viewing the stream (and the chat).
I was thinking of writing the code with Selenium. But as far as I understand, each simulated viewer would then need its own driver instance, and I'm afraid that will take up too much memory.
Now the questions: Is that true? Is there a way to avoid it? What methods would you recommend to solve this problem?
I also had the idea of not rendering the video to save resources. But there is a caveat: the streaming service must not decide I'm a bot. In other words, I have to keep receiving the stream without rendering it, and that won't work with Selenium.
So, is it possible to send login data to the form and "view" the stream programmatically in Java? Which libraries should I use? Can you recommend suitable libraries, with links to the functionality I need?
You can use a cloud-hosted testing service for this; then you won't have to worry about the testing infrastructure. Some of these services let you use Selenium in the test scripts, so creating tests will feel much like local testing, some offer a free tier for running a few tests, and most provide step-by-step guides for creating and configuring a test.
The easiest way to achieve this would be to use Selenium Grid with TestNG.
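A rough sketch of how that could look, assuming a Grid hub at a placeholder URL: TestNG fans out parallel invocations, and each one opens its own RemoteWebDriver session, so the browsers run on the Grid nodes rather than eating memory on your machine:

import java.net.URL;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.Test;

public class StreamViewerTest {
    // TestNG runs this method 10 times across 10 threads; each invocation
    // gets its own browser session on a Grid node.
    @Test(invocationCount = 10, threadPoolSize = 10)
    public void watchStream() throws Exception {
        ChromeOptions options = new ChromeOptions();
        // headless saves some resources, though the service's bot detection may notice
        options.addArguments("--headless=new");
        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("http://grid-hub:4444/wd/hub"), options);  // placeholder hub URL
        try {
            driver.get("https://streams.example.com/some-channel"); // placeholder stream
            Thread.sleep(60_000);  // crude "viewing" simulation
        } finally {
            driver.quit();
        }
    }
}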
If you need to validate the front end, Selenium is the tool; if the checks are not 100% visual, you can simply test using API calls:
Log in via API calls
Perform a GET on the desired page and use an HTML parser to make some validations on the front-end response
Use API calls to check the chat
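A sketch of that flow using Java's built-in HttpClient and the jsoup HTML parser; every endpoint, form field, and CSS selector below is a placeholder, since the real service will differ:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ApiLevelStreamCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new java.net.CookieManager())  // keep the session cookie
                .build();

        // 1. Log in via an API call (placeholder endpoint and form fields)
        HttpRequest login = HttpRequest.newBuilder()
                .uri(URI.create("https://streams.example.com/api/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("user=me&password=secret"))
                .build();
        client.send(login, HttpResponse.BodyHandlers.discarding());

        // 2. GET the stream page and validate the markup with an HTML parser
        HttpRequest page = HttpRequest.newBuilder()
                .uri(URI.create("https://streams.example.com/some-channel"))
                .build();
        String html = client.send(page, HttpResponse.BodyHandlers.ofString()).body();
        Document doc = Jsoup.parse(html);
        if (doc.selectFirst(".video-player") == null)   // placeholder selector
            throw new AssertionError("player element missing");

        // 3. Check the chat through its own API (placeholder endpoint)
        HttpRequest chat = HttpRequest.newBuilder()
                .uri(URI.create("https://streams.example.com/api/chat/some-channel"))
                .build();
        System.out.println(client.send(chat, HttpResponse.BodyHandlers.ofString()).body());
    }
}

Note that the bot-detection caveat from the question still applies: plain HTTP polling looks different from a real player, so whether this passes depends on the service.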

How should we automate system testing? [closed]

We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as Workflows and Validation Rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences on System testing with the SalesForce.com platform and how should we approach this?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in what areas. That may be tricky.
Maybe you should have an overall Master Test Plan specifying only the test strategy, the main tool set, the risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start working on a given piece of functionality or an iteration (I hope you are working in iterations, not waterfall), you prepare a detailed test plan for that set of work, adjusting your tools, estimates, and test coverage based on experience from the previous parts.
This way you can state your general approach and priorities at the beginning, but let yourself adapt as the project progresses.
The question of how much testing you need to put into COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests, almost testing the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, you still need to do a lot of testing, based on extensive risk analysis.
If your application is not concerned with any of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least the happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we built the Recruiting application from the SalesForce handbook, created a suite of tests for it, and described our path step by step on our blog. It may help you get started if you decide to write Selenium code to test your app.
I believe the problem with SalesForce is that you have unit and UI testing, but no service-level testing. The SpecFlow I've seen driving the Selenium UI is brittle and doesn't capture what I'm after when engineering a service-level test solution:
( When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And then I click the 'proceed to checkout' button)
That is not the spirit or intent of SpecFlow. It should read more like:
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
To test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
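To make the contrast concrete, here is roughly what that scenario looks like as a service-level test that instantiates the middle tier directly. It is sketched in Java with made-up types (CheckoutService, Cart, CheckoutResult), but the same shape applies in .NET:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CheckoutServiceTest {
    @Test
    void proceedingWithoutAProductPresentsAMessage() {
        // Given I have not selected a product
        CheckoutService checkout = new CheckoutService();
        // When I select Proceed to Checkout
        CheckoutResult result = checkout.proceedToCheckout(new Cart());
        // Then ensure I am presented with a message
        assertEquals("Your cart is empty", result.message());
    }
}

// Minimal stand-ins for the middle-tier objects; no browser involved.
record CheckoutResult(String message) {}
class Cart { boolean isEmpty() { return true; } }
class CheckoutService {
    CheckoutResult proceedToCheckout(Cart cart) {
        return cart.isEmpty() ? new CheckoutResult("Your cart is empty")
                              : new CheckoutResult("Proceeding to checkout");
    }
}

No clicks or typing: the Given/When/Then maps straight onto object calls, which is what makes running many derivations against the same Background cheap.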
I'm told that you can expose SF through an API at some security risk. I'd love to find more about THAT.

What makes a good test procedure for functional requirements? [closed]

I'm the lead developer on a new project and have the opportunity to work with the system engineers to create our template for testing functional requirements. I was wondering if anyone had input on what makes a good test procedure template or had an example of a great template.
Thanks!
This isn't a very easy one to answer. It depends on a few things:
1) The definition/interpretation of what is a functional test case
2) The role of the support staff in the acceptance tests
3) The longevity of the tests
This is purely opinion based on my own experiences.
(inserts two cents into vending machine)
1) What is a functional test case? - You and the systems engineer need to align on this one. You may find (as I did) that the system engineer will tackle things at a higher (less granular) level than you. For example, assuming that a specific requirement is for the creation of a web service, the engineer would need to know:
Does the interface behave correctly?
Are the input parameters in a test case meant to induce a success or a failure?
On failure, are the appropriate errors/error codes returned? Note that, depending on their time, an engineer may only stick to the major/important failure conditions (or negative responses) that affect the product/service as a whole (for example, a "host not found/timeout" error should be in the interface but does not necessarily need to be tested, while a use-case-related failure such as "client has insufficient funds" is important to the engineer).
Is the transaction status recorded correctly?
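Concretely, such interface-level cases might look like the following, where the endpoint, payloads, and status codes are invented purely to illustrate the success/failure/error-code checks:

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class TransferServiceFunctionalTest {
    private final HttpClient client = HttpClient.newHttpClient();

    private HttpResponse<String> transfer(String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/transfer")) // invented endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString());
    }

    @Test
    void validTransferSucceeds() throws Exception {
        assertEquals(200, transfer("{\"from\":\"A\",\"to\":\"B\",\"amount\":10}").statusCode());
    }

    @Test
    void insufficientFundsReturnsTheAgreedErrorCode() throws Exception {
        // the use-case failure the engineer cares about
        HttpResponse<String> resp = transfer("{\"from\":\"A\",\"to\":\"B\",\"amount\":1000000}");
        assertEquals(422, resp.statusCode()); // invented error contract
    }
}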
Again, you and the systems engineer should be clear on what is a functional test case and what is not. Usually the functional tests are derived directly from the functional spec provided to you. For certain products, retry on timeout falls under non-functional, but you may have an engineer who wants his web service to retry 17 times on a timeout before giving up - and if he specifies this - then you include it.
2) How are these tests carried out and who signs them off? Depending on this, you may need to streamline or flesh out the functional tests.
If you and the systems engineer will lock yourselves in a cosy room for half a day going through each test case, then keep it streamlined: the two of you should be pretty familiar with the requirements, and the engineer will have reviewed the document and provided comments already. On the other hand, you may have the support engineers running the tests with you instead of the engineer (that's how we run it... the systems engineer reviews the test cases, stays for a bit at the beginning, and leaves when he gets bored). Where was I? Right, so in this case your document may have to do a bit more hand-holding, describing the scenario that is being tested. This leads me to the last point in my long-winded chat...
3) Longevity of the document
So often, as is the case on my side, once a set of functional tests is over and done with, they are promptly forgotten. However, these tests validate your system and your product, and the support engineers should be in a position to run them whenever they'd like, to:
resolve issues ("was this sort of case even tested before go-live?")
resolve issues again ("geez did these guys even test this particular scenario?")
validate system/product integrity after a major change
learn about the as-is functionality of a product or service (so many times people forget how the product is supposed to behave, and support staff hate reading requirements specs, especially ones that are out of date, where the current behaviour of the system differs from what was originally specced)
(deep breath)
So now you need to make sure you cover the following:
Test set-up part 1: what are the requirements to run the test? What tools do I need? Network connectivity?
Test set-up part 2: what test data am I going to use? Where is it if I need it, or how do I generate it?
Overview of the functional requirements/tests to at least impart what the expected behaviour is.
Overview of the major system components that will be tested
An idea of the limitations of the tests - certain functional tests may only be simulated or cannot be run against a live end system, etc. - you need to describe the limitation and show the reader how you're going to fake it.
Also, the systems engineer will expect you to have already completed your granular tests like component tests, integration tests, etc etc as well. Depending on how awesome he is, the engineer may ask for the documentation of these component tests and run a few himself.
Hope that helps somewhat - having a template provides consistent presentation and helps you ensure that all the important content is covered - but I think the focus should be on pinning down the purpose and fulfilling that purpose.
Hope I made some cents :)

How do you verify that users' requirements are addressed in the code you're working on? [closed]

Code can be perfect, and also perfectly useless at the same time. Getting requirements right is as important as making sure that requirements are implemented correctly.
How do you verify that users' requirements are addressed in the code you're working on?
You show it to the users as early and as often as possible.
Chances are that what they've asked for isn't actually what they want - and the best way of discovering that is to show them what you've got, even before it's finished.
EDIT: And yes, this is also an approach to answering questions on StackOverflow :)
You write tests that assert that the behavior the user requires exists. And, as was mentioned in another answer, you get feedback from the users early and often.
Even if you talk with the user and get everything right, the user might have gotten it wrong. They won't know until they use the software that they didn't want what they asked for. The surest way is to build some sort of prototype that allows the user to "try it out" before you write the code. You could try something like paper prototyping.
If possible, get your users to write your acceptance tests. This will help them think through what it means for the application to work correctly. Break the development down into small increments that build on each other. Expose these to the customer early (and often), getting them to use it, as others have said, but also have them run their acceptance tests. These should also be developed in tandem with the code under test. Passing the test won't mean that you have completely fulfilled the requirements (the tests themselves may be lacking), but it will give you and the customer some confidence that you are on the right track.
This is just one example of where heavy customer interaction pays off when developing code. The way to get the most assurance that you are developing the right code is having the customer participating in the development effort.
How do you verify that users' requirements are addressed in the code you're working on?
For a question put in this form the answer is "You can't".
The best way is to work with users from the very first days, show them prototypes and incorporate their feedback continuously.
Even so, at the end of the road, there will likely be nothing resembling what was originally discussed and agreed on.
Ask them what they want you to build before you build it.
Write that down and show them the list of requirements you have written down.
Get them to sign off on the functional design.
Build a mock up and confirm that it does what they want it to.
Show them the features as it is being implemented to confirm that they are correct.
Show them the application when it's finished and allow them to go through acceptance testing.
They still won't be happy, but you will have done everything you can.
Any features that are not in the document they signed off on can be considered change requests, for which you can charge them extra. Get them to sign off on everything you show them, to limit your liability.
By using a development method that frequently checks alignment between the implementation and the requirements.
For me, the best way is to involve an "expert customer" who validates and tests the implementation iteratively, as often as possible.
If you don't, you risk ending up with, as you said, a very beautiful piece of software that is perfectly useless.
You can try personas: a cohort of example users that use the system.
Quantify their needs and wants, and make up scenarios of what is important to them and what they need to get done with the software.
Most importantly, make sure that the users' (the personas') goals are met.
Here's a post I wrote that explains it in more detail.
You write unit tests that expect an answer that supports the requirements. If the requirement is to sum a set of numbers, you write:
@Test
public void testSumInvoice()
{
    // create an invoice of 3 lines of $1, $2, $3 respectively
    Invoice myInvoice = new Invoice().addLine(1).addLine(2).addLine(3);
    // the requirement says the total must be $6
    assertEquals(6, myInvoice.getSum());
}
If the unit test fails, either your code is wrong or it was possibly changed due to some other requirement. Now you know that there is a conflict between the two cases that needs to be resolved. It could be as simple as updating the test code, or as complex as going back to the customer with a newly discovered edge case that isn't covered by the requirements.
The beauty of writing unit tests is it forces you to understand what the program should do such that if you have trouble writing the unit test, you should revisit your requirements.
I don't really agree that code can be perfect... but that's outside the real question. You need to find out from the users, prior to any design or coding, what they want - ask them "what does success look like?", "what do you expect when the system is complete?", "how do you expect to use it?" - and video-tape the response, mind-map it, or wireframe it, and then review it with them to ensure you captured the most important aspects. You can then use those items to verify the iterative deliveries. Expect the users to change their minds/needs over time, once they have "it in their hand" (IKIWISI - I Know It When I See It), and record any change requests in the same fashion.
AlbertoPL is right: "Most of the time even the users don't know what they want!"
And if they know, they have a solution in mind and specify aspects of that solution instead of just telling the problem.
And if they tell you a problem, they may have other problems without being aware that these are related by having a common cause or a common solution.
Thus, before you implement mockups and prototypes, go and watch the use of what the customer already has or what the staff is still doing by hand.

Why should I practice Test Driven Development and how should I start? [closed]

Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice?
There are a lot of benefits:
You get immediate feedback on whether your code is working, so you can find bugs faster
By seeing the test go from red to green, you know that you have both a working regression test, and working code
You gain confidence to refactor existing code, which means you can clean up code without worrying what it might break
At the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid
The best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article.
The benefits part has recently been covered; as for where to start... start on a small enterprisey system where there aren't too many unknowns, so the risks are low. If you don't already know a testing framework (like NUnit), start by learning that. Otherwise, start by writing your first test :)
Benefits
You figure out how to compartmentalize your code
You figure out exactly what you want your code to do
You know how it is supposed to act and, down the road, whether refactoring breaks anything
Gets you in the habit of making sure your code always knows what it is supposed to do
Getting Started
Just do it. Write a test case for what you want to do, and then write code that should pass the test. If you pass your test, great, you can move on to writing cases where your code will always fail (2+2 should not equal 5, for example).
Once all of your tests pass, write your actual business logic to do whatever you want to do.
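A minimal red-green illustration of those steps in JUnit, with a made-up Calculator class: the tests are written first and fail until the implementation below them exists:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    @Test
    void twoPlusTwoIsFour() {
        // red first: this fails (won't even compile) until Calculator.add exists
        assertEquals(4, new Calculator().add(2, 2));
    }

    @Test
    void twoPlusTwoIsNotFive() {
        // the deliberate negative case: 2 + 2 should not equal 5
        assertNotEquals(5, new Calculator().add(2, 2));
    }
}

// green: the simplest implementation that makes both tests pass
class Calculator {
    int add(int a, int b) { return a + b; }
}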
If you are starting from scratch make sure you find a good testing suite that is easy to use. I like PHP so PHPUnit or SimpleTest work well. Almost all of the popular languages have some xUnit testing suite available to help build and automate testing.
In my opinion, the single greatest thing is that it clearly lets you see whether your code does what it is supposed to. This may seem obvious, but it is super easy to stray from your original goals, as I have found out in the past :p