How to verify lots of events in a reasonable way - testing

I am new to software testing. Currently I need to test a medium-sized web application. We have just refactored our codebase and added a lot of event logging logic to the existing code. The event logging code writes to both the Windows Event Log and a SQL database table.
There are about 200 distinct events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.

I would be tempted to implement unit tests for each of the events to make sure that, when an event occurs, the correct information is passed into your event logging logic.
This would mean that you only have to trigger one event on the deployed site and verify the data is written to the database and event log; you can then have an acceptable level of confidence that the remaining events will be recorded correctly.
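As an illustration only, here is a minimal sketch of such a unit test in Java with JUnit and Mockito; the EventLogger interface and OrderService class are hypothetical stand-ins for your own logging abstraction and calling code:

import static org.mockito.Mockito.*;

import org.junit.Test;

public class OrderServiceEventLoggingTest {

    // Hypothetical logging abstraction; substitute your own interface.
    interface EventLogger {
        void log(String eventId, String message);
    }

    // Hypothetical class under test that raises an event.
    static class OrderService {
        private final EventLogger logger;
        OrderService(EventLogger logger) { this.logger = logger; }
        void placeOrder(String orderId) {
            // ... business logic ...
            logger.log("ORDER_PLACED", "Order " + orderId + " was placed");
        }
    }

    @Test
    public void placingAnOrderLogsTheCorrectEvent() {
        EventLogger logger = mock(EventLogger.class);

        new OrderService(logger).placeOrder("42");

        // Verify the correct information reached the logging logic.
        verify(logger).log("ORDER_PLACED", "Order 42 was placed");
    }
}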
If unit testing isn't an option then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of one sink failing while the other succeeds; that way you have 200 checks rather than 400.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps

I would have added tests before you did the refactoring; you don't know where you may have broken it already :).
You say that it logs to the Event Viewer and the database. I hope you have exposed the logging feature as an interface, so that you can:
Extend it to log to some other destination if needed
Make mocking a lot easier (a minimal sketch of such an interface follows below)
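For illustration only (all names here are made up), the abstraction might look like this in Java, with a composite implementation fanning a single call out to both sinks:

public interface EventLogger {
    void log(String eventId, String message);
}

// Writes to the Windows Event Log (illustrative stub).
class WindowsEventLogLogger implements EventLogger {
    public void log(String eventId, String message) {
        // call into the Event Log API here
    }
}

// Writes a row to the SQL event table (illustrative stub).
class DatabaseEventLogger implements EventLogger {
    public void log(String eventId, String message) {
        // INSERT INTO the event table here
    }
}

// Fans one logging call out to every configured sink, and is trivial to
// replace with a mock in unit tests.
class CompositeEventLogger implements EventLogger {
    private final java.util.List<EventLogger> sinks;
    CompositeEventLogger(java.util.List<EventLogger> sinks) { this.sinks = sinks; }
    public void log(String eventId, String message) {
        for (EventLogger sink : sinks) {
            sink.log(eventId, message);
        }
    }
}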
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
search for all places where the logging interface is used and note the calling classes, then
start with the critical paths first (that way you at least cover the critical ones);
or you could start from the end, i.e. note down all the combinations of logs you are getting, perhaps pointed at fixed test data so that you know that if the input is the same, the output should be the same too. Then, every time you regression test your new binaries against this data, you should get a similar number and level of logs.

This shouldn't be too difficult.
Pick a free automated web test tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events. Examine the SQL database after each test to see which events fired.
If the event stream is correct for the test, add a step to the test to verify that exactly that event stream was created in the database.
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
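A rough sketch of the idea using Selenium WebDriver and JDBC in Java (the URL, element locators, connection string, and table/column names are assumptions to be replaced with your own; the same approach works with Watir or WatiN):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RegistrationEventTest {

    @Test
    public void registeringFiresExactlyOneRegistrationEvent() throws Exception {
        // Drive the web application to trigger the action under test.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost/yourapp/register");             // assumed URL
            driver.findElement(By.id("username")).sendKeys("testuser");  // assumed locator
            driver.findElement(By.id("submit")).click();                 // assumed locator
        } finally {
            driver.quit();
        }

        // Examine the event table and assert that exactly the expected event was written.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=AppLog", "user", "pass");   // assumed connection
             PreparedStatement ps = con.prepareStatement(
                 "SELECT COUNT(*) FROM EventLog WHERE EventType = ?")) {              // assumed schema
            ps.setString(1, "USER_REGISTERED");
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }
}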
The efficient and effective part of this approach is that it allows you to create only as many tests as you need to verify the app. You also do not need to recreate a unit-test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent and repeatable results, and re-testing will take nearly as long as the initial testing whenever defects are uncovered and need to be fixed.
Note: while this will be the most effective and efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris

Related

Workflow from development to testing and merge

I am trying to formalize the development workflow and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes, so it would be great to have feedback on it. P.S.: We are working on an AWS Serverless application.
Create an issue link type in JIRA: 'is tested by'. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch as the developer's work; the E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could add the capability in the CI system to automatically move the Test case to Success when the test passes in CI (this should be possible using the JIRA REST API; a sketch follows below). This is completely optional and we will most probably do it manually.
When all the test cases related to a user story have moved to Success, the user story can then be moved to Done.
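A minimal sketch of what that CI step could look like in Java, assuming the standard Jira REST endpoint POST /rest/api/2/issue/{key}/transitions; the host, credentials, and transition id are placeholders that must come from your own Jira instance and workflow:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class JiraTransitioner {

    // Moves a Jira issue through a workflow transition, e.g. to "Success".
    public static void transition(String issueKey, String transitionId) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("ci-user:api-token".getBytes());          // placeholder credentials
        String body = "{\"transition\": {\"id\": \"" + transitionId + "\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://yourcompany.atlassian.net/rest/api/2/issue/"
                        + issueKey + "/transitions"))                     // placeholder host
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 204) {                               // Jira answers 204 on success
            throw new IllegalStateException("Transition failed: " + response.body());
        }
    }

    public static void main(String[] args) throws Exception {
        transition("PROJ-123", "31");  // the transition id must be looked up from your workflow
    }
}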
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1, adding the test cases. Working in the same branch keeps the QA and the developer in sync, and ensures that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into develop.
The feature branch will be reviewed when the developer creates the pull request. This ensures that the review is not held up until the test cases have been developed and passed, which should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states; you may have other outcomes such as Cancelled or Duplicate. You can use the Resolution field for the outcomes. You could also create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New for the writing of the test case, and a status after that such as Ready for Testing. This would allow you to see more specifically where the work is in the flow, and also to capture the amount of time spent writing tests, how long test cases wait, and how much time is spent actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed
AIO Tests for Jira, unlike some other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the test cases associated with a story and their last execution status, giving everyone from Product to the Developer a glimpse of the story's testing progress.
You can also create testing tasks and get a glimpse of the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

Automating Sequence of Manual Steps

I have a sequence of steps that a user performs, e.g. logging on to a remote UNIX shell, creating files/directories, changing permissions, running remote shell scripts and commands, deleting files, moving files,
running DB queries and, based on the query results, performing certain tasks such as exporting the results to a file or running further shell commands/scripts or DB insert statements, etc.
By performing these steps the user accomplishes various processing and validation tasks.
What is the best way to automate the above scenario? Should we go for a workflow tool like Activiti, or is there a better framework/way to achieve the requirements?
My requirement is to work with Open-source, and possibly Java based.
I am completely new to this so any help pointers would be appreciated.
The scenario you describe is certainly possible with a workflow tool like Activiti. Apache Camel or Spring Integration would be another possibility (as all the steps you mention are automatic system tasks).
A workflow framework would be a good option if you need one of these:
you want to store history data for audit purposes: who did what, when, and how long it took
you want to visually model your steps, perhaps to discuss them with business people
there is a need for human interaction between some of the steps
Your description reminds me of a software/account provisioning process.
There are a large number of provisioning tools on the market, both open source and otherwise (Dell Crowbar is one option).
However, a couple of the comments you made in your response to Joram indicate that a more general-purpose tool such as Activiti may be an option:
"Swivel Chair" tasks - User tasks that may one day be automated
Visual model of process state
Most provisioning tools don't allow for generic user tasks and don't provide a (good) visual model of the process state.
However, they generally include remote script execution, which would need to be cobbled together as a service task if you use a BPM tool.
I would certainly expand my research to include provisioning tools as they sound like a better fit; however, if you can't find anything that works for you, a BPM platform provides a generic framework to build what you need.
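To give a sense of what such a service task might look like on a BPM platform like Activiti, here is a rough sketch of a JavaDelegate that runs a remote command over ssh; the process variable names, host, and command are made up, and error handling is minimal:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// A service task that runs a shell command on a remote host over ssh
// and stores its output in a process variable for later steps to use.
public class RemoteScriptTask implements JavaDelegate {

    public void execute(DelegateExecution execution) {
        String host = (String) execution.getVariable("host");        // e.g. "deploy@unixbox" (assumed)
        String command = (String) execution.getVariable("command");  // e.g. "sh /opt/app/cleanup.sh" (assumed)
        try {
            Process process = new ProcessBuilder("ssh", host, command)
                    .redirectErrorStream(true)
                    .start();

            StringBuilder output = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                     new InputStreamReader(process.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    output.append(line).append('\n');
                }
            }
            process.waitFor();

            // Later tasks in the process can branch on or export this output.
            execution.setVariable("scriptOutput", output.toString());
        } catch (Exception e) {
            throw new RuntimeException("Remote script execution failed", e);
        }
    }
}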

How to do load testing in xpages

I am facing an issue with my XPages application. It works perfectly fine with a small number of concurrent users, but when many concurrent users (say more than 1000) try to access the application, it becomes very slow. I have looked through the code and removed some redundant code,
but I am not sure this is the issue. Is there any way in Lotus Notes/Domino to simulate load testing with 1000 users?
Please help me if there is any workaround.
Agree with Oliver about using JMeter.
But then what you really want is to find out where you have "expensive" code. For an agent you can just "profile" it; however, that is a little less straightforward for an XPage. You can try the XPages Toolbox from OpenNTF.org. I have not tried it on Domino 9.0.x, but I would think you could use it.
Another simple (and quick) way to get an idea is to print some timing information to the server console when you load the pages in your application. You can use a phase listener to add this information (a rough sketch follows), or put it in another more specific location; it really depends on the way your application is structured. But this way you can get a very quick idea of where the bottlenecks are before you dive into something like the toolbox :-)
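Something along these lines, using the standard JSF API that XPages is built on (the class name is made up; register it as a phase listener in the application's faces-config.xml):

import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;

// Prints how long each JSF phase takes, to get a quick idea of where time is spent.
public class TimingPhaseListener implements PhaseListener {

    // Per-request start time; a ThreadLocal keeps concurrent requests from clobbering each other.
    private final ThreadLocal<Long> start = new ThreadLocal<Long>();

    public void beforePhase(PhaseEvent event) {
        start.set(System.currentTimeMillis());
    }

    public void afterPhase(PhaseEvent event) {
        Long begin = start.get();
        if (begin != null) {
            System.out.println(event.getPhaseId() + " took "
                    + (System.currentTimeMillis() - begin) + " ms");
        }
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;  // listen to every phase
    }
}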
/John
We used JMeter to get an idea of what will happen if X users access our app in Y threads, etc. http://jmeter.apache.org/

Test data management

I am new to automation testing and have started working with Selenium WebDriver and the NUnit framework.
I have some queries related to test data management, and am looking for the best approach.
I have to design some test cases where a user registers for an event, but can only register once. If I want to run the test multiple times or run the test on multiple browsers in parallel, what would be the best approach?
I also need to search for an event and perform some actions on it; these events may no longer be available if I run the test case again after a few days.
You can clear the logical flag that marks the users as registered and then re-use them. Just avoid re-using a user across more than one browser.
If you are using automation and don't need to explicitly test the negative condition of failing to re-register, then build the registration clearing into the script.
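A hedged sketch of what that set-up step could look like, shown here in Java with JDBC (your stack is NUnit/C#, so translate accordingly; the connection string, table, column, and user names are all assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TestDataReset {

    // Clears the "registered" flag for a dedicated test user so the
    // registration test can run again; call this at the start of each test.
    public static void clearRegistration(String username) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=Events", "user", "pass");    // assumed connection
             PreparedStatement ps = con.prepareStatement(
                 "UPDATE EventRegistration SET Registered = 0 WHERE Username = ?")) {  // assumed schema
            ps.setString(1, username);
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) throws Exception {
        // One dedicated user per browser keeps parallel runs from colliding.
        clearRegistration("testuser_chrome");
        clearRegistration("testuser_firefox");
    }
}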

How to do concurrent modification testing for grails application

I'd like to run tests that simulate users modifying certain data at the same time for a Grails application.
Are there any plug-ins / tools / mechanisms I can use to do this efficiently? They don't have to be grails specific. It should be possible to fire multiple actions in parallel.
I'd prefer to run the tests on functional level (so far I'm using Selenium for other tests) to see the results from the user perspective. Of course this can be done in addition to integration testing if you'd recommend to run concurrent modification tests on integration level as well.
I have used Geb (http://grails.org/plugin/geb/) for this recently. It is a layer on top of WebDriver and Selenium. It's very easy to write a Grails script that acts as a user in your app and then just run several instances on different consoles. Geb uses a jQuery-style syntax for locating things in the DOM, which is very cool:
import geb.Browser
import geb.Configuration

includeTargets << grailsScript("_GrailsInit")

target(main: "Do stuff as fast as possible") {
    Configuration cfg = new Configuration(baseUrl: "http://localhost:8080/your_app/")
    Browser.drive(cfg) {
        go "user/login"
        $("#login form").with {
            email = "someone@somewhere.com"
            password = "secret"
            _action_Login().click()
        }
        ...
    }
}

setDefaultTarget(main)
Just put your script in scripts/YourScript.groovy and then you can do "grails YourScript" to run it. I tracked down some concurrency issues by just running several of these at full speed. You do need to build a war and deploy it properly, as Grails in dev mode is very slow and runs out of PermGen space quite quickly.
Just an idea: it seems difficult to make the clients start at exactly the same time, but can they wait for each other just before modifying the data?
For example, each client logs its progress ("Client x accessed DATA", "Client x editing DATA") to a file, and also watches that file to see the other clients' progress. A client is then not allowed to complete editing a piece of DATA until another client has come in to edit that same DATA.
I've found Grinder to be an excellent tool for heavy load testing. Running multiple instances performing the same tests at one time can often uncover concurrency issues in your app that you wouldn't find with normal tests.
If you want to do this within Unit Tests or in-code Integration Tests, you could always spin up multiple threads in code and have them perform the task you're trying to test.
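For example, a minimal sketch in Java: a CyclicBarrier releases all worker threads at once so the modifications really do overlap (doUpdate() is a placeholder for the operation you want to exercise, e.g. a controller call or HTTP request that edits the same record):

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentUpdateTest {

    // Placeholder for the operation under test, e.g. an HTTP call or service
    // method that modifies the same record.
    static void doUpdate(int clientId) {
        System.out.println("client " + clientId + " updating the record");
    }

    public static void main(String[] args) throws Exception {
        int clients = 10;
        CyclicBarrier startLine = new CyclicBarrier(clients);
        ExecutorService pool = Executors.newFixedThreadPool(clients);

        for (int i = 0; i < clients; i++) {
            final int clientId = i;
            pool.submit(() -> {
                startLine.await();   // every thread parks here, then all fire together
                doUpdate(clientId);
                return null;
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}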
Are you primarily interested in load testing multiple active users, as opposed to users who merely have an HttpSession? Solid load testing is predicated on really good functional testing, however. How are your functional tests organized and executed today? Grails has a plug-in* for that, too, and it appears to be at the top of the charts at the plug-in portal.
Are you attempting to test out how the optimistic locking mechanism performs under load?
If the former use case is the one that matters more, it sounds like you may be looking for JUnitPerf (see the project's download page).
*functional-test <1.2.7> -- Functional Testing
WebTest is built on Ant, which provides the parallel task. You might be able to use this in conjunction with the WebTest plugin to run some actions in parallel. I've never tried it, though.
Have a look at MultithreadedTC. It looks like it could be used to exercise certain interleaving cases where multiple threads are executing your code in ways you consider potentially risky.
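From its documentation, a MultithreadedTC test looks roughly like the sketch below; treat the details as an unverified sketch, and the Counter class is just a made-up object under test. Each thread* method runs in its own thread, and waitForTick() forces a particular interleaving:

import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

public class CounterInterleavingTest extends MultithreadedTestCase {

    // Made-up object under test.
    static class Counter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int value() { return value; }
    }

    private Counter counter;

    public void initialize() {
        counter = new Counter();
    }

    public void thread1() {
        counter.increment();
        waitForTick(2);        // block until thread2 has done its increment
        counter.increment();
    }

    public void thread2() {
        waitForTick(1);        // thread1's first increment happens before this tick
        counter.increment();
    }

    public void finish() {
        // All threads are done; check the combined effect of the interleaving.
        assertEquals(3, counter.value());
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runOnce(new CounterInterleavingTest());
    }
}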
I doubt you'll find a convenient way to test specific multithreaded interleaving cases with Selenium because Selenium controls a browser which sends requests to your server. I haven't heard of a way to instrument code for multithreaded interleaving tests when the threads are started as real web requests to a running web server.