"Path bounds exceeded" while doing a Pex unit test - pex

I am trying to generate a Pex unit test for an example program. I created a web page with Default.aspx, opened the Default.aspx.cs file, and added the following code to the code-behind:
public int Add(int n1, int n2)
{
    return n1 + n2;
}

public int Subtract(int n1, int n2)
{
    return n1 - n2;
}
Then I right-clicked inside the Default.aspx.cs file and clicked "Run Pex". In the Pex exploration results I get the message "Path bounds exceeded".
I don't understand what this means, or what I should do to get my unit test to pass.

You need to create a Test project first, then mark the methods with [PexMethod] etc.
Please follow this guide from Microsoft on testing ASP.NET with Pex and Moles.
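For illustration, here is a minimal sketch of what such a parameterized test might look like once the test project exists. The names here are assumptions, not code from the question: _Default is the conventional code-behind class for Default.aspx, and the test class name is a placeholder.

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(_Default))]
public partial class DefaultPageTest
{
    // Pex explores this parameterized test and generates concrete
    // (n1, n2) inputs that exercise the code under test.
    [PexMethod]
    public void AddReturnsSum(int n1, int n2)
    {
        var page = new _Default();   // assumed code-behind class name
        int result = page.Add(n1, n2);
        Assert.AreEqual(n1 + n2, result);
    }
}

"Path bounds exceeded" means Pex hit its configured exploration limits before covering all paths, which is easy to do when it explores ASP.NET page life-cycle code rather than an isolated test project; that is why the Pex and Moles guide is the recommended route.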

Related

GoogleTest: is there a generic way to add a function call prior to each test case?

My scenario: I have an existing unit test framework with ~3000 individual test cases, built from the TEST, TEST_F, and TEST_P macros.
Internally the tested modules use a logger library, and my goal is now to create an individual log file for each test case. To do so I would like to call a function as a SetUp for each test case.
Is there a way to register such a function with the framework and get it called automatically?
The obvious solution for me would be: do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
I do like the idea of registering a global setup with the framework via AddGlobalTestEnvironment(), but as I understand it this is handled only once per executable.
By the way: I also have acceptance tests implemented in Robot Framework, and guess what? I want to repeat the task there...
Thanks for any inspiration!
Christoph
You mentioned:
The obvious solution for me would be: do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
If the reason you think you would need to touch every single test case is to set the filename differently, you can combine a SetUp() function with the current_test_info provided by GTest to get each test's name, and then use that to create a separate file per test.
Here is an example:
// Test fixture: SetUp() captures the current test's name
class MyTestFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    test_name_ = std::string(
        ::testing::UnitTest::GetInstance()->current_test_info()->name());
    std::cout << "test_name_: " << test_name_ << std::endl;
    // CreateYourLogFileUsingTestName(test_name_);
  }

  std::string test_name_;
};

TEST_F(MyTestFixture, Test1) {
  EXPECT_EQ(this->test_name_, std::string("Test1"));
}

TEST_F(MyTestFixture, Test2) {
  EXPECT_EQ(this->test_name_, std::string("Test2"));
}
Live example here: https://godbolt.org/z/YjzEG3G77
The solution I found in the gtest docs:
// Listener that is notified at the start and end of every test
class TraceHandler : public testing::EmptyTestEventListener {
  // Called before a test starts.
  void OnTestStart(const testing::TestInfo& test_info) override {
    // set the log filename here
  }

  // Called after a test ends.
  void OnTestEnd(const testing::TestInfo& test_info) override {
    // close the log here
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end. googletest takes ownership.
  listeners.Append(new TraceHandler);
  return RUN_ALL_TESTS();
}
This way it automatically applies to all tests linked against this main() function.
Maybe I should mention: my logger is a collection of static functions that send UDP packets to a receiver which takes care of the actual logging. I can control the filename through one of those functions; that's why I don't need to insert code into every single TEST, TEST_F, or TEST_P.

How to test void methods in JUnit in order to increase code coverage

I want to increase code coverage in JUnit testing. I have many methods, and nearly all of them are void, so I could not write assertions or any other JUnit checks for them. I read in forums that testing a variable inside the void method can increase code coverage, but it still does not work. What do I need to do to increase code coverage in JUnit?
For instance, in the following code I want to test the transform method, not createAngularLines. How can I do that?
Regards
Alper
public void transform(File fileToTransform, LogGenerator logGenerator) {
    createAngularLines(menu, transform);
}

private StringBuilder createAngularLines(Menu menu, Transform transform) {
    StringBuilder angularLines = createInitialAngularLines(transform);
    createBodyOfTheAngularPage(angularLines, transform);
    createNextMethod(angularLines, transform);
    createAngularLines(menu);
    return angularLines;
}
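No answer was recorded for this question, but the usual approach is to test a void method through its observable side effects: the state it mutates, the files it writes, or the calls it makes on its collaborators. A minimal sketch along those lines, where Transformer, LogGenerator, and the output.html location are hypothetical stand-ins modeled on the snippet above:

import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;

public class TransformerTest {

    @Test
    public void transformCreatesAngularOutput() throws Exception {
        // Arrange: placeholder collaborators modeled on the question's snippet.
        File input = File.createTempFile("menu", ".xml");
        LogGenerator logGenerator = new LogGenerator();   // hypothetical type
        Transformer transformer = new Transformer();      // hypothetical type

        // Act: exercise the public void method; the private helpers it calls
        // (createAngularLines etc.) are covered indirectly.
        transformer.transform(input, logGenerator);

        // Assert on an observable side effect instead of a return value,
        // e.g. that the generated file exists and is non-empty.
        File output = new File(input.getParentFile(), "output.html");   // assumed output path
        assertTrue(output.exists());
        assertTrue(output.length() > 0);
    }
}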

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically, probably by reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of (I think) ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {
  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {
  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/"))
  }
  class Search extends GatlingBehaviour { ... }
  class FindResult extends GatlingBehaviour { ... }
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {
  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports or catch an exception if the test fails, e.g. response time too long.
It seems that no matter how I use the exec method, it executes there and then instead of waiting for the scenario to run.
Also I don't know if this is the best approach to take, we'd like to build some reusable blocks for our Gatling tests that can be constructed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
Not that it's technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do, but hopefully someone else will find it useful.
There is an alternative. Gatling is written in Java and Scala, so you can call Gatling.main directly and pass it the arguments you need to run the Simulation you want. The problem is that main explicitly calls System.exit, so you also have to install a custom security manager to prevent it from actually exiting.
You need to know two things:
the class (with the full package) of the Simulation you want to run, for example com.package.your.Simulation1
the path where the binaries are compiled
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
                     "--results-folder", "gatling-results",
                     "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
import java.security.Permission;

public class GatlingSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        // Block Gatling's System.exit call so control returns to us.
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        // Allow everything else.
        return;
    }
}
The problem is then getting the information you want out of the simulation after it has been run.

Creating a JUnit graph from test results

Does JUnit have an out-of-the-box tool to plot the test results of a suite? Specifically, I am using the Selenium 2 WebDriver, and I want to plot passed vs. failed tests. Secondly, I want my test suite to continue even when a test fails; how would I go about doing this? I tried researching the topic, but none of the answers fully address my question.
Thanks in advance!
Should probably put my code in here as well:
@Test
public void test_Suite() throws Exception {
    driver.get("www.my-target-URL.com");
    test_1();
    test_2();
}

@Test
public void test_1() throws Exception {
    // perform test
    assertTrue(myquery);
}

@Test
public void test_2() throws Exception {
    // perform test
    assertTrue(myquery);
}
If you are using Jenkins as your CI server, the JUnit Plugin will let you publish the results at the end of a test run, and the JUnit Graph will display them.
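As for continuing after a failure: JUnit already runs each @Test method independently, so a failing test_1 would not stop test_2; calling them manually from test_Suite() is what couples them. If instead you want several checks inside one test method to all run even when some fail, JUnit 4's ErrorCollector rule is one option. A minimal sketch, with the myQuery values as placeholders:

import static org.hamcrest.CoreMatchers.is;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ContinueOnFailureTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void test_Suite() {
        boolean myQuery1 = true;    // placeholder results
        boolean myQuery2 = false;

        // Each failed check is recorded but does not abort the method;
        // all collected failures are reported together when the test ends.
        collector.checkThat("first check", myQuery1, is(true));
        collector.checkThat("second check", myQuery2, is(true));
    }
}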

How do I sense if my unit test is a member of an ordered test and, if it is, which position in that ordered test it is at?

Environment:
I have a program named CSIS on which I need to run a lot of automated tests in Visual Studio 2010 using C#. I have a series of functions which need to be run in many different orders, but which all start and end at the same 'home screen' of CSIS. The tests will be run either on their own as a single CodedUITest (.cs file) or as an ordered test (.orderedtest file).
Goal:
The goal is to open the CSIS home screen once, no matter which of the unit tests runs first, and then, after all CodedUITests are finished, close CSIS, no matter which unit test runs last. I don't want to create one separate unit test to open CSIS to the home screen and another to close CSIS, as this is very inconvenient for testers to use.
Current Solution Development:
UPDATE: The new big question is: how do I get '[ClassInitialize]' to work?
Additional Thoughts:
UPDATE: I now just need ClassInitialize to execute code at the beginning of a test set and ClassCleanup to execute code at the end.
If you would like the actual code let me know.
Research Update:
Because of Izcd's answer I was able to more accurately research the answer to my own question. I've found an answer online to my problem.
Unfortunately, I don't understand how to implement it in my code. I pasted the code, as shown below in the 'Code' section of this question, and the test runs fine, except that it executes the OpenWindow() and CloseWindow() functions after each test instead of around the whole test set. So ultimately the code does nothing new. How do I fix this?
static private UIMap sharedTest = new UIMap();

[ClassInitialize]
static public void ClassInit(TestContext context)
{
    Playback.Initialize();
    try
    {
        sharedTest.OpenCustomerKeeper();
    }
    finally
    {
        Playback.Cleanup();
    }
}
=====================================================================================
Code
namespace CSIS_TEST
{
    // a ton of 'using' statements are here

    public partial class UIMap
    {
        #region Class Initialization and Cleanup

        static private UIMap sharedTest = new UIMap();

        [ClassInitialize]
        static public void ClassInit(TestContext context)
        {
            Playback.Initialize();
            try
            {
                sharedTest.OpenWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }

        [ClassCleanup]
        static public void ClassCleanup()
        {
            Playback.Initialize();
            try
            {
                sharedTest.CloseWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }

        #endregion
Microsoft's unit testing framework includes ClassInitialize and ClassCleanup attributes, which can be used to mark methods that execute before and after a test run.
( http://msdn.microsoft.com/en-us/library/ms182517.aspx )
Rather than trying to make the unit tests aware of their position, I would suggest it might be better to embed the opening and closing logic of the home screen within methods marked with the aforementioned ClassInitialize and ClassCleanup attributes.
I figured out the answer after a very long process of asking questions on Stack Overflow, Googling, and just experimenting with the code.
The answer is to use AssemblyInitialize and AssemblyCleanup and to write the code for them inside the DatabaseSetup.cs file, which should be auto-generated in your project. You should find that there is already an AssemblyInitialize function in there, but it is very basic, and there is no AssemblyCleanup after it. All you need to do is create a static copy of your UIMap and use it inside AssemblyInitialize to run your OpenWindow() code.
Copy the format of the AssemblyInitialize function to create an AssemblyCleanup function, and add your CloseWindow() function.
Make sure your OpenWindow()/CloseWindow() functions contain only basic code (such as Process.Start/Kill), as any complex variables such as forms will already have been cleaned up and won't work.
Here is the code in my DatabaseSetup.cs:
using System.Data;
using System.Data.Common;
using System.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Data.Schema.UnitTesting;
using System.Windows.Input;
using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;
using Mouse = Microsoft.VisualStudio.TestTools.UITesting.Mouse;
using MouseButtons = System.Windows.Forms.MouseButtons;

namespace CSIS_TEST
{
    [TestClass()]
    public class DatabaseSetup
    {
        static private UIMap uIMap = new UIMap();
        static int count = 0;

        [AssemblyInitialize()]
        public static void InitializeAssembly(TestContext ctx)
        {
            DatabaseTestClass.TestService.DeployDatabaseProject();
            DatabaseTestClass.TestService.GenerateData();
            if (count < 1)
                uIMap.OpenWindow();
            count++;
        }

        [AssemblyCleanup()]
        // Renamed from InitializeAssembly so the cleanup method has its own name.
        public static void CleanupAssembly()
        {
            uIMap.CloseWindow();
        }
    }
}