Using custom objects with Pex testing framework - pex

I am trying to use the Pex and Moles testing frameworks to test my project.
I have only a basic idea of how to use Pex for parameterized testing. Consider methods like these:
void SampleMethod(Employee emp)
{
    // Some business logic
}

void SampleMethod(List<Employee> emps)
{
    // Some business logic
}
How do I test these kinds of methods?

Pex will generate a test for you and Moles will provide the stubs. For example:
[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void someTest()
{
    SCustomer sCustomer = new SCustomer();
    int i;
    i = this.DoSomething((Customer)sCustomer);
    Assert.AreEqual<int>(0, i);
}
The "S" here denotes "Stub" and is a mock object of your dependent class, in your case "Employee" or "SEmployee". Moles does the stubbing based on the interface (IEmployee in your case).
You can then stub out behaviour using anonymous delegates:
customer.GetFirstName = () => "Charlie";
Does that help?
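For the parameterized side of the question, here is a hedged sketch of what a hand-written Pex parameterized unit test for SampleMethod(Employee) might look like (BusinessClass and the assertion comments are illustrative assumptions, not from the original post):

[PexClass(typeof(BusinessClass))] // hypothetical class under test
[TestClass]
public partial class BusinessClassTest
{
    [PexMethod]
    public void SampleMethodTest(Employee emp)
    {
        // Pex explores this method and generates concrete Employee inputs
        // (or you can pass in SEmployee stubs created with Moles).
        PexAssume.IsNotNull(emp);
        var sut = new BusinessClass();
        sut.SampleMethod(emp);
        // Assert on whatever state SampleMethod is expected to change.
    }
}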


Assert in Selenium C#

Is there an Assert class in Selenium C#, just like the one we have in Coded UI tests?
Or should I use the Microsoft.VisualStudio.TestTools.UnitTesting.Assert class to perform asserts in Selenium?
Yes, you would use the Assert class from your unit test framework, in your case MSTest.
The Selenium library does not take responsibility for test-framework functions such as asserts.
You can also use FluentAssertions, which supports many different frameworks including MSTest; that can minimize the changes needed if you ever have to switch test frameworks.
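As a hedged illustration of the FluentAssertions style (the Should() extension methods come from the FluentAssertions package; the driver and element variables are assumed from a typical Selenium test):

// using FluentAssertions;
driver.Title.Should().Be("Expected page title");  // string equality
element.Displayed.Should().BeTrue();               // boolean check
driver.WindowHandles.Count.Should().Be(2);         // numeric comparison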
The Assert class is available in both the MSTest and NUnit frameworks.
I used NUnit, where the Assert class is used as in the line below.
Sample code:
Assert.AreEqual(AmericaEmail, "SupportUsa@Mail.com", "Strings are not matching");
According to https://msdn.microsoft.com/en-us/library/ms182532.aspx, a test class looks like this:
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class UnitTest1
{
    private IWebDriver driver;

    [TestInitialize]
    public void Setup()
    {
        driver = new ChromeDriver();
        driver.Url = "Your URL";
    }

    [TestMethod]
    public void TestMethod1()
    {
        // Your first test method
        var element = driver.FindElement(By.Id("ID"));
        Assert.IsTrue(element.Displayed);
        Assert.AreEqual(element.Text.ToLower(), "Expected text".ToLower());
    }

    [TestMethod]
    public void TestMethod2()
    {
        // Your second test method
    }

    [TestCleanup]
    public void TearDown()
    {
        driver.Quit();
    }
}
You can use MSTest, but I would rather write a simple assertion method of my own. In Selenium, most checks boil down to a Boolean, IsTrue | IsFalse (you can even extend this to more complex assertions), so defining your own assertion gives you more control over your script, for example (a sketch follows after this list):
Taking a screenshot if the assertion fails
Living with the failure and continuing the test
Marking the script as partially passing
Extracting more information from the UI, such as JavaScript errors or server errors
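A hedged sketch of such a custom assertion, assuming Selenium's ITakesScreenshot interface (OpenQA.Selenium) and MSTest's Assert.Fail; the class name and file path are illustrative:

public static class CustomAssert
{
    // Checks a condition and captures a screenshot before failing the test.
    public static void IsTrue(IWebDriver driver, bool condition, string message)
    {
        if (condition) return;

        Screenshot screenshot = ((ITakesScreenshot)driver).GetScreenshot();
        screenshot.SaveAsFile("assert-failure.png"); // illustrative path
        Assert.Fail(message);
    }
}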
To use Assert you first have to create a unit test project in Visual Studio, or add the following namespace to your project:
using Microsoft.VisualStudio.TestTools.UnitTesting;
// With this in place you can use the Assert class.
Assert.IsTrue(bool);
Assert.IsFalse(bool);
Assert.AreEqual(string, string);
Assert.Equals(obj1, obj2); // Caution: Assert.Equals is just object.Equals, not an assertion; use Assert.AreEqual instead
Assert.AreEqual(HomeUrl, driver.Url); // Overloaded comparison; checks for the same value
Assert.AreNotEqual(HomeUrl, driver.Url);
Assert.AreSame("https://www.google.com", URL); // For the same object reference
Assert.IsTrue(driver.WindowHandles.Count.Equals(2));
Assert.IsFalse(driver.WindowHandles.Count.Equals(2));
Assert.IsNull(URL);
Assert.IsNotNull(URL);
Selenium for C# does not offer an Assert class. You have to borrow one from another framework or write your own implementation.
Writing your own implementation gives you more freedom to manage assertions, as Peter said.
Here is the basic idea. Just create a static class with assert methods like this:
public static class MyAssertClass
{
    public static void MyAreEqualMethod(string string1, string string2)
    {
        if (string1 != string2)
        {
            // Throw an exception (or log, take a screenshot, etc.)
            throw new Exception($"Expected '{string2}' but got '{string1}'");
        }
    }
}
MSTest Framework, Assert class: https://learn.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.testtools.unittesting.assert?view=mstest-net-1.3.2
Assert can also be used like this when switching to a displayed popup:
IAlert alert = driver.SwitchTo().Alert();
string alertContent = alert.Text;
Assert.AreEqual(alertContent, "Do you want to save this article as draft?");
obj.waitfn(5000); // custom wait helper from the original code
alert.Accept();
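The obj.waitfn(5000) call above is a custom helper; a hedged alternative using Selenium's WebDriverWait (from OpenQA.Selenium.Support.UI) to wait for the alert to appear instead of sleeping might look like this:

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
IAlert alert = wait.Until(d =>
{
    try { return d.SwitchTo().Alert(); }              // returns the alert once it is present
    catch (NoAlertPresentException) { return null; }  // keep polling until the timeout
});
Assert.AreEqual(alert.Text, "Do you want to save this article as draft?");
alert.Accept();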

GoogleTest: Trying to get an abstract base class with tests and then use derived classes to define multiple test scenarios

In an attempt to do BDD-style testing of some code, I have a set of tests which I want to be performed for multiple scenarios. I have done this many times in C# with NUnit and NSubstitute, but I am struggling to achieve the desired result for C++ code with GoogleTest.
The concept of what I want to do, which does not even compile due to the pure virtual method in BaseTest, is:
class BaseTest : public ::testing::Test {
protected:
    int expected = 0;
    int actual = 0;

    virtual void SetUp() { printf("BaseTest SetUp()\r\n"); }
    virtual void TearDown() { printf("BaseTest TearDown()\r\n"); }
    virtual void PureVirtual() = 0;
};

TEST_F(BaseTest, BaseTest1)
{
    printf("BaseTest BaseTest1\r\n");
    ASSERT_EQ(expected, actual);
}

class ScenarioOne : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
    virtual void PureVirtual() {}
};

class ScenarioTwo : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }
    virtual void PureVirtual() {}
};
The above code is greatly simplified: the BaseTest class would have 30+ tests defined, and the Scenario classes would have extensive, complicated input data to exercise the code being tested, with sizeable and non-trivial expected results. Hence the idea of defining the input data and expected results in a derived class's SetUp() method and stimulating the code under test with that input. The tests in the base class would then compare the various actual results against the expected results and pass/fail as appropriate.
I have considered using parameterized tests, but the complex nature of the input data and expected results makes that look difficult; in addition, I believe each new test scenario would mean modifying every test to accept the input data and expected results as an additional parameter.
As I said earlier, I can do this sort of thing easily in C#, but sadly I am working on a C++ project at this time. Is what I'm trying to do possible with GoogleTest?
OK - I've just thought of a potential solution.
Put all the tests in a header file like this:
// Tests.h - Tests to be performed for all test scenarios
TEST_F(SCENARIO_NAME, test1)
{
    ASSERT_EQ(expected, actual);
}
The BaseTest class would just have basic SetUp()/TearDown() methods, member variables to hold the expected and actual results, plus any helper functions for the derived scenario classes, but no tests, so it could be abstract if wanted.
Then for each scenario:
class ScenarioOne : public BaseTest
{
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
};

#define SCENARIO_NAME ScenarioOne
#include "Tests.h"
#undef SCENARIO_NAME // so the next scenario can define its own name without a macro redefinition warning
The resultant effect is a set of tests defined once which can then be applied to multiple test scenarios.
It does feel like a bit of a cheat, so I'm interested to hear if anyone has a better way of doing it.

Automocking with LightInject plus NSubstitute, how?

I am new to both libraries, and before committing to their use on a large project I need clarification on my options for low-effort automocking in my unit tests.
After spending some time on Google I have concluded that, unlike some other IoC/mocking product pairings, there is no ready-made plugin library for LightInject + NSubstitute that simplifies the declaration of do-nothing default mocks in the arrange stage of a unit test.
I have read the LightInject docs on how to override a LightInject container registration with a temporary enhanced mock object just for the scope of a unit test, but what about all the do-nothing default isolation mocks that a unit test might touch? Is there a way to automate their creation within the LightInject container?
The internal IoC container behaviour I am looking for is (pseudocode):
// Pseudocode for LightInject.ServiceContainer
public class ServiceContainer
{
    // ...
    public T GetInstance<T>()
    {
        if (!this.RegisteredInterfaces.Any(i => i.IType == typeof(T))
            && !this.TemporaryUnitTestOverrides.Any(i => i.IType == typeof(T))
            && /* this container is configured with an automocking delegate */ true)
        {
            return autoMockCreatorDelegate<T>.Invoke();
        }
        // ... otherwise resolve as normal
    }
}
It seems like LightInject's IProxy and interceptors provide some internal mock-object building blocks, but the NSubstitute library is full-featured in comparison.
To clarify what I mean by a default do-nothing mock versus an enhanced mock:
// default do nothing mock
var calculator = Substitute.For<ICalculator>();
// Enhanced mock that will return 3 for .Add(1,2)
var calculator = Substitute.For<ICalculator>();
calculator.Add(1, 2).Returns(3);
Obviously the second enhanced type of mock will need to be crafted locally per unit test.
I am the author of LightInject and would really like to help you out.
Let me look into this and get back to you. In the meantime you might want to check out LightInject.AutoMoq, which is a third-party contribution to the LightInject container. It uses Moq instead of NSubstitute, but the general concept should be similar to what you are asking for.
That being said, I did some work a while ago that simplifies automocking even further; I will take a look at it and see how it can be integrated with NSubstitute.
Edit:
This is a super simple automocking implementation that works with any "substitute" framework.
using System;
using System.Diagnostics;
using LightInject;
using NSubstitute;

public interface IFoo { }

class Program
{
    static void Main(string[] args)
    {
        var serviceContainer = new ServiceContainer();

        // Any service type without an explicit registration falls back to an NSubstitute mock.
        serviceContainer.RegisterFallback((type, s) => true, request => CreateMock(request.ServiceType));

        var foo = serviceContainer.GetInstance<IFoo>();
        Debug.Assert(foo is IFoo);
    }

    private static object CreateMock(Type serviceType)
    {
        return Substitute.For(new Type[] { serviceType }, null);
    }
}
Best regards
Bernhard Richter
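Building on that fallback registration, here is a hedged sketch of how an individual test could override the automocked default with a configured substitute. It assumes LightInject's RegisterInstance, reuses the ICalculator example from the question, and relies on explicit registrations taking precedence over the fallback:

// Continuing from the Main method above:
var calculator = Substitute.For<ICalculator>();
calculator.Add(1, 2).Returns(3);

// An explicit registration wins over the RegisterFallback automock,
// so this container now resolves the configured substitute.
serviceContainer.RegisterInstance<ICalculator>(calculator);

var resolved = serviceContainer.GetInstance<ICalculator>();
Debug.Assert(resolved.Add(1, 2) == 3);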
Some feedback, as promised in my comment on the accepted answer. I applied the suggestion from the author of LightInject with success in some simple unit tests.
After getting the basics working, I decided to hide the IoC service mocking setup code in a base class plus something I have called a MockingContext; the end result is cleaner, lighter unit test code. The mocking context class also ensures that for each NSubstitute-configured mock type passed to the IoC service as a short-term automock override, there is a matching LightInject.Service.EndMocking(T) call. This removes the danger that configured mocks might pollute the automocking assumptions of a following unit test.
In the example, ClassC depends on IFooA and IFooB (no constructor injection). For the unit test below, IFooA is automocked by LightInject without explicit code, whereas IFooB is configured via an NSubstitute call and also passed to LightInject in the MockingContext.Add<>() method.
[TestClass]
public class UnitTest1 : AutoMocking
{
    [TestMethod]
    public void Test_1()
    {
        using (var mc = MockingContext)
        {
            // No need to mention IFooA here; LightInject will automock
            // any interface not previously declared to it.

            // Given
            var mockB = mc.Add<IFooB>();
            mockB.MethodY().Returns("Mock Value OOO");
            var sut = new ClassC();

            // When
            var testResult = sut.MethodZ();

            // Then
            Assert.AreEqual(testResult, "MethodZ() received=Mock Value OOO");
        }
    }
}

Can you apply aspects in PostSharp without using attributes?

I know that with Castle Windsor you can register aspects (when using method interception in Windsor as AOP) in code instead of applying attributes to classes. Is the same possible in PostSharp? It's a preference thing, but I prefer to have aspects matched to interfaces/objects in one place, as opposed to attributes scattered all over.
Update:
I'm curious whether I can assign aspects to interfaces/objects similar to this:
container.Register(
    Component
        .For<IService>()
        .ImplementedBy<Service>()
        .Interceptors(InterceptorReference.ForType<LoggingAspect>()).Anywhere
);
If I could do this, I would have the option of NOT having to place attributes on assemblies/classes/methods to apply aspects. I could then have one code file/class that records which aspects are applied to which classes/methods, etc.
Yes. You can either use multicasting (http://www.sharpcrafters.com/blog/post/Day-2-Applying-Aspects-with-Multicasting-Part-1.aspx , http://www.sharpcrafters.com/blog/post/Day-3-Applying-Aspects-with-Multicasting-Part-2.aspx) or you can use aspect providers (http://www.sharpcrafters.com/blog/post/PostSharp-Principals-Day-12-e28093-Aspect-Providers-e28093-Part-1.aspx , http://www.sharpcrafters.com/blog/post/PostSharp-Principals-Day-13-e28093-Aspect-Providers-e28093-Part-2.aspx).
Example:
using System;
using PostSharp.Aspects;
using PostSharp.Extensibility;

[assembly: PostSharpInterfaceTest.MyAspect(AttributeTargetTypes = "PostSharpInterfaceTest.Interface1", AttributeInheritance = MulticastInheritance.Multicast)]

namespace PostSharpInterfaceTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Example e = new Example();
            Example2 e2 = new Example2();
            e.DoSomething();
            e2.DoSomething();
            Console.ReadKey();
        }
    }

    class Example : Interface1
    {
        public void DoSomething()
        {
            Console.WriteLine("Doing something");
        }
    }

    class Example2 : Interface1
    {
        public void DoSomething()
        {
            Console.WriteLine("Doing something else");
        }
    }

    interface Interface1
    {
        void DoSomething();
    }

    [Serializable]
    class MyAspect : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            Console.WriteLine("Entered " + args.Method.Name);
        }
    }
}
If you have complex requirements for determining which types get certain aspects, I recommend creating an aspect provider instead.
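As a rough illustration of that idea, here is a hedged sketch of an aspect provider. It assumes PostSharp's IAspectProvider/AspectInstance API (plus using System, System.Collections.Generic and System.Linq), reuses the MyAspect class from the example above, and the method-selection rule is purely illustrative:

[Serializable]
class MyAspectProvider : TypeLevelAspect, IAspectProvider
{
    // Called at build time; returns one aspect instance per qualifying method of the target type.
    public IEnumerable<AspectInstance> ProvideAspects(object targetElement)
    {
        var type = (Type)targetElement;
        return type.GetMethods()
                   .Where(m => m.Name.StartsWith("Do"))  // illustrative selection rule
                   .Select(m => new AspectInstance(m, new MyAspect()));
    }
}

You would then multicast MyAspectProvider onto the relevant types (for example via an assembly-level attribute, as in the example above), and it decides in code which methods actually receive MyAspect.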
Have a look at LOOM.NET; it has a post-compiler and a runtime weaver. With the latter you are able to achieve exactly what you want.
It should be possible to use the PostSharp XML configuration. The XML configuration is the unification of the plug-in and project models in the project loader.
A description of .psproj can be found at http://www.sharpcrafters.com/blog/post/Configuring-PostSharp-Diagnostics-Toolkits.aspx.
Note that I've only seen examples of how the PostSharp Toolkits use this XML configuration, but it should work the same way for custom aspects.
Warning: I've noticed that installing a PostSharp Toolkit from NuGet overwrites the existing .psproj file, so do not forget to back it up.

Architecture of some reusable code

I am writing a number of small, simple applications which share a common structure and need to do some of the same things in the same ways (e.g. logging, database connection setup, environment setup), and I'm looking for advice on structuring the reusable components. The code is written in a strongly and statically typed language (e.g. Java or C#; I've had to solve this problem in both). At the moment I've got this:
abstract class EmptyApp // this is the reusable bit
{
    // various useful fields: loggers, db connections

    abstract function body()

    function run()
    {
        // do setup
        this.body()
        // do cleanup
    }
}

class theApp extends EmptyApp // this is a given app
{
    function body()
    {
        // do stuff using some fields from EmptyApp
    }

    function main()
    {
        theApp app = new theApp()
        app.run()
    }
}
Is there a better way? Perhaps as follows? I'm having trouble weighing the trade-offs...
abstract class EmptyApp
{
    // various fields
}

class ReusableBits
{
    static function doSetup(EmptyApp theApp)
    static function doCleanup(EmptyApp theApp)
}

class theApp extends EmptyApp
{
    function main()
    {
        ReusableBits.doSetup(this);
        // do stuff using some fields from EmptyApp
        ReusableBits.doCleanup(this);
    }
}
One obvious tradeoff is that with option 2, the 'framework' can't wrap the app in a try-catch block...
I've always favoured reuse through composition (your second option) rather than inheritance (your first option).
Inheritance should only be used when there is a genuine is-a relationship between the classes, not just for code reuse.
So for your example I would have multiple ReusableBits classes, each doing one thing, which each application can make use of as and when required (see the sketch below).
This lets each application reuse the parts of your framework that are relevant to it without being forced to take everything, allowing the individual applications more freedom. Reuse through inheritance can become very restrictive if future applications don't exactly fit into the structure you have in mind today.
You will also find unit testing and test-driven development much easier if you break your framework up into separate utilities.
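A hedged sketch of what that composition might look like in C# (the class and method names are purely illustrative):

// One small utility per concern; each app composes only what it needs.
public static class EnvironmentSetup
{
    public static void Apply() { /* read config, set up the environment */ }
}

public static class LoggingSetup
{
    public static void Configure(string appName) { /* wire up the logger of your choice */ }
}

class TheApp
{
    static void Main()
    {
        EnvironmentSetup.Apply();
        LoggingSetup.Configure("TheApp");
        // app-specific work here
    }
}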
Why not make the framework call into your customisable code? Your client creates some object and injects it into the framework. The framework initialises, calls setup() etc., and then calls your client's code. Upon completion (or even after a thrown exception), the framework then calls cleanup() and exits.
So your client would simply implement an interface such as (in Java):
public interface ClientCode {
    void runClientStuff(); // for the sake of argument
}
and the framework code is configured with an implementation of this, and calls runClientStuff() whenever required.
So you don't derive from the application framework, but simply provide a class conforming to a particular contract. You can configure the application setup at runtime (e.g. which class the client will provide to the app), since you're not deriving from the app and so your dependency isn't static.
The above interface can be extended to have multiple methods, and the application can call the required methods at different stages in the lifecycle (e.g. to provide client-specific setup/cleanup), but that's an example of feature creep :-)
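A hedged C# sketch of this approach, including the try/finally wrapping that the first option wanted from the framework (AppRunner and the member names are illustrative, not from the answer):

public interface IClientCode
{
    void RunClientStuff();
}

public class AppRunner
{
    private readonly IClientCode client;

    public AppRunner(IClientCode client) { this.client = client; }

    public void Run()
    {
        DoSetup();
        try
        {
            client.RunClientStuff(); // client code injected at construction time
        }
        finally
        {
            DoCleanup();             // runs even if the client code throws
        }
    }

    private void DoSetup() { /* loggers, db connections, environment */ }
    private void DoCleanup() { /* flush logs, close connections */ }
}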
Remember, inheritance is only a good choice if all the objects that inherit reuse the code due to their similarities, or if you want callers to be able to interact with them in the same fashion.
If what I just mentioned applies to you, then based on my experience it's always better to have the common logic in your base/abstract class.
This is how I would rewrite your sample app in C#:
abstract class BaseClass
{
    string field1 = "Hello World";
    string field2 = "Goodbye World";

    public void Start()
    {
        Console.WriteLine("Starting.");
        Setup();
        CustomWork();
        Cleanup();
    }

    public virtual void Setup()
    { Console.WriteLine("Doing Base Setup."); }

    public virtual void Cleanup()
    { Console.WriteLine("Doing Base Cleanup."); }

    public abstract void CustomWork();
}

class MyClass : BaseClass
{
    public override void CustomWork()
    { Console.WriteLine("Doing Custom work."); }

    public override void Cleanup()
    {
        Console.WriteLine("Doing Custom Cleanup");
        // You can skip the next line if you want to replace the
        // cleanup code rather than extend it
        base.Cleanup();
    }
}
static void Main()
{
    MyClass worker = new MyClass();
    worker.Start();
}