Invoke RFT method from java class - rft

We are building an Excel-based IBM RFT framework, for which we want to invoke individual RFT methods (not the test script that can be executed from the command line) from an external Java file.
Does anyone have any insight on this?
Thanks in advance.
-Sreenisha Sreenivasan

How about making the RFT script the driver script, reading the Excel file, and executing the methods:
public void testMain(Object[] args)
{
    String method = getNextAction();
    TestObject target = getNextTarget();
    // Using RFT's reflection helper here; your own implementation may be preferred
    FtReflection.invokeMethod(method, target);
}

// Get the next object to perform an action on
TestObject getNextTarget()
{
    // Find the object here; it may come from the object map or from RFT's find() API
    return untitledNotepadwindow();
}

// Get the next action to be performed. This should really return an
// object that has a method name to invoke and the arguments to be passed.
String getNextAction()
{
    return "close";
}

Related

How to Take Screenshot when TestNG Assert fails?

String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be put before your captureScreen call, because a failed assert immediately aborts the test method, so your
captureScreen(d, "Fail");
call will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then assert on the recorded outcome
Assert.assertEquals(result, true);
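The snippets above call a captureScreen(d, "Fail") helper that the post never defines. Here is one plausible sketch using Selenium's TakesScreenshot; the screenshots folder and file-naming scheme are assumptions:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenshotUtil {
    // Capture the current browser window and save it under screenshots/<name>.png.
    public static void captureScreen(WebDriver d, String name) throws IOException {
        File src = ((TakesScreenshot) d).getScreenshotAs(OutputType.FILE);
        Path dest = Paths.get("screenshots", name + ".png");
        Files.createDirectories(dest.getParent());
        Files.copy(src.toPath(), dest, StandardCopyOption.REPLACE_EXISTING);
    }
}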
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want our tests to be ugly by adding try/catch to every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {
    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want allure-reports to take a screenshot every time a test fails, so we will add this built-in attachment method inside our listeners class
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {
    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; mind that (listeners.class) can be whatever class you named your listener.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we enabled our listeners class for it.

How to test with Response.OnCompleted delegate in a finally block

I have the following .NET Core 2.2 controller method that I am trying to write an xUnit integration test for:
private readonly ISoapSvc _soapSvc;
private readonly IRepositorySvc _repositorySvc;

public SnowConnectorController(ISoapSvc soapSvc, IRepositorySvc repositorySvc)
{
    _soapSvc = soapSvc;
    _repositorySvc = repositorySvc;
}

[Route("accept")]
[HttpPost]
[Produces("text/xml")]
public async Task<IActionResult> Accept([FromBody] XDocument soapRequest)
{
    try
    {
        var response = new CreateRes
        {
            Body = new Body
            {
                Response = new Response
                {
                    Status = "Accepted"
                }
            }
        };
        return Ok(response);
    }
    finally
    {
        // After the first API call completes
        Response.OnCompleted(async () =>
        {
            // Run the close method
            await Close(soapRequest);
        });
    }
}
The try block runs and does the things it needs to, then the finally block runs and does the things it needs to do after the request finishes, per the design.
Close is now a private method. It started as a public controller method, but I don't need to expose it for this to work, so I moved it to private.
Here's an integration test I started with the intention of just testing the try portion of the code:
[Fact]
public async Task AlwaysReturnAcceptedResponse()
{
    // Arrange------
    // Build mocks so that we can inject them in our system under test's constructor
    var mockSoapSvc = new Mock<ISoapSvc>();
    var mockRepositorySvc = new Mock<IRepositorySvc>();
    // Build system under test (sut)
    var sut = new SnowConnectorController(mockSoapSvc.Object, mockRepositorySvc.Object);
    var mockRequest = XDocument.Load("..\\..\\..\\mockRequest.xml");
    // Act------
    // Form and send test request to test system
    var actualResult = await sut.Accept(mockRequest);
    var actualValue = actualResult.GetType().GetProperty("Value").GetValue(actualResult);
    // Assert------
    // The returned object from the method call should be of type CreateRes
    Assert.IsType<CreateRes>(actualValue);
}
I am super new to testing... I've been writing the test and feeling my way through the problem. I started by stepping into the controller method, not really knowing where it would go. The test works through the try block, and then an exception is thrown once it hits the delegate in the finally block.
It looks like my test will have to run through to the results of the finally block, unless there is a way to tell it to stop once the try block has executed?
That's fine, I'm learning, but the problem with that approach is that the HttpResponse's Response.OnCompleted delegate in the finally block is null while my test is running, and I haven't been successful at figuring out how to keep it from being null. Because it is null, my unit test throws:
System.NullReferenceException: 'Object reference not set to an instance of an object.'
One thought that occurred to me was that if I made the private Close method a public controller method, removed the finally block from Accept, and created a third controller method that strings the two together with the try/finally behavior, I could just test the individual controller methods. However, it doesn't feel right, because I would be exposing methods just for the sake of unit testing, and Close doesn't need to be exposed.
If the above idea is not the right approach, I am wondering what is, and if I just need to test end to end, how I would get past the null HttpResponse?
Any ideas would be appreciated. Thank you, SO community!
EDIT - Updated Test that works after the accepted answer was implemented. Thanks!
[Fact]
public async Task AlwaysReturnAcceptedResponse()
{
    // Arrange------
    // Build mocks so that we can inject them in our system under test's constructor
    var mockSoapSvc = new Mock<ISoapSvc>();
    var mockRepositorySvc = new Mock<IRepositorySvc>();
    // Build system under test (sut)
    var sut = new SnowConnectorController(mockSoapSvc.Object, mockRepositorySvc.Object)
    {
        // Supply a mocked ControllerContext and HttpContext so that the finally block doesn't fail the test
        ControllerContext = new ControllerContext
        {
            HttpContext = new DefaultHttpContext()
        }
    };
    var mockRequest = XDocument.Load("..\\..\\..\\mockRequest.xml");
    // Act------
    // Form and send test request to test system
    var actualResult = await sut.Accept(mockRequest);
    var actualValue = actualResult.GetType().GetProperty("Value").GetValue(actualResult);
    // Assert------
    // The returned object from the method call should be of type CreateRes
    Assert.IsType<CreateRes>(actualValue);
}
Curious what you are doing in the Close method with the input parameter.
Does it have to happen after the response is sent? It might not always happen as you would expect; see here.
Regardless, at runtime the ASP.NET Core runtime sets a lot of properties on the controller, including ControllerContext, HttpContext, Request, Response, etc.
Those won't be available in unit tests, since there is no ASP.NET Core runtime there.
If you really want to test this, you'll have to mock them.
Here is the ControllerBase source code.
As we can see, ControllerBase.Response simply returns ControllerBase.HttpContext.Response, and ControllerBase.HttpContext is a getter on ControllerBase.ControllerContext. This means you'll have to mock a ControllerContext (and the nested HttpContext as well as HttpResponse) and assign it to your controller in the setup phase.
Furthermore, the OnCompleted callback won't get called in a unit test either. If you want to unit test that part, you'll have to trigger it manually.
Personally, I think it's too much hassle, besides the open bug I mentioned above.
I would suggest you move the closing logic (if it's really necessary) to an IDisposable scoped service and handle it in Dispose instead, assuming it's not a computation-heavy operation that could impact response latency.

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {
  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {
  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }
  class Search extends GatlingBehaviour {...}
  class FindResult extends GatlingBehaviour {...}
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {
  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }
  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }
  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports or catch an exception if the test fails, e.g. response time too long.
It seems that no matter how I use the 'exec' method, it tries to execute instantly, there and then, rather than waiting for the scenario to run.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be constructed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
It's not that it's technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do; hopefully someone else will find this useful.
There is an alternative. Gatling is written in Java and Scala, so you can call Gatling.main directly and pass it the arguments you need to run the Gatling simulation you want. The problem is that main explicitly calls System.exit, so you also have to use a custom security manager to prevent it from actually exiting.
You need to know two things:
- the class (with the full package) of the Simulation you want to run, for example com.package.your.Simulation1
- the path where the binaries are compiled.
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
            "--results-folder", "gatling-results",
            "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
import java.security.Permission;

public class GatlingSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        // Intercept Gatling's System.exit so the JVM keeps running
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
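A hypothetical call site for the helper, reusing the placeholder simulation name from above (the binaries folder depends on your build tool):

// Assumes this runs inside the class that defines fire();
// "target/test-classes" is where Maven puts compiled test code.
fire("com.package.your.Simulation1", "target/test-classes");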
The problem is then getting the information you want out of the simulation after it has been run.
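One hedged possibility, not part of the answer above: Gatling's HTML report folder typically contains a js/global_stats.json summary, so after fire() returns you could locate the newest run under gatling-results and parse that file. The exact layout depends on your Gatling version, so treat this as a sketch; readGlobalStats is a hypothetical helper name.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;

// Hypothetical helper: find the newest run under gatling-results and
// return the raw JSON summary Gatling writes for its HTML report.
static String readGlobalStats() throws IOException {
    Path results = Paths.get("gatling-results");
    try (var dirs = Files.list(results)) {
        Path newest = dirs.filter(Files::isDirectory)
                .max(Comparator.comparingLong((Path p) -> p.toFile().lastModified()))
                .orElseThrow(() -> new IOException("no results found"));
        return Files.readString(newest.resolve("js/global_stats.json"));
    }
}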

How do I sense if my unit test is a member of an ordered test and, if it is, which position in that ordered test it is at?

Environment:
I have a program - named CSIS - which I need to run a lot of automated tests on in Visual Studio 2010 using C#. I have a series of functions which need to be run in many different orders but which all start and end at the same 'home screen' of CSIS. The tests will either be run on their own as a single CodedUITest (.cs filetype) or as an ordered test (.orderedtest filetype).
Goal:
The goal is to open CSIS to the homepage once, no matter which of the unit tests is run first, and then, after all CodedUITests are finished, no matter which unit test is last, close CSIS. I don't want to create a separate unit test to open CSIS to the homepage and another to close CSIS, as this is very inconvenient for testers to use.
Current Solution Development:
UPDATE: The new big question is how do I get '[ClassInitialize]' to work?
Additional Thoughts:
UPDATE: I now just need ClassInitialize to execute code at the beginning and ClassCleanUp to execute code at the end of a test set.
If you would like the actual code let me know.
Research Update:
Because of Izcd's answer I was able to more accurately research the answer to my own question. I've found an answer online to my problem.
Unfortunately, I don't understand how to implement it in my code. I pasted the code as shown below in the 'Code' section of this question and the test runs fine except that it executes the OpenWindow() and CloseWindow() functions after each test instead of around the whole test set. So ultimately the code does nothing new. How do I fix this?
static private UIMap sharedTest = new UIMap();

[ClassInitialize]
static public void ClassInit(TestContext context)
{
    Playback.Initialize();
    try
    {
        sharedTest.OpenCustomerKeeper();
    }
    finally
    {
        Playback.Cleanup();
    }
}
=====================================================================================
Code
namespace CSIS_TEST
{
    // a ton of 'using' statements are here

    public partial class UIMap
    {
        #region Class Initialization and Cleanup
        static private UIMap sharedTest = new UIMap();

        [ClassInitialize]
        static public void ClassInit(TestContext context)
        {
            Playback.Initialize();
            try
            {
                sharedTest.OpenWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }

        [ClassCleanup]
        static public void ClassCleanup()
        {
            Playback.Initialize();
            try
            {
                sharedTest.CloseWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }
        #endregion
Microsoft's unit testing framework includes ClassInitialize and ClassCleanup attributes, which can be used to mark methods that execute before and after a test run.
(http://msdn.microsoft.com/en-us/library/ms182517.aspx)
Rather than trying to make the unit tests aware of their position, I would suggest it might be better to embed the opening and closing logic of the home screen within the aforementioned ClassInitialize- and ClassCleanup-marked methods.
I figured out the answer after a very long process of asking questions on StackOverflow, Googling, and just screwing around with the code.
The answer is to use AssemblyInitialize and AssemblyCleanup and to write the code for them inside the DatabaseSetup.cs file, which should be auto-generated in your project. You should find that there is already an AssemblyInitialize function in there, but it is very basic and there is no AssemblyCleanup after it. All you need to do is create a static copy of your UIMap and use it inside the AssemblyInitialize to run your OpenWindow() code.
Copy the format of the AssemblyInitialize function to create an AssemblyCleanup function and add your CloseWindow() function.
Make sure your Open/CloseWindow functions contain only basic code (such as Process.Start/Kill), as any complex variables such as forms will already have been cleaned up and won't work.
Here is the code in my DatabaseSetup.cs:
using System.Data;
using System.Data.Common;
using System.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Data.Schema.UnitTesting;
using System.Windows.Input;
using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;
using Mouse = Microsoft.VisualStudio.TestTools.UITesting.Mouse;
using MouseButtons = System.Windows.Forms.MouseButtons;

namespace CSIS_TEST
{
    [TestClass()]
    public class DatabaseSetup
    {
        static private UIMap uIMap = new UIMap();
        static int count = 0;

        // Runs once before any test in the assembly; opens the CSIS window.
        [AssemblyInitialize()]
        public static void InitializeAssembly(TestContext ctx)
        {
            DatabaseTestClass.TestService.DeployDatabaseProject();
            DatabaseTestClass.TestService.GenerateData();
            if (count < 1)
                uIMap.OpenWindow();
            count++;
        }

        // Runs once after all tests in the assembly; closes the CSIS window.
        [AssemblyCleanup()]
        public static void CleanupAssembly()
        {
            uIMap.CloseWindow();
        }
    }
}

Using Rhino Mocks, why does invoking a mocked method on a property during test initialization return Expected call #1, Actual call #0?

I currently have a test which tests the presenter I have in the MVP model. On my presenter I have a property whose setter calls into my View, which in my test is mocked out. In the initialization of my test, after I set my View on the Presenter to be the mocked View, I set this property on the Presenter, which calls the method.
In my test I do not have an Expect.Call for the method I invoke, yet when I run I get this Rhino Mocks exception:
Rhino.Mocks.Exceptions.ExpectationViolationException: IView.MethodToInvoke(); Expected #1, Actual #0.
From what I understand of Rhino Mocks, as long as I am invoking on the mock outside the expect block, it should not be recording this, so I would expect the test to pass. Is there a reason it is not passing?
Below is some code to show my setup.
public class Presenter
{
    public IView View;

    public Presenter(IView view)
    {
        View = view;
    }

    private int _property;
    public int Property
    {
        get { return _property; }
        set
        {
            _property = value;
            View.MethodToInvoke();
        }
    }
}
... Test Code Below ...
[TestInitialize]
public void Initialize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);
    _presenter.Property = 1;
}

[TestMethod]
public void Test()
{
    Rhino.Mocks.With.Mocks(_mocks).Expecting(delegate
    {
    }).Verify(delegate
    {
        _presenter.SomeOtherMethod();
    });
}
Why in the world would you want to test the same thing each time a test is run?
If you want to test that a specific thing happens, you should check that in a single test.
The pattern you are using now implies that you need to:
- set up prerequisites for testing
- do the behavior
- check that the behavior is correct
and then repeat that several times in one test.
You need to start testing one thing per test; that helps make the tests clearer and makes it easier to use the AAA syntax.
There are several things to discuss here, but it certainly would be clearer if you did it something like:
[TestMethod]
public void ShouldCallInvokedMethodWhenSettingProperty()
{
    var viewMock = MockRepository.GenerateMock<IView>();
    var presenter = new Presenter(viewMock);
    presenter.Property = 1;
    viewMock.AssertWasCalled(view => view.InvokedMethod());
}
Read up more on Rhino Mocks 3.5 syntax here: http://ayende.com/Wiki/Rhino+Mocks+3.5.ashx
What exactly are you trying to test in the Test method?
You should try to avoid using strict mocks.
I suggest using Rhino's AAA syntax (Arrange, Act, Assert).
The problem lay in my not understanding the record/verify cycle that goes on with strict mocks. To fix the issue, I changed my TestInitialize function as shown below. It basically does a quick test of the initial state I'm setting up for all my tests.
[TestInitialize]
public void Initialize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);

    Expect.Call(delegate { _presenter.View.InvokedMethod(); });
    _mocks.ReplayAll();
    _mocks.VerifyAll();
    _mocks.BackToRecordAll();

    _presenter.Property = 1;
}