What do I want to do?
I want to create and consume Java objects in PowerBuilder and call methods on them. This should happen with as little overhead as possible.
I do not want to consume java webservices!
So I have a working sample in which I can create a Java object, call a method on it, and output the result of the called method.
Everything is working as expected. I'm using Java 1.8.0_31.
But now I want to attach my java IDE (IntelliJ) to the running JVM (started by PowerBuilder) to debug the java code which gets called by PowerBuilder.
And now my question.
How do I tell PowerBuilder to add special options when starting the JVM?
Specifically, I want to add the following option(s) in some way:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
The JVM is created like following:
LONG ll_result
inv_java = CREATE JavaVM
ll_result = inv_java.CreateJavaVM("C:\Development\tms java\pbJavaTest", FALSE)
CHOOSE CASE ll_result
	CASE 1
	CASE 0
	CASE -1
		MessageBox ( "", "jvm.dll was not found in the classpath." )
	CASE -2
		MessageBox ( "", "pbejbclient90.jar file was not found." )
	CASE ELSE
		MessageBox ( "", "Unknown result (" + String (ll_result) + ")" )
END CHOOSE
In the PowerBuilder help I found something about overriding the static registry classpath. There is also something written about custom properties, which sounds like what I'm looking for.
But there's no example of how to add JVM options to override the default behavior.
Does anyone have a clue on how to tell PowerBuilder to use my options?
Or does anyone have any advice which could guide me in the right direction?
Update 1
I found an old post which solved my initial issue.
If someone else wants to know how it works, take a look at this post:
http://nntp-archive.sybase.com/nntp-archive/action/article/%3C46262213.6742.1681692777#sybase.com%3E
Hi, you need to set some windows registry entries.
Under HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\Powerbuilder\9.0\Java, there
are two folders: PBIDEConfig and PBRTConfig. The first one is used when
you run your application from within the IDE, and the latter is used
when you run your compiled application. Those two folders can have
PBJVMconfig and PBJVMprops folders within them.
PBJVMconfig is for JVM configuration options such as -Xms. You have to
specify incremental key values starting from "0", plus one special
key, "Count", to tell PowerBuilder how many options exist to enumerate.
PBJVMprops is for all -D options. You do not need to specify -D for
PBJVMProps, just the name of the property and its value, and as many
properties as you wish.
Let me give some examples:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBIDEConfig\PBJVMprops]
"java.security.auth.login.config"="auth.conf"
"user.language"="en"
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBRTConfig\PBJVMconfig]
"0"="-client"
"1"="-Xms128m"
"2"="-Xmx512m"
"Count"="3"
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBRTConfig\PBJVMprops]
"java.security.auth.login.config"="auth.conf"
"user.language"="en"
Regards,
Gokhan Demir
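Applying that scheme to my original question, the JDWP debug option from above just becomes another PBJVMconfig entry. A sketch (untested; adjust the 9.0 part of the path to the PowerBuilder version you actually use, and bump "Count" if you add further options):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBIDEConfig\PBJVMconfig]
"0"="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
"Count"="1"

With that in place, IntelliJ can attach via a Remote debug configuration pointing at port 5005.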
But now there's another issue...
PB isn't able to create EJB proxies for my sample class (which is really simple) with Java 1.8.0_31. With the default version, 1.6.0_24, they were created without problems.
public class Simple
{
    public Simple()
    {
    }

    public static String getValue()
    {
        return "blubber";
    }

    public int getInt32Value()
    {
        return 123456;
    }

    public double getDoubleValue()
    {
        return 123.123;
    }

    public static void main(String[] args)
    {
        System.out.println(Simple.getValue());
    }
}
The error is the following. :D
---------- Deploy: Deploy of project p_genapp_ejbclientproxy (15:35:18)
Retrieving PowerBuilder Proxies from EJB...
Generation Errors: Error: class not found: (
Deployment Error: No files returned for package/component 'Simple'. Error code: Unknown. Proxy was not created.
Done.
---------- Finished Deploy of project p_genapp_ejbclientproxy (15:35:19)
So this whole approach isn't an option, because we do not want to change the Java settings in PB back and forth just to generate new EJB proxies for changed Java objects in the future...
So one option to test will be creating COM wrappers for the Java classes in order to use them in PB...
Because I think I will recommend Codename One for development, I am trying to investigate it more deeply. I just tried out the Test Recorder, which generated a test class.
Now my question: how do I use this test class? Do I have to call the test method from the existing UI code, e.g. using a button to start it?
Generated code:
public class RegisterUserATest extends AbstractTest {
    public boolean runTest() throws Exception {
        clickButtonByName("Register");
        keyPress(16);
        keyPress(65);
        waitFor(112);
        keyPress(65);
        setText("Name", "A");
        keyPress(16);
        keyPress(65);
        waitFor(113);
        keyPress(16);
        waitFor(1);
        keyPress(97);
        setText("Email", "");
        setText("Password", "A");
        clickButtonByName("Register");
        return true;
    }
}
I think the solution is very easy but I cannot see it.
If this is NetBeans, right-click the project and select "Test". On IntelliJ IDEA it's under Codename One -> Run Tests.
Notice that the latter has a bug in it that will be fixed in the release coming tomorrow (October 7th 2016).
I have the following test case that I want to debug in IntelliJ. I don't want to use the @Optional("defaultValue") annotation because I want to debug the test with a real value that changes every time I debug. It is not handy to set default values every time I want to run tests.
@Test(parameters = { "param1" })
public void testExample(String param1) {
    // do something with param1
}
So, is there a way to define the test data somewhere in IntelliJ so that when I right-click and debug, it picks up the value, i.e. param1? Or maybe there is a TestNG plugin to do that?
NOTE: I don't want to use command-line Maven + Surefire.
Just configure the parameters part of the IntelliJ runner: https://www.jetbrains.com/help/idea/2016.1/run-debug-configuration-testng.html?origin=old_help#config
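If you prefer keeping the values out of the IDE dialog, you can also point the TestNG run/debug configuration at a small suite file that defines the parameter. A sketch (class name and value are made up):

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="DebugSuite">
    <parameter name="param1" value="someRealValue"/>
    <test name="SingleTest">
        <classes>
            <class name="com.example.ExampleTest"/>
        </classes>
    </test>
</suite>

Right-clicking and debugging that suite picks up the value for param1 without touching the test code.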
Is there a way to (easily) generate an HTML report that contains the test results? I am currently using JUnit in addition to Selenium for testing web app UIs.
PS: Given the project structure, I am not supposed to use Ant :(
I found the above answers quite useful, but not really general purpose; they all need some other major build system like Ant or Maven.
I wanted to generate a report with a simple one-shot command that I could call from anything (from a build, a test, or just myself), so I created junit2html, which can be found here: https://github.com/inorton/junit2html
You can install it by doing:
pip install junit2html
Alternatively, for those using the Maven build tool, there is a plugin called Surefire Report.
If you could use Ant then you would just use the JUnitReport task as detailed here: http://ant.apache.org/manual/Tasks/junitreport.html, but you mentioned in your question that you're not supposed to use Ant.
I believe that task merely transforms the XML report into HTML so it would be feasible to use any XSLT processor to generate a similar report.
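For instance, a bare-bones JAXP sketch (stylesheet path and file names are placeholders; note that Ant's junit-noframes.xsl expects the aggregated <testsuites> format, so individual TEST-*.xml files may need to be merged or wrapped first):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class JUnitXmlToHtml {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet once, then push the JUnit XML report through it
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("junit-noframes.xsl"));
        transformer.transform(new StreamSource("TESTS-TestSuites.xml"),
                new StreamResult("report.html"));
    }
}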
Alternatively, you could switch to using TestNG ( http://testng.org/doc/index.html ) which is very similar to JUnit but has a default HTML report as well as several other cool features.
You can easily do this via Ant. Here is a build.xml file for doing this:
<project name="genTestReport" default="gen" basedir=".">
    <description>
        Generate the HTML report from JUnit XML files
    </description>

    <target name="gen">
        <property name="genReportDir" location="${basedir}/unitTestReports"/>
        <delete dir="${genReportDir}"/>
        <mkdir dir="${genReportDir}"/>
        <junitreport todir="${basedir}/unitTestReports">
            <fileset dir="${basedir}">
                <include name="**/TEST-*.xml"/>
            </fileset>
            <report format="frames" todir="${genReportDir}/html"/>
        </junitreport>
    </target>
</project>
This will find files with the format TEST-*.xml and generate reports into a folder named unitTestReports.
To run this (assuming the above file is called buildTestReports.xml) run the following command in the terminal:
ant -buildfile buildTestReports.xml
The JUnit XML format is used outside of the Java/Maven/Ant world as well.
Jenkins with http://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin is a solution.
For a one-shot solution I have found this tool that does the job:
https://www.npmjs.com/package/junit-viewer
junit-viewer --results=surefire-reports --save=file_location.html
--results= is the directory with the XML files (test reports)
I found xunit-viewer, which deprecates the junit-viewer mentioned by @daniel-kristof-kiss.
It is very simple, automatically and recursively collects all relevant files in the Ant JUnit XML format, and creates a single HTML file with filtering and other sweet features.
I use it to upload test results from Travis builds as Travis has no other support for collecting standard formatted test results output.
There are multiple options available for generating HTML reports for Selenium WebDriver scripts.
1. Use the JUnit TestWatcher class for creating your own Selenium HTML reports
The JUnit TestWatcher class allows overriding the failed() and succeeded() methods, which are called automatically when JUnit tests fail or pass.
In particular, it allows overriding the following methods:
protected void failed(Throwable e, Description description)
failed() is invoked when a test fails.
protected void finished(Description description)
finished() is invoked when a test method finishes (whether passing or failing).
protected void skipped(AssumptionViolatedException e, Description description)
skipped() is invoked when a test is skipped due to a failed assumption.
protected void starting(Description description)
starting() is invoked when a test is about to start.
protected void succeeded(Description description)
succeeded() is invoked when a test succeeds.
See below sample code for this case:
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TestClass2 extends WatchManClassConsole {

    @Test public void testScript1() {
        assertTrue(1 < 2);
    }

    @Test public void testScript2() {
        assertTrue(1 > 2);
    }

    @Test public void testScript3() {
        assertTrue(1 < 2);
    }

    @Test public void testScript4() {
        assertTrue(1 > 2);
    }
}
import org.junit.Rule;
import org.junit.rules.TestRule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class WatchManClassConsole {

    @Rule public TestRule watchman = new TestWatcher() {

        @Override public Statement apply(Statement base, Description description) {
            return super.apply(base, description);
        }

        @Override protected void succeeded(Description description) {
            System.out.println(description.getDisplayName() + " " + "success!");
        }

        @Override protected void failed(Throwable e, Description description) {
            System.out.println(description.getDisplayName() + " " + e.getClass().getSimpleName());
        }
    };
}
2. Use the Allure Reporting framework
The Allure framework can help with generating HTML reports for your Selenium WebDriver projects.
The reporting framework is very flexible and it works with many programming languages and unit testing frameworks.
You can read everything about it at http://allure.qatools.ru/.
You will need the following dependencies and plugins added to your pom.xml file:
the Maven Surefire plugin
aspectjweaver
the Allure JUnit adapter
See more details including code samples on this article:
http://test-able.blogspot.com/2015/10/create-selenium-html-reports-with-allure-framework.html
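As a rough sketch of the pom.xml additions (artifact coordinates and versions below are from the Allure 1.x era and are assumptions; check the Allure documentation for the current ones):

<dependency>
    <groupId>ru.yandex.qatools.allure</groupId>
    <artifactId>allure-junit-adaptor</artifactId>
    <version>1.4.16</version>
</dependency>

<!-- Surefire runs the tests with the AspectJ weaver attached so Allure can record steps and attachments -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <argLine>-javaagent:${settings.localRepository}/org/aspectj/aspectjweaver/1.8.7/aspectjweaver-1.8.7.jar</argLine>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjweaver</artifactId>
            <version>1.8.7</version>
        </dependency>
    </dependencies>
</plugin>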
I have created a JUnit parser/viewer that runs directly in the browser. It supports conversion to JSON and the HTML report can be easily reused.
https://lotterfriends.github.io/online-junit-parser/
If you are still missing a feature feel free to create an issue on Github. :)
I'm trying to put together my first data-driven test framework that runs tests through Selenium Grid/WebDriver on multiple browsers. Right now, I have each test case in its own class, and I parametrize the browser, so each test case runs once with each browser.
Is this common in big test frameworks? Or should each test case be copied and fine-tuned for each browser in its own class? So, if I'm testing Chrome, Firefox, and IE, should there be classes for each, like "TestCase1Chrome", "TestCase1FireFox", "TestCase1IE"? Or just "TestCase1", parametrized to run three times, once per browser? Just wondering how others do it.
Parameterizing the tests into a single class per test case makes it easier to maintain the non-browser-specific code, while duplicating classes, one per browser, makes it easier to maintain the browser-specific code. An example of browser-specific code: clicking an item. With ChromeDriver, you cannot click in the middle of some elements, whereas with FirefoxDriver you can. So you potentially need two different blocks of code just to click an element (when it's not clickable in the middle).
For those of you that are employed QA Engineers that use Selenium, what would be best practice here?
I am currently working on a project which runs around 75k - 90k tests on a daily basis. We pass the browser as a parameter to the tests, for the following reasons:
As you mentioned in your question, this helps in maintenance.
We don't see much browser-specific code. If you have too much browser-specific code, I would say there is a problem with the WebDriver itself, because one of the advantages of Selenium/WebDriver is that you write the code once and run it against any supported browser.
The difference I see between my code structure and the one you mentioned in the question is that I don't have a test class for each test case. Tests are divided based on the features that I test, and each feature has a class. That class holds all the tests as methods. I use TestNG so that these methods can be invoked in parallel. Maybe this won't suit your AUT.
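(For reference, running the methods in parallel is just a suite-level setting in TestNG; the package name and thread count below are made up:)

<suite name="FeatureTests" parallel="methods" thread-count="5">
    <test name="AllFeatures">
        <packages>
            <package name="com.example.features.*"/>
        </packages>
    </test>
</suite>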
If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition will force you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply specific browser workarounds without polluting the test code
Actually, this seems to be your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, etc. So, separately from test code, I created an enum like this:
public enum Environments {

    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return this.capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It's easy to modify and add environments whenever you need to. As you can see, I am using this to create and retrieve the DesiredCapabilities that will later be used to create a specific WebDriver.
In order to make the tests run for all the defined environments, I used JUnit's (4.10 in my case) org.junit.experimental.theories:
@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    @DataPoints
    public static Environments[] environments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment), selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated with @Theory (not @Test, like normal JUnit tests) and are passed a parameter. Each test will then run for all the defined values of this parameter, which should be an array of values annotated with @DataPoints. Also, you should use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I get the driver with the specific capabilities through the Rule, too. Though you could use the following code right in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that by having it in the Rule you write the code once and use it for all your tests.
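I haven't shown MySeleniumRule here; a stripped-down sketch of such a rule, based on JUnit's ExternalResource, might look like this (the hub URL and credential stubs are made up):

import java.net.MalformedURLException;
import java.net.URL;
import java.util.EnumMap;
import java.util.Map;

import org.junit.rules.ExternalResource;
import org.openqa.selenium.remote.RemoteWebDriver;

public class MySeleniumRule extends ExternalResource {

    private final Map<Environments, RemoteWebDriver> drivers = new EnumMap<Environments, RemoteWebDriver>(Environments.class);

    // Creates one RemoteWebDriver per environment on first use and caches it
    public RemoteWebDriver driverFor(final Environments environment) {
        RemoteWebDriver driver = drivers.get(environment);
        if (driver == null) {
            try {
                driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), environment.capabilities());
            } catch (MalformedURLException e) {
                throw new IllegalStateException(e);
            }
            drivers.put(environment, driver);
        }
        return driver;
    }

    public String getUserName() {
        return "testuser"; // in practice, read from a configuration file
    }

    public String getUserPassword() {
        return "secret"; // in practice, read from a configuration file
    }

    @Override
    protected void after() {
        // Quit all browsers opened by this test
        for (RemoteWebDriver driver : drivers.values()) {
            driver.quit();
        }
        drivers.clear();
    }
}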
As for the Page class, it is where I put all the code that uses the driver's functionality (finding an element, navigating, etc.). This way the test code stays neat and clear, and again you write it once and use it in all your tests.
So, this is the solution for the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a special package where I keep all the code for browser-specific workarounds. It consists of an abstract class, e.g. BrowserSpecific, that contains the common code which happens to be different (or buggy) in some browser. In the same package I have classes specific to every browser used in the tests, and each of them extends BrowserSpecific.
Here is how it works for the Chrome driver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // This is the Chrome workaround
        String script = MessageFormat.format("window.scrollTo(0, {0});", button.getLocation().y);
        driver.executeScript(script);
        // Followed by common behaviour of all the browsers
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
button.click();
This way I can easily identify in my common code where a workaround has been applied, and I keep the workarounds isolated from the common code. I find it easy to maintain, as the bugs usually get fixed eventually, at which point the workarounds can be changed or eliminated.
One last word about executing the tests. As you are going to use Selenium Grid, you will want to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since v. 4.7).
We use TestNG in our organization, and we use the parameter option that TestNG provides to specify the environment, i.e. the browser to use, the machine to run on, and any other configuration that is required. The browser name is sent through the XML file that controls what needs to run and where, and it is set as a global variable. As an extra, we have custom annotations which can override these global variables: if a test is specifically meant to run only on Chrome and no other browser, then we specify that on the custom annotation. So even if the parameter says run on Firefox, if the test is annotated with Chrome, it will always run on Chrome.
I don't believe making one class for each browser is a good idea. Imagine the flow changes, or there is a small tweak here and there, and you have three classes to change instead of one. And if the number of browsers increases, that's one more class each time.
What I would suggest is to extract out the code that is browser-specific. So if the click behavior is browser-specific, override it to do the appropriate checks or failure handling based on the browser.
I do it like this, but keep in mind that this is pure WebDriver, without Grid or RC in mind:
// Utility class snippet
// Test classes import this with: import static utility.*;

public static WebDriver driver;

public static void initializeBrowser( String type ) {
    if ( type.equalsIgnoreCase( "firefox" ) ) {
        driver = new FirefoxDriver();
    } else if ( type.equalsIgnoreCase( "ie" ) ) {
        driver = new InternetExplorerDriver();
    }
    driver.manage().timeouts().implicitlyWait( 10000, TimeUnit.MILLISECONDS );
    driver.manage().window().setPosition( new Point(200, 10) );
    driver.manage().window().setSize( new Dimension(1200, 800) );
}
Now, using JUnit 4.11+ your parameters file needs to look something like this:
firefox, test1, param1, param2
firefox, test2, param1, param2
firefox, test3, param1, param2
ie, test1, param1, param2
ie, test2, param1, param2
ie, test3, param1, param2
Then, using a single .CSV parameterized test class (that you intend to start multiple browser types with), in the @Before annotated method, do this:
If the current test is the first test of this browser type and no already-open windows exist, open a new browser window of the requested type.
If a browser is already open and the browser type is the same, just re-use the same driver object.
If a browser of a different type than the current test requires is open, close the browser and re-open a browser of the correct type.
Of course, my answer doesn't tell you how to handle the parameters: I leave that for you to figure out.
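A rough sketch of that @Before logic (hypothetical; browserType is the first CSV column injected by the parameterized runner, and driver / initializeBrowser() come from the utility class above via the static import):

// Inside the parameterized test class
private static String currentBrowserType;

@Before
public void ensureCorrectBrowser() {
    if ( driver == null ) {
        // First test of the run: open a browser of the requested type
        initializeBrowser( browserType );
        currentBrowserType = browserType;
    } else if ( !browserType.equalsIgnoreCase( currentBrowserType ) ) {
        // A browser of a different type is open: close it and open the right one
        driver.quit();
        initializeBrowser( browserType );
        currentBrowserType = browserType;
    }
    // Otherwise a browser of the same type is already open: just re-use the driver object
}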