CodenameOne Test Recorder: how to run the generated test

Because I am considering recommending CodenameOne for our development, I am trying to investigate it more deeply. I just tried out the Test Recorder, which generated a test class.
Now my question: how do I use this test class? Do I have to call the test method from the existing UI code, e.g. using a button to start it?
Generated code:
public class RegisterUserATest extends AbstractTest {
    public boolean runTest() throws Exception {
        clickButtonByName("Register");
        keyPress(16);
        keyPress(65);
        waitFor(112);
        keyPress(65);
        setText("Name", "A");
        keyPress(16);
        keyPress(65);
        waitFor(113);
        keyPress(16);
        waitFor(1);
        keyPress(97);
        setText("Email", "");
        setText("Password", "A");
        clickButtonByName("Register");
        return true;
    }
}
I think the solution is very easy but I cannot see it.

If this is on NetBeans, right-click the project and select "Test". On IntelliJ IDEA it's under Codename One -> Run Tests.
Notice that the latter has a bug in it that will be fixed in the release coming tomorrow (October 7th, 2016).

Related

Plugin that runs tests based on file of user

I am developing a plugin for IntelliJ for teaching purposes: students write some code, the teacher writes tests, and the students can run those tests to see whether they are doing everything correctly. It would be great if I could get the file the user is writing in as a Java class, so that I can call that class's functions from another function and test it as if I had written it myself.
What I have as of now:
In the Main Toolbar I have a button where the students should be able to run the tests. I have a class that extends AnAction, but I have no idea what I should write in it:
@Override
public void actionPerformed(AnActionEvent e) {
}
I have been going through the IntelliJ documentation for some time now, but so far I am not getting any further. I hope the experienced developers here can maybe give me a hint or two.
Thanks a lot in advance :)
If I understand correctly, the students would be programming within an IntelliJ project?
Then you can get the path to the project that they are working on from the AnActionEvent:
Project project = event.getProject();
String projectBasePath = project.getBasePath();
You could use this to send the entire src folder to your machine and do whatever you need to do with it there.
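If what you actually need is the specific file the student currently has open (rather than the whole project), a minimal sketch using the IntelliJ Platform's CommonDataKeys could look like this (adjust to your SDK version; the variable names are just placeholders):
import com.intellij.openapi.actionSystem.AnActionEvent;
import com.intellij.openapi.actionSystem.CommonDataKeys;
import com.intellij.openapi.vfs.VirtualFile;
import com.intellij.psi.PsiFile;

@Override
public void actionPerformed(AnActionEvent event) {
    // The file currently open/selected in the editor, as a file on disk...
    VirtualFile file = event.getData(CommonDataKeys.VIRTUAL_FILE);
    // ...and as a parsed PSI tree, if you want to inspect the class structure.
    PsiFile psiFile = event.getData(CommonDataKeys.PSI_FILE);
    if (file != null) {
        String pathToCompile = file.getPath();
        // e.g. hand just this path to the ProcessBuilder approach shown below
    }
}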
But, it also sounds like you would want the students to run the test functions on their side via the plugin. In that case, one option that I know of is to again use the project.getBasePath(), or get them to select a file using a GUI, and then use ProcessBuilder to compile, run, test, etc their Java classes. You can run any Windows / shell command this way and pipe the output into the IDE, or your own tool window.
public void actionPerformed(AnActionEvent event) {
    Project project = event.getProject();
    String projectBasePath = project.getBasePath();
    try {
        ProcessBuilder pb = new ProcessBuilder();
        pb.directory(new File(projectBasePath));        // directory() expects a File, not a String
        pb.command("cmd", "/c", "javac src\\*.java");   // "/c" so cmd exits when javac is done; escape the backslash
        pb.redirectErrorStream(true);
        Process process = pb.start();
        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        int exitCode = process.waitFor();
        System.out.println("\nExited with error code : " + exitCode);
        // ... anything else you need to do
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}
Let me know if this makes sense - maybe I can help you out more if you give me more specific questions.

How to tell PowerBuilder to pass options to a JVM when starting?

What I want to do?
I want to create and consume Java objects in PowerBuilder and call methods on them. This should happen with as little overhead as possible.
I do not want to consume Java web services!
So I have a working sample in which I can create a Java object, call a method on this object and output the result from the called method.
Everything is working as expected. I'm using Java 1.8.0_31.
But now I want to attach my java IDE (IntelliJ) to the running JVM (started by PowerBuilder) to debug the java code which gets called by PowerBuilder.
And now my question.
How do I tell PowerBuilder to add special options when starting the JVM?
In special I want to add the following option(s) in some way:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
The JVM is created like following:
LONG ll_result
inv_java = CREATE JavaVM
ll_result = inv_java.CreateJavaVM("C:\Development\tms java\pbJavaTest", FALSE)
CHOOSE CASE ll_result
CASE 1
CASE 0
CASE -1
MessageBox ( "", "jvm.dll was not found in the classpath.")
CASE -2
MessageBox ( "", "pbejbclient90.jar file was not found." )
CASE ELSE
MessageBox ( "", "Unknown result (" + String (ll_result ) +")" )
END CHOOSE
In the PowerBuilder help I found something about overriding the static registry classpath. There is something written about custom properties which sounds like what I'm looking for.
But there's no example on how to add JVM options to override default behavior.
Does anyone have a clue on how to tell PowerBuilder to use my options?
Or does anyone have any advice which could guide me in the right direction?
Update 1
I found an old post which solved my initial issue.
If someone else wants to know how it works, take a look at this post:
http://nntp-archive.sybase.com/nntp-archive/action/article/%3C46262213.6742.1681692777#sybase.com%3E
Hi, you need to set some windows registry entries.
Under HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\Powerbuilder\9.0\Java, there
are two folders: PBIDEConfig and PBRTConfig. The first one is used when
you run your application from within the IDE, and the latter is used
when you run your compiled application. Those two folders can have
PBJVMconfig and PBJVMprops folders within them.
PBJVMconfig is for JVM configuration options such as -Xms. You have to
specify incremental key values starting from "0", counting up by one, plus one special
key, "Count", that tells PowerBuilder how many options there are to enumerate.
PBJVMprops is for all -D options. You do not need to specify -D for
PBJVMProps, just the name of the property and its value, and as many
properties as you wish.
Let me give some examples:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBIDEConfig\PBJVMprops]
"java.security.auth.login.config"="auth.conf"
"user.language"="en"
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBRTConfig\PBJVMconfig]
"0"="-client"
"1"="-Xms128m"
"2"="-Xmx512m"
"Count"="3"
[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBRTConfig\PBJVMprops]
"java.security.auth.login.config"="auth.conf"
"user.language"="en"
Regards,
Gokhan Demir
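So, applied to my case, the debug agent option from my original question should presumably go in like this (untested sketch, same key layout as above; use PBIDEConfig instead of PBRTConfig if you start the application from the PowerBuilder IDE):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Sybase\PowerBuilder\9.0\Java\PBRTConfig\PBJVMconfig]
"0"="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
"Count"="1"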
But now there's another issue...
PB isn't able to create EJB proxies for my sample class (which is really simple) with Java 1.8.0_31. They were created fine with the default version, which is 1.6.0_24.
public class Simple
{
    public Simple()
    {
    }

    public static String getValue()
    {
        return "blubber";
    }

    public int getInt32Value()
    {
        return 123456;
    }

    public double getDoubleValue()
    {
        return 123.123;
    }

    public static void main(String[] args)
    {
        System.out.println(Simple.getValue());
    }
}
The error is the following. :D
---------- Deploy: Deploy of project p_genapp_ejbclientproxy (15:35:18)
Retrieving PowerBuilder Proxies from EJB...
Generation Errors: Error: class not found: (
Deployment Error: No files returned for package/component 'Simple'. Error code: Unknown. Proxy was not created.
Done.
---------- Finished Deploy of project p_genapp_ejbclientproxy (15:35:19)
So this whole approach isn't an option, because we do not want to change the Java settings in PB back and forth just to generate new EJB proxies for changed Java objects in the future...
So one option to test will be creating COM wrappers for the Java classes in order to use them in PB...

Mockolate Verify Error: Illegal override.. after Flex SDK 4.10 update

Since we upgraded the Flex SDK in our application to 4.10, we've been running into VerifyErrors while running unit tests that use mockolate.
They seem to occur when mocking an interface where a ByteArray is used in a method signature.
Example interface:
public interface IFileSystemHelper {
    function loadFileContents(path:String):ByteArray;
}
Example test class:
public class SomeTest {
    [Rule]
    public var mockolateRule:MockolateRule = new MockolateRule();

    [Mock]
    public var fileHelper:IFileSystemHelper;

    public function SomeTest() {
    }

    [Test]
    public function testMethod():void {
        // ...
    }
}
When compiling and running the test with flexmojos 6.0.1 the following error is thrown:
VerifyError: Error #1053: Illegal override of
IFileSystemHelper8F2B5D281827800A824B85B588C6F2A08AE814ED in
mockolate.generated.IFileSystemHelper8F2B5D281827800A824B85B588C6F2A08AE814ED
My initial suspicion was an SDK version problem with playerglobal (or airglobal in our case), so I recompiled mockolate (and flexunit) with SDK 4.10, without any result.
The only thing that seems to work is to remove the ByteArray type from the method signature... but that's not really an option :-) (and this has never been a problem before)
Is there anyone who has had a similar issue?
Thanks
This problem usually occurs when different parts of your application are compiled with different versions of the SDK.
I would recommend having a look at the output of "mvn dependency:tree", as this should list all dependencies (direct and transitive ones). Perhaps this will help you find where the wrong version is coming from.

Selenium Grid on Multiple Browsers: should each test case have separate class for each browser?

I'm trying to put together my first data-driven test framework that runs tests through Selenium Grid/WebDriver on multiple browsers. Right now, I have each test case in its own class, and I parametrize the browser, so each test case runs once with each browser.
Is this common in big test frameworks? Or should each test case be copied and fine-tuned for each browser in its own class? So, if I'm testing Chrome, Firefox, and IE, should there be classes for each, like "TestCase1Chrome", "TestCase1FireFox", "TestCase1IE"? Or just "TestCase1", parametrized to run three times, once with each browser? Just wondering how others do it.
Parameterizing the tests into a single class per test case makes it easier to maintain the non-browser-specific code, while duplicating classes, one for each browser, makes it easier to maintain the browser-specific code. By browser-specific code I mean, for example, clicking an element: on ChromeDriver, you cannot click in the middle of some elements, whereas on FirefoxDriver you can. So you potentially need two different blocks of code just to click an element (when it's not clickable in the middle).
For those of you that are employed QA Engineers that use Selenium, what would be best practice here?
I am currently working on a project which runs around 75k - 90k tests on a daily basis. We pass the browser as a parameter to the tests. Reasons being:
As you mentioned in your question, this helps with maintenance.
We don't see too much browser-specific code. If you have too much browser-specific code, then I would say there is a problem with the webdriver itself, because one of the advantages of selenium/webdriver is that you write the code once and run it against any supported browser.
The difference I see between my code structure and the one you mentioned in the question is that I don't have a test class for each test case. Tests are divided based on the features that I test, and each feature has a class. That class holds all the tests as methods. I use TestNG so that these methods can be invoked in parallel. Maybe this won't suit your AUT.
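To make this concrete, here is a minimal sketch of that kind of feature class (the class and parameter names are made up; it assumes TestNG with the browser and Grid URL supplied from testng.xml):
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.*;

public class LoginFeatureTest {

    private RemoteWebDriver driver;

    @Parameters({"browser", "gridUrl"})
    @BeforeClass
    public void startBrowser(String browser, String gridUrl) throws MalformedURLException {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setBrowserName(browser);
        driver = new RemoteWebDriver(new URL(gridUrl), capabilities);
    }

    @Test
    public void userCanLogIn() {
        // one scenario of the feature, written once, runs on whatever browser was passed in
    }

    @Test
    public void wrongPasswordIsRejected() {
        // another scenario of the same feature, in the same class
    }

    @AfterClass(alwaysRun = true)
    public void stopBrowser() {
        if (driver != null) {
            driver.quit();
        }
    }
}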
If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition will force you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply specific browser workarounds without polluting the test code
Actually, this seems to be your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, etc. So, separately from test code, I created an enum like this:
public enum Environments {
    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return this.capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It's easy to modify and add environments whenever you need to. As you can notice, I am using this to create and retrieve the DesiredCapabilities that later will be used to create a specific WebDriver.
In order to make the tests run for all the defined environments, I used JUnit's (4.10 in my case) org.junit.experimental.theories:
@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    @DataPoints
    public static Environments[] environments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment), selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated as @Theory (not as @Test, like in normal JUnit tests) and are passed a parameter. Each test will then run for all the defined values of this parameter, which should be an array of values annotated as @DataPoints. Also, you should use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting there all the necessary plumbing. As you can see, I get the driver with the specific capabilities through the Rule, too. Though you could use the following code right in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that having it in the Rule you write the code once and use it for all your tests.
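A minimal sketch of what such a rule could look like, built on org.junit.rules.ExternalResource (the Grid URL and the credential lookup are placeholder assumptions, not the exact code I use):
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import org.junit.rules.ExternalResource;
import org.openqa.selenium.remote.RemoteWebDriver;

public class MySeleniumRule extends ExternalResource {

    private final Map<Environments, RemoteWebDriver> drivers = new HashMap<Environments, RemoteWebDriver>();

    // Lazily creates one RemoteWebDriver per environment and caches it for the running test.
    public RemoteWebDriver driverFor(final Environments environment) {
        RemoteWebDriver driver = drivers.get(environment);
        if (driver == null) {
            try {
                driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), environment.capabilities());
            } catch (MalformedURLException e) {
                throw new IllegalStateException(e);
            }
            drivers.put(environment, driver);
        }
        return driver;
    }

    // Placeholder credential lookup, e.g. from system properties.
    public String getUserName() {
        return System.getProperty("test.user");
    }

    public String getUserPassword() {
        return System.getProperty("test.password");
    }

    @Override
    protected void after() {
        for (RemoteWebDriver driver : drivers.values()) {
            driver.quit();
        }
        drivers.clear();
    }
}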
As for Page class, it is a class where I put all the code that uses driver's functionality (find an element, navigate, etc.). This way, again, the test code stays neat and clear and, again, you write it once and use it in all your tests.
So, this is the solution for the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a special package where I keep all the code of browser specific workarounds. It consists of an abstract class, e.g. BrowserSpecific, that contains the common code which happens to be different (or have a bug) in some browser. In the same package I have classes specific for every browser used in tests and each of them extends BrowserSpecific.
Here is how it works for the Chrome driver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // This is the Chrome workaround
        String script = MessageFormat.format("window.scrollTo(0, {0});", button.getLocation().y);
        driver.executeScript(script);
        // Followed by common behaviour of all the browsers
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
button.click();
This way, I can easily identify in my common code where a workaround has been applied, and I keep the workarounds isolated from the common code. I find this easy to maintain, as the underlying bugs usually get fixed eventually and the workarounds can then be changed or removed.
One last word about executing the tests. As you are going to use Selenium Grid you will want to use the possibility to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since v. 4.7).
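For example, plain JUnit can run test classes in parallel via the experimental ParallelComputer (build tools such as Maven Surefire expose the same thing through configuration); a small sketch:
import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelSuite {
    public static void main(String[] args) {
        // Runs the listed test classes concurrently with each other.
        Result result = JUnitCore.runClasses(ParallelComputer.classes(),
                MyWebComponentTestClassIT.class /*, other test classes */);
        System.out.println("Failures: " + result.getFailureCount());
    }
}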
We use TestNG in our organization, and we use the parameter option that TestNG provides to specify the environment, i.e. the browser to use, the machine to run on and any other configuration that is required. The browser name is sent through the XML file which controls what needs to run and where; it is set as a global variable. As an extra, we have our own custom annotations which can override these global variables, i.e. if a test is very specifically only to be run on Chrome and no other browser, then we specify that on the custom annotation. So even if the parameter says run on FF, if the test is annotated with Chrome, it will always run on Chrome.
I somehow believe making one class for each browser is not a good idea. Imagine the flow changes, or there is a small tweak here and there, and you have 3 classes to change instead of one. And if the number of browsers increases, you need yet another class.
What I would suggest is to extract out the code that is browser-specific. So, if the click behaviour is browser-specific, then override it to do the appropriate checks or failure handling based on the browser.
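A rough sketch of the custom-annotation idea mentioned above (the names are hypothetical; it assumes TestNG, which can inject the reflected test method into a @BeforeMethod):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

// Hypothetical marker annotation: pins a test method to one browser.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ForceBrowser {
    String value();
}

public class CheckoutTest {

    private String browser; // normally injected from the testng.xml "browser" parameter

    @BeforeMethod
    public void pickBrowser(Method testMethod) {
        ForceBrowser forced = testMethod.getAnnotation(ForceBrowser.class);
        if (forced != null) {
            browser = forced.value(); // the annotation wins over the global parameter
        }
        // ... create the driver for 'browser' here
    }

    @ForceBrowser("chrome")
    @Test
    public void chromeOnlyScenario() {
        // runs on Chrome regardless of what the suite-level parameter says
    }
}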
I do it like this but keep in mind that this is pure WebDriver without the Grid or RC in mind:
// Utility class snippet
// Test classes import this with: import static utility.*;
public static WebDriver driver;

public static void initializeBrowser( String type ) {
    if ( type.equalsIgnoreCase( "firefox" ) ) {
        driver = new FirefoxDriver();
    } else if ( type.equalsIgnoreCase( "ie" ) ) {
        driver = new InternetExplorerDriver();
    }
    driver.manage().timeouts().implicitlyWait( 10000, TimeUnit.MILLISECONDS );
    driver.manage().window().setPosition(new Point(200, 10));
    driver.manage().window().setSize(new Dimension(1200, 800));
}
Now, using JUnit 4.11+ your parameters file needs to look something like this:
firefox, test1, param1, param2
firefox, test2, param1, param2
firefox, test3, param1, param2
ie, test1, param1, param2
ie, test2, param1, param2
ie, test3, param1, param2
Then, using a single .CSV parameterized test class (that you intend to start multiple browser types with), do this in the @Before annotated method (a rough sketch follows below):
If the current parameterized test is the first test of this browser type, and no already-open window exists, open a new browser window of the current type.
If a browser is already open and the browser type is the same, then just re-use the same driver object.
If a browser of a different type than the current test is open, then close it and re-open a browser of the correct type.
Of course, my answer doesn't tell you how to handle the parameters: I leave that for you to figure out.
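A rough sketch of that @Before logic, assuming the browser type is the first CSV column and reusing the static driver / initializeBrowser(...) helper from above (the class and field names here are placeholders):
import org.junit.Before;
import static utility.Utility.*; // hypothetical name for the helper class shown above

public class BrowserReuseTestBase {

    private static String currentBrowserType; // remembers which browser is currently open

    protected String browserType; // injected from the first CSV column by the parameterized runner

    @Before
    public void ensureCorrectBrowser() {
        if (driver == null) {
            initializeBrowser(browserType);            // nothing open yet: start the right browser
        } else if (!browserType.equalsIgnoreCase(currentBrowserType)) {
            driver.quit();                             // wrong browser open: close it...
            initializeBrowser(browserType);            // ...and start the one this test needs
        }                                              // same type already open: just reuse it
        currentBrowserType = browserType;
    }
}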

CakePhp ControllerTest seems to be deleting Model Table

I'm just starting to use PHPUnit with CakePHP 2.0. I run my first controller test against a very simple model, Items (id, title):
./Console/cake test app Controller/ItemsController
I haven't added any tests other than those from 'cake bake'. The tests pass; however, the run blows away the associated items table.
I have the latest 2.x version.
Dan,
I ran into this problem myself. In your test class add:
class TestControllerTest extends ControllerTestCase {
    public $dropTables = false;
}
Did you make the correct testing DB configuration in app/Config/database.php?
There is a "$test" property there that tells Cake which database to use for testing.
If it is the same as your default configuration (or nonexistent), it will point to your default database.
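For reference, a minimal app/Config/database.php with a separate test datasource might look roughly like this (MySQL and the credentials are just placeholders):
<?php
class DATABASE_CONFIG {

    public $default = array(
        'datasource' => 'Database/Mysql',
        'host'       => 'localhost',
        'login'      => 'app_user',
        'password'   => 'secret',
        'database'   => 'my_app',
    );

    // Point this at a throwaway database: the test runner creates and drops
    // tables from fixtures here, so it must not be your real database.
    public $test = array(
        'datasource' => 'Database/Mysql',
        'host'       => 'localhost',
        'login'      => 'app_user',
        'password'   => 'secret',
        'database'   => 'my_app_test',
    );
}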