ToolProvider.getSystemJavaCompiler() returns null - java-compiler-api

First, since I am seeing a lot of questions about the use of the Java Compiler API, I want to clarify that I am creating an online simulation builder that takes too many inputs from the user to pre-create classes. That is why I am using the Java compiler to build the classes from the user's inputs.
As for my problem, I have tested with some basic compiler programs, and am presently working off the code found here: Dynamic Compiling Without Create Physical File
The compilation of my code is successful; however, when I run it,
ToolProvider.getSystemJavaCompiler();
returns null.
From other answers I understand one cause might be that the default java.home points to a JRE rather than a JDK, so I added a line that sets java.home to my JDK installation:
System.setProperty("java.home", "C:\\Program Files (x86)\\Java\\jdk1.7.0_51;");
I have also added tools.jar to the folder with my program, and called the program with tools.jar on the classpath like so:
java -cp ".;tools.jar" Compiler
These approaches have not changed anything. Any ideas about what might be the problem?
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;
import java.util.Locale;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.tools.JavaCompiler.CompilationTask;
import javax.tools.*;

public class Compiler {
    static final Logger logger = Logger.getLogger(Compiler.class.getName());
    static String sourceCode = "class HelloWorld{"
            + "public static void main (String args[]){"
            + "System.out.println (\"Hello, dynamic compilation world!\");"
            + "}"
            + "}";

    public void doCompilation() {
        System.out.println(System.getProperty("java.home"));
        System.setProperty("java.home", "C:\\Program Files (x86)\\Java\\jdk1.7.0_51;");
        System.out.println(System.getProperty("java.home"));
        SimpleJavaFileObject fileObject = new DynamicJavaSourceCodeObject("HelloWorld", sourceCode);
        JavaFileObject javaFileObjects[] = new JavaFileObject[]{fileObject};
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println(compiler);
        StandardJavaFileManager stdFileManager = compiler.getStandardFileManager(null, Locale.getDefault(), null);
        ...
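For reference, here is a minimal diagnostic sketch (the CompilerCheck class and its messages are mine, not part of the question's code) that prints which runtime is actually executing the program and whether a system compiler is available; getSystemJavaCompiler() is expected to return null when the program runs on a plain JRE rather than a JDK:
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
    public static void main(String[] args) {
        // Which installation actually launched this process? A JRE here usually explains the null.
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.version = " + System.getProperty("java.version"));

        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null) {
            // Typically happens when the program was started with a JRE's java.exe;
            // launching it with the JDK's java.exe (e.g. ...\jdk1.7.0_51\bin\java.exe) usually resolves it.
            System.err.println("No system Java compiler is available on this runtime.");
        } else {
            System.out.println("Compiler found: " + compiler.getClass().getName());
        }
    }
}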

Related

How do I use the JUnit 5 platform launcher API to discover tests from a queue?

I'm looking to distribute tests to multiple instances of the JUnit 5 standalone console, where each instance reads off a queue. Each instance of the runner would use the same test.jar on the classpath, so I'm not trying to distribute the bytecode of the actual tests here, just the names of the tests / filter pattern strings.
From the JUnit 5 advanced topics doc, I think the appropriate place to extend JUnit 5 for this is the platform launcher API. I cobbled this snippet together largely from the sample code in the guide. I think this is what I need to write, but I'm concerned I'm oversimplifying the effort involved here:
// keep pulling test classes off the queue until it's empty
while (myTestQueue.isNotEmpty()) {
    String classFromQueue = myTestQueue.next(); // returns "org.myorg.foo.fooTests"
    LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
            .selectors(selectClass(classFromQueue)).build();
    SummaryGeneratingListener listener = new SummaryGeneratingListener();
    try (LauncherSession session = LauncherFactory.openSession()) {
        Launcher launcher = session.getLauncher();
        launcher.registerTestExecutionListeners(listener);
        TestPlan testPlan = launcher.discover(request);
        launcher.execute(testPlan);
    }
    TestExecutionSummary summary = listener.getSummary();
    addSummary(summary);
}
Questions:
Will repeatedly discovering and executing in a while loop violate the normal test lifecycle? I'm a little fuzzy on whether discovery is a one-time thing that's supposed to happen before all executions.
If I assume that it's OK to repeatedly discover and then execute, I see that HierarchicalTestEngine may be an even better place to read from a queue, since it seems to be used for implementing parallel execution. Is this more suitable for my use case? Would the implementation be essentially the same as what I have above, except that maybe I wouldn't need to handle accumulating test summaries?
Approaches I do not want to take:
I am not looking to use the new features of JUnit 5 aimed at parallelizing test execution within the same JVM. I'm also not looking to divide the tests or classes up ahead of time, i.e. starting each console runner instance with a predetermined subset of tests.
Short Answer
The code posted in the question (loosely) illustrates a valid approach. There is no need to create a custom engine: leveraging the platform launcher API to repeatedly discover and execute tests does work. It's worth highlighting that you do not have to extend JUnit 5 at all. This isn't executed through an extension that you need to register, as I'd originally assumed; you're simply leveraging the platform launcher API to discover and execute tests.
Long Answer
Here is some sample code with a simple queue of test class names that exist on the classpath. While the queue is not empty, an instance of the TestNode class will discover and execute each of the three test classes and write a legacy XML report.
TestNode Code:
package org.sample.node;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.LauncherSession;
import org.junit.platform.launcher.TestPlan;
import org.junit.platform.launcher.core.LauncherConfig;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.reporting.legacy.xml.LegacyXmlReportGeneratingListener;

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.nio.file.Paths;
import java.util.LinkedList;
import java.util.Queue;

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

public class TestNode {

    public void run() throws FileNotFoundException {
        // keep pulling test classes off the queue until it's empty
        Queue<String> queue = getQueue();
        while (!queue.isEmpty()) {
            String testClass = queue.poll();
            LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                    .selectors(selectClass(testClass)).build();
            LauncherConfig launcherConfig = LauncherConfig.builder()
                    .addTestExecutionListeners(new LegacyXmlReportGeneratingListener(
                            Paths.get("target"), new PrintWriter(new FileOutputStream("log.txt"))))
                    .build();
            SummaryGeneratingListener listener = new SummaryGeneratingListener();
            try (LauncherSession session = LauncherFactory.openSession(launcherConfig)) {
                Launcher launcher = session.getLauncher();
                launcher.registerTestExecutionListeners(listener);
                TestPlan testPlan = launcher.discover(request);
                launcher.execute(testPlan);
            }
        }
    }

    private Queue<String> getQueue() {
        Queue<String> queue = new LinkedList<>();
        queue.add("org.sample.tests.Class1");
        queue.add("org.sample.tests.Class2");
        queue.add("org.sample.tests.Class3");
        return queue;
    }

    public static void main(String[] args) throws FileNotFoundException {
        TestNode node = new TestNode();
        node.run();
    }
}
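The TestNode above registers a SummaryGeneratingListener but never reads the results back. If, like the snippet in the question, you also want to accumulate per-class results across queue entries, a small helper along these lines could be used (SummaryAccumulator is a name I made up for this sketch; it is not part of the original answer or of JUnit):
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;

import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Collects one TestExecutionSummary per executed queue entry and prints them once the queue is drained.
public class SummaryAccumulator {

    private final List<TestExecutionSummary> summaries = new ArrayList<>();

    // call right after launcher.execute(testPlan), using the listener registered for that run
    public void record(SummaryGeneratingListener listener) {
        summaries.add(listener.getSummary());
    }

    // call once the while loop has emptied the queue
    public void printAll() {
        PrintWriter out = new PrintWriter(System.out, true);
        for (TestExecutionSummary summary : summaries) {
            summary.printTo(out);         // totals and timing for that test class
            summary.printFailuresTo(out); // stack traces for any failed tests
        }
    }
}
In run(), that would mean creating one accumulator before the loop, calling record(listener) after each execution, and calling printAll() after the loop.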
Tests executed by TestNode
I'm only showing one of the three test classes, since they're identical apart from the class name.
They reside in src/main/java and NOT src/test/java. This is an admittedly weird yet common pattern in Maven for packaging tests into a fat jar.
package org.sample.tests;

import org.junit.jupiter.api.Test;

public class Class1 {

    @Test
    void test1() {
        System.out.println("Class1 Test 1");
    }

    @Test
    void test2() {
        System.out.println("Class1 Test 2");
    }

    @Test
    void test3() {
        System.out.println("Class1 Test 3");
    }
}

Cannot resolve JavaParser

JavaParser.CallContext and JavaParser.FunctionDeclContext do not seem to resolve. This is modeled after page 139 in The Definitive ANTLR 4 Reference.
Am I missing a lib?
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.MultiMap;
import org.antlr.v4.runtime.misc.OrderedHashSet;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.tree.*;
import org.stringtemplate.v4.ST;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Set;
static class FunctionListener extends JavaBaseListener {
    Graph graph = new Graph();
    String currentFunctionName = null;

    public void enterFunctionDecl(JavaParser.FunctionDeclContext ctx) {
        currentFunctionName = ctx.ID().getText();
        graph.nodes.add(currentFunctionName);
    }

    public void exitCall(JavaParser.CallContext ctx) {
        String funcName = ctx.ID().getText();
        // map current function to the callee
        graph.edge(currentFunctionName, funcName);
    }
}
I think I have seen this one, and I don't have enough points to "comment", so let me ask a question first. It looks like you are trying to produce an external version of the AST on page 137: you took the examples and renamed them to match the grammar you had already generated. I'm going to assume that the rest of it is working, or you would have had a lot more errors than this one! Is that the goal? Are you after just the calling methods/classes, or are you after the full homogeneous AST?
This depends on the grammar entry points, which is not as obvious in the book as it seems. You referenced functionDecl, which looks to be a rule in Cymbol.g4 but doesn't exist in Java.g4. So rather than JavaParser.FunctionDeclContext I suggest JavaParser.ClassOrInterfaceDeclarationContext; it should pull up the right method. I will also confess that I don't know what exitCall would map to. I could use some illumination on that myself.
Were you after the whole AST or only the call graph? If the whole AST, I think you can use enterEveryRule or exitEveryRule for that as well, but confirmation would be good.
So start by regenerating your grammar, change your program to reference the rule entry point in the .g4 files, then see if it all works.
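As a rough sketch of that suggestion (the rule, context, and method names below depend on what ANTLR generates from your particular Java.g4, so treat them as placeholders to check against the regenerated JavaParser and JavaBaseListener):
// Sketch only: verify these names against your generated JavaParser / JavaBaseListener.
static class DeclListener extends JavaBaseListener {

    // fires for each class or interface declaration (rule name assumed, as suggested above)
    @Override
    public void enterClassOrInterfaceDeclaration(JavaParser.ClassOrInterfaceDeclarationContext ctx) {
        System.out.println("declaration starting at: " + ctx.getStart().getText());
    }

    // fires for every rule in the parse tree, handy for dumping the whole homogeneous AST
    @Override
    public void enterEveryRule(ParserRuleContext ctx) {
        System.out.println(JavaParser.ruleNames[ctx.getRuleIndex()]);
    }
}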
Thanks

Why is the close() method highlighted as redundant in IntelliJ when using CSVPrinter from apache.commons?

I am using the CSVPrinter class from apache.commons.csv, and I am trying to print some lines to a CSV file. As far as I know, we need to call the close() method on a FileWriter after the writing is done. Based on that assumption, I tried to call CSVPrinter.close(). However, IntelliJ IDEA warns me that this call is redundant. Besides, the examples at https://www.callicoder.com/java-read-write-csv-file-apache-commons-csv/ do not include this call either. I want to know why that method is redundant, and whether everything will be all right if I just use .flush().
Here is an example copied from the website mentioned above.
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class CSVWriter {

    private static final String SAMPLE_CSV_FILE = "./sample.csv";

    public static void main(String[] args) throws IOException {
        try (
                BufferedWriter writer = Files.newBufferedWriter(Paths.get(SAMPLE_CSV_FILE));
                CSVPrinter csvPrinter = new CSVPrinter(writer, CSVFormat.DEFAULT
                        .withHeader("ID", "Name", "Designation", "Company"));
        ) {
            csvPrinter.printRecord("1", "Sundar Pichai ♥", "CEO", "Google");
            csvPrinter.printRecord("2", "Satya Nadella", "CEO", "Microsoft");
            csvPrinter.printRecord("3", "Tim cook", "CEO", "Apple");
            csvPrinter.printRecord(Arrays.asList("4", "Mark Zuckerberg", "CEO", "Facebook"));

            csvPrinter.flush();

            // I added the following line
            csvPrinter.close();
        }
    }
}
As @user207421 explained in the comments:
First: the try-with-resources statement automatically closes its resources at the end of the block.
Second: flush() is redundant before close().
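To make the first point concrete, the try-with-resources block above behaves roughly like the explicit try/finally version below (a simplified sketch; the real expansion also handles suppressed exceptions, and the CSVWriterExpanded class name is just for this illustration):
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CSVWriterExpanded {
    public static void main(String[] args) throws IOException {
        BufferedWriter writer = Files.newBufferedWriter(Paths.get("./sample.csv"));
        try {
            CSVPrinter csvPrinter = new CSVPrinter(writer, CSVFormat.DEFAULT
                    .withHeader("ID", "Name", "Designation", "Company"));
            try {
                csvPrinter.printRecord("1", "Sundar Pichai", "CEO", "Google");
            } finally {
                // this close() always runs, which is why adding your own inside the block is redundant
                csvPrinter.close();
            }
        } finally {
            // the BufferedWriter is closed too, flushing whatever remains in its buffer
            writer.close();
        }
    }
}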

Cucumber JVM cannot find the steps defined

When I try to run this test with a feature file, it complains that it cannot find the steps. I have tried defining them in Java 8 style and in Java 7 style, and using IntelliJ to generate the steps into a MyStepdefs class, but it cannot find them.
I am using version 1.2.4 of cucumber-java8 and cucumber-junit.
import cucumber.api.CucumberOptions;
import cucumber.api.DataTable;
import cucumber.api.PendingException;
import cucumber.api.java8.En;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        monochrome = true,
        glue = {"com.mycom.core.agg.RunCukesTest"})
public class RunCukesTest implements En {
    public RunCukesTest() {
        Given("^I have PriceLevels$", (DataTable arg1) -> {
        });
        And("^I have a TradeRequest$", (DataTable arg1) -> {
        });
        Then("^I should get these LegRequests$", (DataTable arg1) -> {
        });
    }
}
Running the test prints:
Running com.mycom.core.agg.RunCukesTest
1 Scenarios (1 undefined)
3 Steps (3 undefined)
0m0.000s
You can implement missing steps with the snippets below:
Given("^I have PriceLevels$", (DataTable arg1) -> {
.. rest deleted ...
Running the feature file from IntelliJ gives much the same error.
While we couldn't figure out what the cause was, creating a new Maven project with a minimum of dependencies "fixed" the problem. We didn't need to change the code.

Adobe Flex - mxmlc.exe and compiling with dependent libraries

I have Flex SDK 4.6. I would like to work with the library https://github.com/y8/websocket-as. This library needs the as3corelib library (https://github.com/mikechambers/as3corelib). The problem is that I'm not able to use/include these libraries in my HelloWorld project.
package {
    import flash.display.Sprite;
    import flash.text.TextField;
    import y8.net.WebSocket;

    public class HelloWorld extends Sprite {
        public function HelloWorld() {
            var display_txt:TextField = new TextField();
            display_txt.text = "Hello World!";
            addChild(display_txt);
        }
    }
}
When I compile it with mxmlc.exe -o HelloWorld.swf tuts\HelloWorld.as, I get the error
HelloWorld.as(5): col: 15 Error: Definition y8.net:WebSocket could not be found.
import y8.net.WebSocket;
Please, how can I put it together (directories), and how can I compile these libraries and HelloWorld with mxmlc.exe? Thanks.
You can do it like this:
mxmlc -library-path+=/myLibraries/MyRotateEffect.swc;/myLibraries/MyButtonSwc.swc c:/myFiles/app.mxml
ref: http://livedocs.adobe.com/flex/3/html/help.html?content=apparch_08.html
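That assumes the dependencies are already compiled into .swc files. If you only have the library sources checked out from GitHub, one alternative (the C:\libs\... paths below are placeholders for wherever you cloned the two repositories) is to add their source roots with -source-path so mxmlc compiles the needed classes in directly:
mxmlc -source-path+=C:\libs\websocket-as\src -source-path+=C:\libs\as3corelib\src -o HelloWorld.swf tuts\HelloWorld.as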