Cannot find symbol: createStatement() - sql

Why am I getting the error "cannot find symbol: createStatement()"? My goal is to create a t1 Tab object in a TestClass and call t1.Cr() with a String parameter.
I am not forgetting the import java.sql.*;
public void Cr() throws Exception {
    try {
        ConexaoPlus con = new ConexaoPlus();
        con.conect();
        Statement st = con.createStatement();
        st.execute("CREATE TABLE " + getName() + "();");
        st.execute("DROP TABLE " + getName() + "();");
        st.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Thanks in advance!

A "cannot find symbol" error can occur for many reasons; you can read this post.
From the code you have provided, it looks like your class ConexaoPlus does not define a createStatement() method.
The java.sql package already defines createStatement() on java.sql.Connection, so you can call it on a Connection object. For more help you will need to share the ConexaoPlus class.
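As a rough sketch (assuming ConexaoPlus wraps a java.sql.Connection internally; the getConnection() accessor below is hypothetical and something you would add yourself), you could expose the underlying Connection and create the Statement from it:
import java.sql.Connection;
import java.sql.Statement;

public void Cr() throws Exception {
    ConexaoPlus con = new ConexaoPlus();
    con.conect();
    // getConnection() is a hypothetical accessor added to ConexaoPlus so the
    // wrapped java.sql.Connection is reachable from outside the class
    Connection connection = con.getConnection();
    try (Statement st = connection.createStatement()) {
        st.execute("CREATE TABLE " + getName() + "();");
        st.execute("DROP TABLE " + getName() + "();");
    }
}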

This worked for me.
import java.sql.Statement;

Related

Why is the close() method highlighted as redundant in IntelliJ when using CSVPrinter from apache.commons?

I am using the CSVPrinter class from apache.commons.csv to print some lines to a CSV file. As far as I know, we need to call close() on a FileWriter after the writing is done, and based on that assumption I tried to call CSVPrinter.close(). However, IntelliJ IDEA warns me that this call is redundant. The examples at https://www.callicoder.com/java-read-write-csv-file-apache-commons-csv/ do not include this call either. I want to know why that method is redundant, and whether everything will be all right if I just use .flush().
Here is an example copied from website mentioned above.
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
public class CSVWriter {
    private static final String SAMPLE_CSV_FILE = "./sample.csv";

    public static void main(String[] args) throws IOException {
        try (
            BufferedWriter writer = Files.newBufferedWriter(Paths.get(SAMPLE_CSV_FILE));
            CSVPrinter csvPrinter = new CSVPrinter(writer, CSVFormat.DEFAULT
                    .withHeader("ID", "Name", "Designation", "Company"));
        ) {
            csvPrinter.printRecord("1", "Sundar Pichai ♥", "CEO", "Google");
            csvPrinter.printRecord("2", "Satya Nadella", "CEO", "Microsoft");
            csvPrinter.printRecord("3", "Tim cook", "CEO", "Apple");
            csvPrinter.printRecord(Arrays.asList("4", "Mark Zuckerberg", "CEO", "Facebook"));
            csvPrinter.flush();
            // I added the following line
            csvPrinter.close();
        }
    }
}
As @user207421 explained in the comments:
First, the try-with-resources statement automatically closes its resources at the end of the block.
Second, flush() is redundant before close().
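In other words, a minimal sketch of the same program (same writer, format, and header as above) simply drops both calls and lets the try-with-resources block do the work:
try (
    BufferedWriter writer = Files.newBufferedWriter(Paths.get(SAMPLE_CSV_FILE));
    CSVPrinter csvPrinter = new CSVPrinter(writer, CSVFormat.DEFAULT
            .withHeader("ID", "Name", "Designation", "Company"));
) {
    csvPrinter.printRecord("1", "Sundar Pichai", "CEO", "Google");
    // no flush() or close() needed: leaving the try-with-resources block
    // closes csvPrinter and the underlying writer, which flushes any buffered output
}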

How to design custom action method in Selenium Grid architecture

I have a very basic question about how to design action methods for executing tests on Selenium Grid.
In the current implementation of the Selenium framework in my project, we have created an action class that exposes all Selenium WebElement actions as static methods.
For sequential script execution there is no issue, but for parallel script execution I have heard that we should not make these methods static, since only one copy of a static method exists. So how should we write custom action methods that we can use in other classes?
Could you please advise on this?
Current implementation:
public class ActionUtil {
    public static void selectByVisibleText(WebElement element, String visibleText, String elementName) {
        try {
            Select oSelect = new Select(element);
            oSelect.selectByVisibleText(visibleText);
            log.info(visibleText + " text is selected on " + elementName);
        } catch (Exception e) {
            log.error("selectByVisibleText action failed. Exception occurred: " + e.toString());
        }
    }
}
Use of 'selectByVisibleText' static method in other page classes:
public void selectMemorableQuestion1(String question) {
    ActionUtil.selectByVisibleText(memorableQuestion1, question, "memorableQuestion1");
}
If you are trying to run tests in parallel and use methods in that way, avoid static methods.
You need to add the synchronized modifier if you are working with objects that require concurrent access.
You can run into concurrency issues (and end up in a non-thread-safe situation) as soon as the code in your static method modifies static variables.
So, bottom line: use the synchronized modifier and avoid the static modifier because of these thread-safety issues.
public class ActionUtil {
    public synchronized void selectByVisibleText(WebElement element, String visibleText, String elementName) {
        try {
            Select oSelect = new Select(element);
            oSelect.selectByVisibleText(visibleText);
            log.info(visibleText + " text is selected on " + elementName);
        } catch (Exception e) {
            log.error("selectByVisibleText action failed. Exception occurred: " + e.toString());
        }
    }
}
so the call would be made on an instance rather than on the class:
ActionUtil actionUtil = new ActionUtil();
actionUtil.selectByVisibleText(...);
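For the page-class call shown in the question, a rough sketch of the instance-based usage would look like the following (the page class name MemorableQuestionsPage and the way memorableQuestion1 is located are assumptions for illustration, not part of the original code):
public class MemorableQuestionsPage {
    // each test thread builds its own page object, so each thread also gets
    // its own ActionUtil instance and no state is shared between threads
    private final ActionUtil actionUtil = new ActionUtil();
    private WebElement memorableQuestion1; // located however your framework does it

    public void selectMemorableQuestion1(String question) {
        actionUtil.selectByVisibleText(memorableQuestion1, question, "memorableQuestion1");
    }
}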

Bitwise operations in Apache Pig?

I'm looking at the reference manual and can't find any documentation of bitwise operations/functions.
Is there any way to use, for example, a bitwise AND operation (equivalent to "A & B" in Hive) in a Pig script?
You can provide a custom UDF for this; see for example https://pig.apache.org/docs/r0.7.0/udf.html
In your Pig script you would register the jar and then call the UDF in an expression (for example, myudfs.BitwiseAND(a, b) inside a FOREACH ... GENERATE):
REGISTER myudfs.jar;
An example BitwiseAND UDF:
package myudfs;
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.WrappedIOException;
public class BitwiseAND extends EvalFunc<Integer>
{
    public Integer exec(Tuple input) throws IOException {
        // check input tuple:
        if (input == null || input.size() < 2)
            return null;
        try {
            return (Integer) input.get(0) & (Integer) input.get(1);
        } catch (Exception e) {
            throw WrappedIOException.wrap("Caught exception processing input row ", e);
        }
    }
}
NOTE: This is not tested, it's just copied from the pig udf page.

using pdfbox with src code and not jar

I am trying to edit some PDFBox classes that have private data members, so I copied the org folder and pasted it into my src folder. Now, when I create an object of the PDFTextStripper class, I get a "java.lang.ExceptionInInitializerError".
This is the part of the code inside the PDFTextStripper class where the exception happens:
static
{
    String path = "org/apache/pdfbox/resources/text/BidiMirroring.txt";
    InputStream input = PDFTextStripper.class.getClassLoader().getResourceAsStream(path);
    try
    {
        parseBidiFile(input);
    }
    catch (IOException e)
    {
        LOG.warn("Could not parse BidiMirroring.txt, mirroring char map will be empty: "
            + e.getMessage());
    }
    finally
    {
        try
        {
            input.close(); // error is in this line
        }
        catch (IOException e)
        {
            LOG.error("Could not close BidiMirroring.txt ", e);
        }
    }
};
So the compiler points to this line as the error.
Why is this exception happening? When I use the jar file I don't get any exception, so why am I getting one now? How do I solve this?
If anyone faces a similar problem: just copy the resources folder too. It worked for me.
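A likely explanation (a sketch of the mechanism, not something confirmed in the question) is that getResourceAsStream() returns null when BidiMirroring.txt is not on the classpath, so input.close() in the finally block throws a NullPointerException, and any exception thrown from a static initializer surfaces as java.lang.ExceptionInInitializerError. Copying the resources folder puts the file back on the classpath. A defensive version of that block would check for null first:
InputStream input = PDFTextStripper.class.getClassLoader().getResourceAsStream(path);
if (input == null)
{
    // resource not found on the classpath; without this check the later
    // input.close() throws a NullPointerException inside the static block,
    // which the JVM reports as java.lang.ExceptionInInitializerError
    LOG.warn("BidiMirroring.txt not found, mirroring char map will be empty");
}
else
{
    try
    {
        parseBidiFile(input);
    }
    catch (IOException e)
    {
        LOG.warn("Could not parse BidiMirroring.txt: " + e.getMessage());
    }
    finally
    {
        try
        {
            input.close();
        }
        catch (IOException e)
        {
            LOG.error("Could not close BidiMirroring.txt ", e);
        }
    }
}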

Hive Udf exception

I have a simple hive UDF:
package com.matthewrathbone.example;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;
@Description(
    name = "SimpleUDFExample",
    value = "returns 'hello x', where x is whatever you give it (STRING)",
    extended = "SELECT simpleudfexample('world') from foo limit 1;"
)
class SimpleUDFExample extends UDF {
    public Text evaluate(Text input) {
        if (input == null) return null;
        return new Text("Hello " + input.toString());
    }
}
When I execute it using a select query:
select helloUdf(method) from tests3atable limit 10;
method is the name of a column in the tests3atable table.
I get the exception below:
FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments 'method': Unable to instantiate UDF implementation class com.matthewrathbone.example.SimpleUDFExample: java.lang.IllegalAccessException: Class org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge can not access a member of class com.matthewrathbone.example.SimpleUDFExample with modifiers ""
Declare the class as public; it should then work.
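That is, assuming the rest of the class stays exactly as posted, the only change is to the class declaration:
// the UDF class must be public so GenericUDFBridge can access and instantiate it
// (the IllegalAccessException in the error refers to the default, package-private modifier)
public class SimpleUDFExample extends UDF {
    public Text evaluate(Text input) {
        if (input == null) return null;
        return new Text("Hello " + input.toString());
    }
}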
I also had the same issue. It turned out Eclipse was not refreshing the program I had modified, so please make sure the modifications you make to your code actually get reflected in the jar.