How to connect to Hive via JDBC using Java on Ubuntu? - hive

Ubuntu 16.04.1 LTS
Hadoop 3.3.1
Hive 2.3.9
I have a Java file:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveCreateDb {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        // Register driver and create driver instance
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
            System.exit(1);
        }
        // get connection
        Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        stmt.executeQuery("CREATE DATABASE userdb");
        System.out.println("Database userdb created successfully.");
        con.close();
    }
}
I put this Java file in a folder on Ubuntu, and then ran
javac HiveCreateDb.java
HiveCreateDb.java:14: error: unreported exception ClassNotFoundException; must be caught or declared to be thrown
Class.forName(driverName);
^
1 error
I have downloaded hive-jdbc-3.1.2.jar; where shall I put this jar?

javac -classpath jars/hive-jdbc-2.3.9.jar source/HiveCreateDb.java
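The jar does not need to live in any particular folder; it just has to be on the classpath, both when compiling and when running. Two further things worth fixing, offered as a sketch since I cannot test against your setup: Hive 2.x serves JDBC through HiveServer2, whose URL scheme is jdbc:hive2:// rather than the old jdbc:hive://, and DDL statements should go through execute() rather than executeQuery(). Assuming HiveServer2 listens on localhost:10000 and you use the standalone driver jar (hive-jdbc-2.3.9-standalone.jar, which bundles the runtime dependencies):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveCreateDb {
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // HiveServer2 (Hive 2.x) uses the jdbc:hive2:// scheme
        Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        stmt.execute("CREATE DATABASE userdb"); // execute(), not executeQuery(), for DDL
        System.out.println("Database userdb created successfully.");
        con.close();
    }
}

Compile and run with the jar on the classpath both times:

javac -classpath jars/hive-jdbc-2.3.9-standalone.jar source/HiveCreateDb.java
java -classpath source:jars/hive-jdbc-2.3.9-standalone.jar HiveCreateDb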

Related

Unable to cleanup Infinispan DefaultCacheManager in state FAILED

I am getting this exception when trying to restart a CacheManager that failed to start.
Caused by: org.infinispan.jmx.JmxDomainConflictException: ISPN000034: There's already a JMX MBean instance type=CacheManager,name="DefaultCacheManager" already registered under 'org.infinispan' JMX domain. If you want to allow multiple instances configured with same JMX domain enable 'allowDuplicateDomains' attribute in 'globalJmxStatistics' config element
    at org.infinispan.jmx.JmxUtil.buildJmxDomain(JmxUtil.java:53)
I think it's a bug, but am I correct?
The version used is 9.0.0.Final.
EDIT
The error can be seen using this code snippet.
import org.infinispan.configuration.cache.*;
import org.infinispan.configuration.global.*;
import org.infinispan.manager.*;

class Main {
    public static void main(String[] args) {
        System.out.println("Starting");
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport()
              .clusterName("discover-service-poc")
              .initialClusterSize(3);
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.REPL_SYNC);
        DefaultCacheManager cacheManager = new DefaultCacheManager(global.build(), builder.build(), false);
        try {
            System.out.println("Starting cacheManger first time.");
            cacheManager.start();
        } catch (Exception e) {
            e.printStackTrace();
            cacheManager.stop();
        }
        try {
            System.out.println("Starting cacheManger second time.");
            System.out.println("startAllowed: " + cacheManager.getStatus().startAllowed());
            cacheManager.start();
            System.out.println("Nothing happening because in failed state");
            System.out.println("startAllowed: " + cacheManager.getStatus().startAllowed());
        } catch (Exception e) {
            e.printStackTrace();
            cacheManager.stop();
        }
        cacheManager = new DefaultCacheManager(global.build(), builder.build(), false);
        cacheManager.start();
    }
}
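As the exception message itself hints, the workaround for the name clash is to allow duplicate JMX domains, since the failed manager's MBeans are still registered when the second instance starts. A minimal sketch of that global configuration, assuming the Infinispan 9 API:

GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
// Permit a second CacheManager to register under the 'org.infinispan' JMX
// domain instead of failing with ISPN000034 / JmxDomainConflictException
global.globalJmxStatistics().allowDuplicateDomains(true);
global.transport()
      .clusterName("discover-service-poc")
      .initialClusterSize(3);

This only removes the JMX name conflict; whether a manager in state FAILED ought to be restartable at all (instead of requiring a fresh instance, as the last two lines of the snippet do) is the part that may genuinely be a bug.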

Passing java.security.auth.login.config to MobileFirst Platform Server

How can we pass the following parameter to the MobileFirst Development Server?
-Djava.security.auth.login.config=login.config
I have tried adding it to the jvm.options file, and it seems it is passed as a parameter but has no effect.
Following is the code I am trying to execute, and a sample of the login.config file.
Java code to execute in the login module or adapter:
LoginContext context = new LoginContext("SampleClient", new CallbackHandler() {
    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        NameCallback callBack = (NameCallback) callbacks[0];
        callBack.setName("EXAMPLE.COM");
    }
});
login.config
SampleClient {
    com.sun.security.auth.module.Krb5LoginModule required
    default_realm=EXAMPLE.COM;
};
Adding the following code before login worked.
try {
    Configuration config = Configuration.getConfiguration();
    config.getAppConfigurationEntry("SampleClient");
    URIParameter uriParameter = new URIParameter(new java.net.URI("file:///path_to_your_file/login.conf"));
    Configuration instance = Configuration.getInstance("JavaLoginConfig", uriParameter);
    Configuration.setConfiguration(instance);
} catch (URISyntaxException e) {
    e.printStackTrace();
} catch (NoSuchAlgorithmException e) {
    e.printStackTrace();
}
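For completeness, a self-contained sketch of the whole flow with the required imports, combining the programmatic configuration above with the LoginContext from the question (the login.conf path is a placeholder, as in the original):

import java.net.URI;
import java.security.URIParameter;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.login.Configuration;
import javax.security.auth.login.LoginContext;

public class JaasSample {
    public static void main(String[] args) throws Exception {
        // Load login.config programmatically instead of relying on the
        // -Djava.security.auth.login.config JVM argument being honored
        URIParameter uriParameter = new URIParameter(new URI("file:///path_to_your_file/login.conf"));
        Configuration.setConfiguration(Configuration.getInstance("JavaLoginConfig", uriParameter));

        LoginContext context = new LoginContext("SampleClient", new CallbackHandler() {
            @Override
            public void handle(Callback[] callbacks) {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("EXAMPLE.COM");
                    }
                }
            }
        });
        context.login(); // runs Krb5LoginModule against the SampleClient entry
    }
}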

UserPrivilegedException while creating a directory in HDFS as a specific user through Java

I am using CDH 4.7. I am trying to create a folder in HDFS under /user/cloudera, but a UserPrivilegedException is thrown. Below is my code (copied):
public static void main(String args[]) {
    try {
        UserGroupInformation ugi =
                UserGroupInformation.createProxyUser("cloudera", UserGroupInformation.getLoginUser());
        System.out.println(ugi.getUserName());
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                System.out.println("aaa");
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://localhost.localdomain:8020/user/cloudera");
                conf.set("hadoop.job.ugi", "cloudera");
                FileSystem fs = FileSystem.get(conf);
                fs.createNewFile(new Path("/user/cloudera/test"));
                Path path = new Path("/user/cloudera/Hbasesyntax.txt");
                FileStatus[] status = fs.listStatus(path);
                for (int i = 0; i < status.length; i++) {
                    System.out.println(status[i].getPath());
                }
                return null;
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Then I even tried accessing the file (Hbasesyntax.txt) which I had -put through the terminal, but I am not able to get the file information from that either. Am I missing anything?
Below is the exception thrown:
Feb 23, 2015 1:18:27 AM org.apache.hadoop.security.UserGroupInformation doAs
SEVERE: PriviledgedActionException as:cloudera via cloudera cause:java.io.IOException: Mkdirs failed to create /user/cloudera
java.io.IOException: Mkdirs failed to create /user/cloudera
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:378)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:364)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:507)
    at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:647)
    at HDFSFileOperations.GetIntoFS$1.run(GetIntoFS.java:32)
    at HDFSFileOperations.GetIntoFS$1.run(GetIntoFS.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at HDFSFileOperations.GetIntoFS.main(GetIntoFS.java:21)
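One clue in the trace: ChecksumFileSystem means the client was writing to the local filesystem, not HDFS, so the fs.defaultFS setting apparently did not take effect (its value should also be just the NameNode URI, with no /user/cloudera path after the port). A hedged sketch of one common fix inside run(), passing the URI to FileSystem.get explicitly (needs java.net.URI; assumes the NameNode really is at localhost.localdomain:8020):

Configuration conf = new Configuration();
// Hand the NameNode URI to FileSystem.get directly so the code cannot
// silently fall back to the local filesystem; no path component here
FileSystem fs = FileSystem.get(URI.create("hdfs://localhost.localdomain:8020"), conf);
fs.createNewFile(new Path("/user/cloudera/test"));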

How to set up a connection from Eclipse to SQL Server?

I am trying to connect to SQL Server from Eclipse and I get the following error. I should mention that I verified that the SQL Server Browser is running on the host, and I have no firewall active.
com.microsoft.sqlserver.jdbc.SQLServerException: The connection to the host
LAURA-PC, named instance sqlexpress failed. Error: "java.net.SocketTimeoutException:
Receive timed out". Verify the server and instance names and check that no firewall
is blocking UDP traffic to port 1434. For SQL Server 2005 or later, verify that
the SQL Server Browser Service is running on the host.
This is the code I've written:
import java.sql.*;

public class ConnectSQLServer {
    public void connect(String url) {
        try {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver").newInstance();
            Connection connection = DriverManager.getConnection(url);
            System.out.println("Connected");
            Statement statement = connection.createStatement();
            String query = "select * from Vehicle where Mileage < 50000";
            ResultSet rs = statement.executeQuery(query);
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        ConnectSQLServer connServer = new ConnectSQLServer();
        String url = "jdbc:sqlserver://LAURA-PC\\SQLEXPRESS;databaseName=Register;integratedSecurity=true";
        connServer.connect(url);
    }
}
First things first in DB programming: test each step. Before executing any query or even writing any other code, check that you can connect to the DB at all. I assume that you are connecting to a local DB. For steps on building your connection URL, see http://technet.microsoft.com/en-us/library/ms378428.aspx
Try changing your URL to jdbc:sqlserver://localhost;user=Mine;password=Secret;databaseName=MyDB. I tried this code and it worked:
import java.sql.*;

public class ConnectSQLServer {
    public void connect(String url) {
        try {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver").newInstance();
            Connection connection = DriverManager.getConnection(url);
            System.out.println("Connected");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        ConnectSQLServer connServer = new ConnectSQLServer();
        String url = "jdbc:sqlserver://localhost;user=Mine;password=Secret;databaseName=AdventureWorks";
        connServer.connect(url);
    }
}
Just enable/start the SQL Server Browser service and copy the sqljdbc_auth.dll file to Windows->System32, extracting it from the Microsoft JDBC Driver 9.2 for SQL Server package.
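If you want to keep the named instance instead, note that the SocketTimeoutException comes from the UDP port 1434 lookup that resolves instance names via the SQL Server Browser; putting the instance's TCP port directly in the URL bypasses that lookup entirely. A hedged example, assuming you have pinned SQLEXPRESS to TCP port 1433 in SQL Server Configuration Manager:

String url = "jdbc:sqlserver://LAURA-PC:1433;databaseName=Register;integratedSecurity=true";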

Hadoop RPC server doesn't stop

I was trying to create a simple parent/child process pair with IPC between them using Hadoop IPC. It turns out that the program executes and prints the results, but it doesn't exit. Here is the code for it:
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.ipc.VersionedProtocol;

interface Protocol extends VersionedProtocol {
    public static final long versionID = 1L;
    IntWritable getInput();
}

public final class JavaProcess implements Protocol {
    Server server;

    public JavaProcess() {
        String rpcAddr = "localhost";
        int rpcPort = 8989;
        Configuration conf = new Configuration();
        try {
            server = RPC.getServer(this, rpcAddr, rpcPort, conf);
            server.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public int exec(Class klass) throws IOException, InterruptedException {
        String javaHome = System.getProperty("java.home");
        String javaBin = javaHome + File.separator + "bin" + File.separator + "java";
        String classpath = System.getProperty("java.class.path");
        String className = klass.getCanonicalName();
        ProcessBuilder builder = new ProcessBuilder(javaBin, "-cp", classpath, className);
        Process process = builder.start();
        int exit_code = process.waitFor();
        server.stop();
        System.out.println("completed process");
        return exit_code;
    }

    public static void main(String... args) throws IOException, InterruptedException {
        int status = new JavaProcess().exec(JavaProcessChild.class);
        System.out.println(status);
    }

    @Override
    public IntWritable getInput() {
        return new IntWritable(10);
    }

    @Override
    public long getProtocolVersion(String paramString, long paramLong) throws IOException {
        return Protocol.versionID;
    }
}
Here is the child process class. However, I have realized that RPC.getServer() on the server side is the culprit. Is it some known Hadoop bug, or am I missing something?
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.RPC;

public class JavaProcessChild {
    public static void main(String... args) {
        Protocol umbilical = null;
        try {
            Configuration defaultConf = new Configuration();
            InetSocketAddress addr = new InetSocketAddress("localhost", 8989);
            umbilical = (Protocol) RPC.waitForProxy(Protocol.class, Protocol.versionID, addr, defaultConf);
            IntWritable input = umbilical.getInput();
            JavaProcessChild my = new JavaProcessChild();
            if (input != null && input.equals(new IntWritable(10))) {
                Thread.sleep(10000);
            } else {
                Thread.sleep(1000);
            }
        } catch (Throwable e) {
            e.printStackTrace();
        } finally {
            if (umbilical != null) {
                RPC.stopProxy(umbilical);
            }
        }
    }
}
We sorted that out via mail, but I just want to give my two cents here for the public:
The thread that is not dying there (and thus not letting the main thread finish) is org.apache.hadoop.ipc.Server$Reader.
The reason is that the call to readSelector.select() is not interruptible. If you look closely in a debugger or thread dump, it is waiting on that call forever, even though the main thread has already been cleaned up.
Two possible fixes:
- make the reader thread a daemon (not so nice, because the selector won't be cleaned up properly, but the process will end)
- explicitly close the readSelector from outside when interrupting the thread pool (illustrated in the sketch below)
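A minimal sketch of the second fix in plain java.nio, outside Hadoop (class and variable names are illustrative): a reader loop of this shape survives interrupt() because select() simply returns with the interrupt status set and the loop spins again, while closing the selector makes the next select() throw and ends the thread.

import java.nio.channels.ClosedSelectorException;
import java.nio.channels.Selector;

public class ReaderShutdown {
    public static void main(String[] args) throws Exception {
        final Selector readSelector = Selector.open();
        Thread reader = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        readSelector.select(); // returns on interrupt; the loop just spins again
                    } catch (ClosedSelectorException e) {
                        System.out.println("selector closed, reader exiting");
                        return; // only close() actually gets us out of the loop
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        reader.start();
        Thread.sleep(500);
        reader.interrupt();   // does not terminate the loop
        readSelector.close(); // this does
        reader.join();
    }
}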
However, this is a bug in Hadoop, and I have had no time to look through the JIRAs. Maybe it is already fixed; in YARN the old IPC is replaced by protobuf and thrift anyway.
BTW, this is also platform dependent, since it hinges on the implementation of the selectors: I observed these zombies on Debian/Windows systems, but not on Red Hat/Solaris.
If anyone is interested in a patch for Hadoop 1.0, email me. I will sort out the JIRA bug in the near future and edit this answer with more information. (Maybe it has been fixed in the meantime anyway.)