I have a machine on which I store some data as JSON or CSV files, and Apache Drill is running on the same machine.
I can access Apache Drill through the web console from a different machine, and I can also execute SQL queries against the files stored on the machine where Drill is running.
Now I want to write a program that can execute SQL queries the same way I do in the Apache Drill web console.
Does anyone know of a JAR for Apache Drill like the webhdfs-java-client-0.0.2.jar used for Hadoop HDFS? I am looking for such a Java client JAR.
You can achieve this using the drill-jdbc driver; check Drill's JDBC documentation.
If you are using Maven, add this dependency:
<dependency>
    <groupId>org.apache.drill.exec</groupId>
    <artifactId>drill-jdbc</artifactId>
    <version>1.4.0</version>
</dependency>
Sample code (assuming Drill is running on xx.xx.xx.xx):
Class.forName("org.apache.drill.jdbc.Driver");
Connection connection = DriverManager.getConnection("jdbc:drill:drillbit=xx.xx.xx.xx");
Statement st = connection.createStatement();
ResultSet rs = st.executeQuery("<your SQL query>");
while (rs.next()) {
    System.out.println(rs.getString(1));
}
rs.close();
st.close();
connection.close();
If you want the driver to discover the drillbits through ZooKeeper instead of connecting to one directly, use:
Connection connection = DriverManager.getConnection("jdbc:drill:zk=xx.xx.xx.xx");
Note: here xx.xx.xx.xx can be an IP address or a host name.
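To keep the two connection styles straight, here is a minimal sketch of how the URLs might be assembled. The helper class and host values are illustrative only, not part of Drill's API:

```java
// Minimal sketch: the two Drill JDBC URL styles described above.
// Hosts here are placeholders, not real machines.
public class DrillUrls {
    // Direct connection to a single drillbit
    static String drillbit(String host) {
        return "jdbc:drill:drillbit=" + host;
    }
    // Connection via ZooKeeper, which locates the drillbits for you
    static String zookeeper(String zkHosts) {
        return "jdbc:drill:zk=" + zkHosts;
    }
    public static void main(String[] args) {
        System.out.println(drillbit("10.0.0.5"));
        System.out.println(zookeeper("10.0.0.5:2181"));
    }
}
```

Either string can then be passed to DriverManager.getConnection() as in the sample above.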
Edit: Check my github project for more details.
I am using Ignite 2.8.1 and trying to see a table definition from the Ignite web console using a command like desc table_name, but it does not work. After a detailed search, I did not find any command or approach that would let me download the table-creation script or see the table definition.
Please let me know if there is any way to download the table script or see the table definition in Ignite (preferably from the Ignite web console).
I'm not sure if Web Console can do the trick; that product is not supported anymore. But you can achieve it using any DB manager with JDBC support.
For example, here is a screenshot from DBeaver, which has an embedded template for the Apache Ignite JDBC driver. Check this example on how to set it up (it could be outdated, though).
I created a simple project using ATG 10.2. I want to know how to deploy it in WebLogic. Please provide a detailed procedure, with screenshots if possible.
Providing a 'detailed' procedure is beyond the scope of what Stack Overflow is trying to provide. That said, if you have an understanding of the WebLogic Administration Console, you should be able to follow these steps to set up your initial deployment:
Create a Server
1.1 Specify a server name (e.g. commerce) and the port number this server will run on (e.g. 8180). Select it as a 'Stand-alone server'.
1.2 Once created, go to Configuration > Server Start for the newly created server and modify the 'Arguments' block to include the following settings (assuming you are running Windows; for Unix update the paths accordingly):
-Datg.dynamo.data-dir=c:\ATG-Data -Datg.dynamo.server.name=commerce -d64 -XX:ParallelGCThreads=8 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Xms1152m -Xmx2048m -XX:NewSize=128m -XX:MaxNewSize=256m -XX:PermSize=128m -XX:MaxPermSize=256m
1.3 Save your Server
Create DataSources
2.1 In the Console click on 'Services > Data Sources'
2.2 Create 'New' datasources for each of your connections. As a minimum you will need connections for ATGSwitchingDS_A, ATGSwitchingDS_B (assuming you are using switching datasources) and ATGProductionDS. These names should match the JNDI names in your property files. Remember to specify the 'commerce' server as the target for each datasource.
Create Deployment
3.1 Assuming you've already built your EAR (e.g. ATGProduction.ear) and it is available in c:\deployments, you need to create a deployment in WebLogic: create the deployment in the console and specify the target as 'commerce'. Once done, you also need to 'start serving requests' on the deployment.
Start Server
You should now be able to see your server running on port 8180 with the log files being written to c:\ATG-Data\servers\commerce\logs.
If after this things aren't running, post specific questions about your issues and someone here might be able to help you.
I am trying to connect to PL/SQL 8.0.4.1514 through JMeter.
In the JDBC Connection Configuration I have provided the database URL "jdbc:oracle:thin:@//01HW552780:6129))/tnsfile" and the JDBC driver class "com.plsql.jdbc.Driver",
but I am getting the error "No suitable driver found for jdbc:oracle:thin:@//01HW552780:6129))/tnsfile".
Could someone correct me here regarding the driver class?
I believe that you need the oracle.jdbc.OracleDriver class instead.
I believe that you need to remove // from your JDBC URL.
I'm not too sure regarding tnsfile (unless it is your real Oracle database name), as Oracle JDBC URLs take these forms:
jdbc:oracle:thin:@host:port/databaseName
jdbc:oracle:thin:@host:port:serviceName
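As a cross-check, here is a tiny helper that assembles the two thin-driver URL shapes most commonly seen in Oracle's documentation: the SID form using colons and the service-name form using // and a slash. The hosts and names below are placeholders:

```java
// Minimal sketch of the two common Oracle thin-driver URL shapes.
// Hosts, ports, and names are placeholders, not real servers.
public class OracleUrls {
    // SID form: jdbc:oracle:thin:@host:port:SID
    static String sidUrl(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }
    // Service-name form: jdbc:oracle:thin:@//host:port/serviceName
    static String serviceUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }
    public static void main(String[] args) {
        System.out.println(sidUrl("dbhost", 1521, "ORCL"));
        System.out.println(serviceUrl("dbhost", 1521, "orclpdb1"));
    }
}
```

Note that the // prefix belongs with the service-name form; mixing the two shapes (as in the URL from the question) is a common source of "No suitable driver found" errors.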
The relevant driver can be downloaded from the Oracle website, or alternatively (better) taken from the $ORACLE_HOME/jdbc/ folder on the machine where Oracle lives.
See The Real Secret to Building a Database Test Plan With JMeter guide for more configuration and usage details for the JDBC Sampler.
I faced the same problem.
I used oracle.jdbc.OracleDriver as the Driver Class.
There is no need to remove //; it can be present.
The database URL should be jdbc:oracle:thin:@//MachineName:Port/Schema.
Download, or better, copy the JDBC jars from your DB installation and paste them under the JMeter/lib path.
For example, assuming your DB is Oracle 12c, it requires ojdbc6.jar and ojdbc7.jar copied into the JMeter/lib folder.
After this, the problem vanished for me.
I am porting a suite of related applications from WebLogic to JBoss EAP v6.2.
I have set up a data source connection using the JBoss command line interface and hooked it to an oracle database. This database has a name of "mydatasource" and a JNDI name of
"java:jboss/datasources/mydatasource" as per JBoss standards. I can test and validate this database connection.
However, when I try to port the code and run it, the connection doesn't work. The code that worked in WebLogic was simply:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup(dataSource);
with a value in dataSource of "mydatasource".
This worked in WebLogic, but in JBoss it throws a NameNotFoundException:
javax.naming.NameNotFoundException: mydatasource -- service jboss.naming.context.java.mydatasource
Clearly there is a difference in how the InitialContext is set up between the two servers.
But this port involves a large number of small applications, all of which connect to the datasource via code like that above. I don't want to rewrite all that code.
Is there a way through configuration (InitialContextFactory, maybe) to define the initial context such that code like that above will work without rewriting, or perhaps is there another way of naming the datasource that JBoss will accept that would allow code like that above to work without rewriting?
Or must we bite the bullet and accept that this code needs a rewrite?
Update: Yes, I know that simply passing "java:jboss/datasources/mydatasource" to the InitialContext lookup solves the problem, but I am looking for a solution via configuration, rather than via coding if there is such a solution.
The way to do this correctly through configuration is to use
java:comp/env/jdbc/myDataSource
then use a resource-ref in web.xml to map it to the declared datasource, and use weblogic.xml or jboss-web.xml to map it to the real one.
In the WebLogic admin console, when you define the datasource it can be jdbc/realDataSource.
See: JNDI path Tomcat vs. JBoss
For WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs103/jdbc_admin/packagedjdbc.html
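As a sketch, the mapping described above might look like this; the logical name jdbc/myDataSource is illustrative, and the JNDI name matches the one from the question:

```xml
<!-- web.xml: declare a logical resource reference -->
<resource-ref>
  <res-ref-name>jdbc/myDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

<!-- jboss-web.xml: map the logical name to the real JNDI binding -->
<resource-ref>
  <res-ref-name>jdbc/myDataSource</res-ref-name>
  <jndi-name>java:jboss/datasources/mydatasource</jndi-name>
</resource-ref>
```

The application code then looks up "java:comp/env/jdbc/myDataSource" on every server, and only the per-server descriptor changes.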
I want to know how to configure Liberty Profile 8.5.5 (the Dev version, not the WAS ND version) to be load balanced by Apache HTTP Server.
I have tried to search but haven't been able to come across any useful links. Any help will be much appreciated.
Thanks,
Vishalendu
Currently, you'll have to generate a plugin-cfg.xml from each Liberty server (the license has info about how many servers you can aggregate in this way for load balancing and failover) and merge the results so they appear as one cluster to the WAS Plugin.
Other editions provide a merge tool, if you have access to them.
The WAS Plugin installation has an XSD file for plugin-cfg.xml.
1) Note the http and https transports in both plugin configurations.
2) Make a copy of one of the XMLs to edit.
3) Find the <ServerCluster> element:
<Config ...>
  <ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="cluster1" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60" ServerIOTimeoutRetry="-1">
    <!-- copy the generated Server stanza from your other XML here -->
    <Server ...>
    <PrimaryServers>
      <!-- add a 2nd primary server, from your other XML -->
      <Server Name="node1_serv1"/>
      ...
    </PrimaryServers>
  </ServerCluster>
4) Copy the Server stanzas from the other file inside the ServerCluster.
5) Add the server names to the PrimaryServers list.
If your servers have the same apps on them, you're done. Otherwise you have to merge the other elements (Route, URIGroup, etc.), but usually they'll be the same.
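Put together, a merged cluster stanza might look roughly like this. The server names, hostnames, and ports are illustrative stand-ins for values taken from the two generated plugin-cfg.xml files:

```xml
<ServerCluster CloneSeparatorChange="false" GetDWLMTable="false"
               IgnoreAffinityRequests="true" LoadBalance="Round Robin"
               Name="cluster1" PostBufferSize="64" PostSizeLimit="-1"
               RemoveSpecialHeaders="true" RetryInterval="60"
               ServerIOTimeoutRetry="-1">
  <!-- Server stanza from the first generated plugin-cfg.xml -->
  <Server Name="node1_serv1">
    <Transport Hostname="liberty-host1" Port="9080" Protocol="http"/>
  </Server>
  <!-- Server stanza copied in from the second generated plugin-cfg.xml -->
  <Server Name="node2_serv2">
    <Transport Hostname="liberty-host2" Port="9080" Protocol="http"/>
  </Server>
  <PrimaryServers>
    <Server Name="node1_serv1"/>
    <Server Name="node2_serv2"/>
  </PrimaryServers>
</ServerCluster>
```

With both servers listed under one ServerCluster and in PrimaryServers, the WAS Plugin will round-robin requests across them.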