Ignite: problems SELECTing using PHP PDO

Accessing the Ignite cluster using PHP PDO.
1) Created a table using a PHP PDO script.
The resulting cache is visible in Ignite Web Console.
SQL SELECTs/INSERTs can be issued from the Ignite Web Console.
SQL INSERTs can be issued using a standalone PHP PDO.
So the SQL table/cache appears to be fully functional, and yet:
2) SELECT from inside a PHP PDO script fails.
The PHP PDO script is essentially the same as the sample script given on the Ignite site.
<?php
try {
    $dbh = new PDO('odbc:ApacheIgniteDSN');
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $res = $dbh->query('SELECT id FROM Person');
    // no errors up to here
    //exit;
    if ($res === false)
        print_r("Exception");
    // the following results in errors
    foreach ($res as $row) {
        print_r($row);
    }
}
catch (PDOException $e) {
    print "Error: " . $e->getMessage() . "\n";
    exit;
}
When run from the command line, it generates:
Error: SQLSTATE[HYC00]: Optional feature not implemented: 0 Specified
attribute is not supported. (SQLFetchScroll[0] at
ext\pdo_odbc\odbc_stmt.c:543)
This is not very helpful, but on the Ignite node the following is logged:
[17:00:30,074][SEVERE][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor] Failed to process selector key [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192], super=AbstractNioClientWorker [idx=0, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-client-listener-0, igniteInstanceName=null, finished=false, hashCode=1314397987, interrupted=false, runner=grid-nio-worker-client-listener-0-#30]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/100.96.3.26:10805, rmtAddr=/100.96.3.1:6733, createTime=1523811628969, closeTime=0, bytesSent=69, bytesRcvd=75, bytesSent0=69, bytesRcvd0=75, sndSchedTime=1523811629031, lastSndTime=1523811629031, lastRcvTime=1523811629020, readsPaused=false, filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter, GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]], accepted=true]]]
java.io.IOException: Connection reset by peer
Please note that I created the SQL table with a CREATE TABLE command (from a PHP PDO script), rather than defining it explicitly in the cache configuration via queryEntities. However, the Web Console sees it correctly and you can query against it there, so one would assume that a SELECT from PDO would also work, but it doesn't.

This is a known issue caused by the MS cursor library, which PDO apparently uses in some cases. It has been fixed and the patch has been merged to master, as Ignite's Jira states, so you can either wait for the 2.5 release or use the master branch code instead.
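Until you are on a build that contains the fix, one workaround worth trying is to request a forward-only cursor at prepare time, so PDO does not go down the scrollable-cursor (SQLFetchScroll) path. This is only a sketch using the standard PDO cursor attribute; whether it avoids the crash depends on your PDO and Ignite ODBC driver versions:
<?php
$dbh = new PDO('odbc:ApacheIgniteDSN');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// Ask for a forward-only cursor so PDO does not fall back to the
// MS cursor library's scrollable (SQLFetchScroll) code path.
$stmt = $dbh->prepare('SELECT id FROM Person',
                      array(PDO::ATTR_CURSOR => PDO::CURSOR_FWDONLY));
$stmt->execute();
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    print_r($row);
}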


Detected failed migration to version 1.0.9 (Flyway)

I added a SQL file to my project and now I am receiving the following error:
nested exception is org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.0.9 (update)
Here is the SQL file I'm adding:
ALTER TABLE `episodes`
ADD COLUMN `rating` TINYINT(1) NULL DEFAULT NULL;
I ran the same query in MySQL Workbench and it works fine, so I think the error goes beyond the SQL file itself.
UPDATE: full stack trace below
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.0.9 (update)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1708)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:581)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:304)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1089)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:859)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1255)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1243)
at com.nbcuni.cds.Application.main(Application.java:12)
Caused by: org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.0.9 (update)
at org.flywaydb.core.Flyway.doValidate(Flyway.java:1286)
at org.flywaydb.core.Flyway.access$100(Flyway.java:71)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:1176)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:1168)
at org.flywaydb.core.Flyway.execute(Flyway.java:1655)
at org.flywaydb.core.Flyway.migrate(Flyway.java:1168)
at org.springframework.boot.autoconfigure.flyway.FlywayMigrationInitializer.afterPropertiesSet(FlywayMigrationInitializer.java:66)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1767)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1704)
... 18 common frames omitted
UPDATE: When I run mvn flyway:validate, I receive the following error:
org.flywaydb.core.api.FlywayException: Unable to connect to the database. Configure the url, user and password!
I'm not sure where to configure it; it's already set in my application.properties file. And without this version the Spring application works fine.
I guess the problem is a failed entry in the migration history saved in your database.
Try deleting the row that failed and try again. If the error doesn't vanish, the problem is somewhere in your statement.
I just had the same issue :)
The following resolved the validate problem for me:
mvn flyway:validate -Dflyway.configFile=myFlywayConfig.properties
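In case "Unable to connect to the database" persists, a minimal myFlywayConfig.properties for the Maven plugin might look like the sketch below; the flyway.* keys are the standard plugin properties, and the values are placeholders for your own connection details:
flyway.url=jdbc:mysql://localhost:3306/your_database_name
flyway.user=your_username
flyway.password=your_password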
The error happens if any record was inserted into 'flyway_schema_history' with a failed status, or if part of the failed migration has already run. You just need to remove the failed record from 'flyway_schema_history' and then run "flyway migrate" again.
Query the flyway_schema_history table:
select * from your_database_name.flyway_schema_history
You will get a list of the applied migrations with a success flag for each.
Remove the record with success = 0 and run "flyway migrate" again.
Note that this table might have a different name depending on the configuration.
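For reference, a sketch of that cleanup in SQL, assuming the default history table name and that 1.0.9 is the failed version:
-- inspect the history first
SELECT installed_rank, version, description, success
FROM flyway_schema_history;
-- remove the failed entry, then re-run the migration
DELETE FROM flyway_schema_history
WHERE version = '1.0.9' AND success = 0;
Newer Flyway versions can also do this for you with the flyway repair command, which removes failed entries from the history table.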

Zeppelin interpreter error even after giving correct details

I'm getting the error below, and I have given the MySQL settings in the interpreter:
com.mysql.jdbc.Driver
jdbc:mysql://:3306/
username and password
I restarted the interpreter and bound it, but I still get the error when using USE and SELECT commands.
java.lang.NullPointerException
at org.apache.zeppelin.postgresql.PostgreSqlInterpreter.executeSql(PostgreSqlInterpreter.java:201)
at org.apache.zeppelin.postgresql.PostgreSqlInterpreter.interpret(PostgreSqlInterpreter.java:288)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Adding the jar to the $ZEPPELIN_HOME/lib folder like user3921855 didn't work for me.
I got it working by adding the MySQL connector to the dependency section in the interpreter config (e.g. Artifact: mysql:mysql-connector-java:5.1.38).
IMPORTANT:
You need to restart the Zeppelin daemon for the interpreter to pick up the new jar. I don't know why, since it said it had restarted the sub-process. Might be a bug.
Stop / start reminder:
$ZEPPELIN_HOME/bin/zeppelin-daemon.sh stop
$ZEPPELIN_HOME/bin/zeppelin-daemon.sh start
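For newer Zeppelin versions that ship the generic JDBC interpreter, the equivalent interpreter properties look roughly like this (the default.* keys come from the JDBC interpreter; host and database names are placeholders):
default.driver    com.mysql.jdbc.Driver
default.url       jdbc:mysql://your-host:3306/your_database
default.user      your_username
default.password  your_password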

Apache Pig : Job in state DEFINE instead of RUNNING

I am using Apache Pig and trying to load a comma-separated file as a Pig table. It does not throw any error while loading the file,
but when I try to print that table using the "dump" command, it gives an error.
File I loaded
Error,fdgdf
Error,dfgdf
Error,dfgdf
Info,dfgdf
Info,dfgdf
Info,dfgdf
Info,dfgdf
Info,dfgdf
Info,dfgdf
Debug,dfgdf
Debug,dfgdf
Debug,dfgdf
Debug,dfgdf
Debug,dfgdf
Debug,dfgdf
Command to load
logFile1 = LOAD 'PigTestFile' using PigStorage();
Command to print table
dump logFile1
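As an aside, PigStorage() defaults to a tab delimiter, so for a comma-separated file like this one the delimiter would normally be passed explicitly. A sketch, with illustrative field names:
logFile1 = LOAD 'PigTestFile' USING PigStorage(',') AS (level:chararray, msg:chararray);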
Error I get
Failed Jobs:
JobId Alias Feature Message Outputs
job_1454617624671_0152 logFile1 MAP_ONLY Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://ip-172-31-53-48.ec2.internal:8020/user/e1681fe26eed362777aabca1682510/PigTestFile
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:279)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://ip-172-31-53-48.ec2.internal:8020/user/e1681fe26eed362777aabca1682510/PigTestFile
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:265)
... 18 more
hdfs://ip-172-31-53-48.ec2.internal:8020/tmp/temp1258481141/tmp-1928081547,
:
:
2016-02-07 06:31:20,100 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2016-02-07 06:31:20,107 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias logFile1. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
[EDIT]
When I closely read the log, I found that it was not able to find the file which was used to load the table; it expected the file to be in HDFS, whereas mine was on the local box.
I then moved the file into HDFS and ran the same commands, and it worked well.
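For reference, moving a local file into HDFS looks roughly like this (a sketch; substitute your own HDFS home directory for the placeholder):
hdfs dfs -mkdir -p /user/your-user
hdfs dfs -put PigTestFile /user/your-user/PigTestFile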
But then why did it not give an error while executing the LOAD command itself?
As explained by Murali in his answer (which I have accepted), Map/Reduce jobs for a script are triggered only when a STORE/DUMP is encountered.
Here is more explanation about it from the Apache Pig documentation:
In general, Pig processes Pig Latin statements as follows:
First, Pig validates the syntax and semantics of all statements.
Next, if Pig encounters a DUMP or STORE, Pig will execute the statements.
In this example Pig will validate, but not execute, the LOAD and FOREACH statements.
A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
B = FOREACH A GENERATE name;
In this example, Pig will validate and then execute the LOAD, FOREACH, and DUMP statements.
A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
B = FOREACH A GENERATE name;
DUMP B;
(John)
(Mary)
(Bill)
(Joe)
Map/Reduce jobs for a script will get triggered only when a STORE/DUMP is encountered.
In this case, the map phase for the LOAD command will start only when a STORE/DUMP is encountered in the script.
The default execution mode is MapReduce. If the file is on a local path, you have to use local mode for execution:
pig -x local {pigfilename.pig}
Refer : https://pig.apache.org/docs/r0.9.1/start.html#execution-modes
Extract from the above link:
Pig has two execution modes or exectypes:
Local Mode - To run Pig in local mode, you need access to a single machine; all files are installed and run using your local host and file system. Specify local mode using the -x flag (pig -x local).
Mapreduce Mode - To run Pig in mapreduce mode, you need access to a Hadoop cluster and HDFS installation. Mapreduce mode is the default mode; you can, but don't need to, specify it using the -x flag (pig OR pig -x mapreduce).

Error on starting WebLogic server in JDeveloper

Can anyone please help me resolve this error?
Below is the WebLogic server log.
I'm using JDK 7 and JDeveloper for ADF.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -jrockit
Process exited.
--Weblogic server log-------
*** Using HTTP port 7101 ***
*** Using SSL port 7102 ***
C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\DefaultDomain\bin\startWebLogic.cmd
[waiting for the server to complete its initialization...]
.
.
JAVA Memory arguments: -Xms256m -Xmx512m
.
WLS Start Mode=Development
.
CLASSPATH=C:\Oracle\Middleware\oracle_common\modules\oracle.jdbc_11.1.1\ojdbc6dms.jar;C:\Oracle\Middl
eware\patch_wls1035\profiles\default\sys_manifest_classpath\weblogic_patch.jar;C:\Oracle\Middleware\p
atch_jdev1112\profiles\default\sys_manifest_classpath\weblogic_patch.jar;C:\PROGRA~1\Java\jdk1.7.0_67
\lib\tools.jar;C:\Oracle\Middleware\wlserver_10.3\server\lib\weblogic_sp.jar;C:\Oracle\Middleware\wls
erver_10.3\server\lib\weblogic.jar;C:\Oracle\Middleware\modules\features\weblogic.server.modules_10.3
.5.0.jar;C:\Oracle\Middleware\wlserver_10.3\server\lib\webservices.jar;C:\Oracle\Middleware\modules\o
rg.apache.ant_1.7.1/lib/ant-all.jar;C:\Oracle\Middleware\modules\net.sf.antcontrib_1.1.0.0_1-
0b2/lib/ant-
contrib.jar;C:\Oracle\Middleware\oracle_common\modules\oracle.jrf_11.1.1\jrf.jar;C:\Oracle\Middleware
\wlserver_10.3\common\derby\lib\derbyclient.jar;C:\Oracle\Middleware\wlserver_10.3\server\lib\xqrl.jar
.
PATH=C:\Oracle\Middleware\patch_wls1035\profiles\default\native;C:\Oracle\Middleware\patch_jdev1112\p
rofiles\default\native;C:\Oracle\Middleware\wlserver_10.3\server\native\win\32;C:\Oracle\Middleware\w
lserver_10.3\server\bin;C:\Oracle\Middleware\modules\org.apache.ant_1.7.1\bin;C:\PROGRA~1\Java\jdk1.7
.0_67\jre\bin;C:\PROGRA~1\Java\jdk1.7.0_67\bin;E:\DB\app\oracle\product\11.2.0\server\bin;;c:\Program
Files\RSA SecurID Token Common;C:\Program Files\Intel\iCLS
Client\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell
\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program
Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program
Files\Java\jdk1.7.0_67\bin;C:\Oracle\Middleware\wlserver_10.3\server\native\win\32\oci920_8
.
***************************************************
* To start WebLogic Server, use a username and *
* password assigned to an admin-level user. For *
* server administration, use the WebLogic Server *
* console at http:\\hostname:port\console *
***************************************************
starting weblogic with Java version:
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -jrockit
Starting WLS with line:
C:\PROGRA~1\Java\jdk1.7.0_67\bin\java -jrockit -Xms256m -Xmx512m -Dweblogic.Name=DefaultServer
-Djava.security.policy=C:\Oracle\Middleware\wlserver_10.3\server\lib\weblogic.policy -
Djavax.net.ssl.trustStore=C:\Users\inatar\AppData\Local\Temp\trustStore8547229804589400188.jks -
Doracle.jdeveloper.adrs=true -Dweblogic.nodemanager.ServiceEnabled=true -Xverify:none -da -
Dplatform.home=C:\Oracle\Middleware\wlserver_10.3 -
Dwls.home=C:\Oracle\Middleware\wlserver_10.3\server -
Dweblogic.home=C:\Oracle\Middleware\wlserver_10.3\server -Djps.app.credential.overwrite.allowed=true
-Dcommon.components.home=C:\Oracle\Middleware\oracle_common -Djrf.version=11.1.1 -
Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -
Ddomain.home=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\DefaultDomain -
Djrockit.optfile=C:\Oracle\Middleware\oracle_common\modules\oracle.jrf_11.1.1\jrocket_optfile.txt -
Doracle.server.config.dir=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\Defaul
tDomain\config\fmwconfig\servers\DefaultServer -
Doracle.domain.config.dir=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\Defaul
tDomain\config\fmwconfig -
Digf.arisidbeans.carmlloc=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\Defaul
tDomain\config\fmwconfig\carml -
Digf.arisidstack.home=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\DefaultDom
ain\config\fmwconfig\arisidprovider -
Doracle.security.jps.config=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\Defa
ultDomain\config\fmwconfig\jps-config.xml -
Doracle.deployed.app.dir=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\Default
Domain\servers\DefaultServer\tmp\_WL_user -Doracle.deployed.app.ext=\- -
Dweblogic.alternateTypesDirectory=C:\Oracle\Middleware\oracle_common\modules\oracle.ossoiap_11.1.1,C:
\Oracle\Middleware\oracle_common\modules\oracle.oamprovider_11.1.1 -
Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.jdbc.remoteEnabled=false -
Dwsm.repository.path=C:\Users\inatar\AppData\Roaming\JDeveloper\system11.1.2.4.39.64.36.1\DefaultDoma
in\oracle\store\gmds -Dweblogic.management.discover=true -Dwlw.iterativeDev= - Dwlw.testConsole= -
Dwlw.logErrorsToConsole= -
Dweblogic.ext.dirs=C:\Oracle\Middleware\patch_wls1035\profiles\default\sysext_manifest_classpath;C:\O
racle\Middleware\patch_jdev1112\profiles\default\sysext_manifest_classpath weblogic.Server
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Unrecognized option: -jrockit
Process exited.
Unrecognized option: -jrockit followed by Process exited. is the critical error: -jrockit is not an option available to the standard Java JDK.
e.g. C:\PROGRA~1\Java\jdk1.7.0_67\bin\java -jrockit is the issue.
Your JAVA_VENDOR is being set incorrectly in your setDomainEnv.cmd or startWebLogic.cmd script. It's being set to Oracle but should be set to Sun. You either need to figure out how that environment variable is being set incorrectly, or manually fix one of the two .cmd files.
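A minimal sketch of that manual fix, assuming the variable is assigned in setDomainEnv.cmd (adjust to wherever your scripts actually set it):
rem Use the HotSpot JDK code path so the -jrockit flag is never added
set JAVA_VENDOR=Sun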
My last post was bad!
Here is the unwrapped set of lines I added:
echo "==> Force JAVA_VM to use -server:"
JAVA_VM=-server
export JAVA_VM
echo "."
echo "."
echo "JAVA Machine: ${JAVA_VM}"
This issue is resolved for me; I just wanted to post the solution somewhere...
I was working through exercise 1 on the NoSQL intro: http://www.oracle.com/technetwork/topics/bigdata/articles/intro-to-oracle-nosql-db-hol-1937059.pdf
I tried to deploy the Movieplex application to JDeveloper's internal WebLogic server but kept getting hit with a "-jrockit VM not found" error. This was difficult for me to get fixed because: 1 - I am not a heavy Java programmer, and 2 - I apparently do not read very well. As for point 1, JRockit is an optimized special Java VM that some folks use. You can look that up online I guess, but it looks like we do not need it for this tutorial. As for point 2, I spent hours (I ashamedly admit) trying to figure out which .sh scripts were being run to start the WebLogic server. I ran all sorts of find commands and hunted through the binaries and folders underneath the Middleware folders. In the end, all I had to do was read the output log carefully; the script for me is /home/oracle/.jdeveloper/system11.1.1.6.38.61.92/DefaultDomain/bin/startWebLogic.sh.
I edited the script, replacing line 129 with the lines below:
echo "==> Force JAVA_VM to use -server:"
JAVA_VM=-server
export JAVA_VM
echo "."
echo "."
echo "JAVA Machine: ${JAVA_VM}"
NOTE: there is another shell script that may be involved: /home/oracle/.jdeveloper/system11.1.1.6.38.61.92/Default/init-info/Domain

ZooKeeper error: connection loss exception

I'm running a SeqWare VM on an Amazon EC2 instance, and I'm trying to use the SeqWare query engine to query data from VCF files. When I first launch the instance and follow the instructions to import data, it works fine, and continues to work until I stop the instance. When I restart it, it won't let me import anything, nor create a new workspace; it always returns the error below. I looked at the processes and found that none of the required nodes were running, so I logged in as root, went to the /etc/init.d directory, and started everything again, at which point, when I try to import data, I don't even get an error and I have to stop the process.
[seqware#master target]$ java -classpath seqware-distribution-0.13.6.7-qe-full.jar com.github.seqware.queryengine.system.importers.SOFeatureImporter -i ../../seqware-queryengine/src/test/resources/com/github/seqware/queryengine/system/FeatureImporter/consequences_annotated.vcf ALL.chr3.phase1_release_v3.20101123.snps_indels_svs.genotypes.3_100001-101000.vcf -o keyValueVCF.out -r hg_19 -s c111aea5-5e18-4c62-a8a7-ec82fe151301 -a ad_hoc -w VCFVariantImportWorker
[SeqWare Query Engine] 0 [main] ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - ZooKeeper exists failed after 3 retries
[SeqWare Query Engine] 1 [main] ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:226)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:580)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:569)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:186)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:100)
at com.github.seqware.queryengine.impl.HBaseStorage.<init>(HBaseStorage.java:89)
at com.github.seqware.queryengine.factory.SWQEFactory$Storage_Type$3.buildStorage(SWQEFactory.java:109)
at com.github.seqware.queryengine.factory.SWQEFactory.getStorage(SWQEFactory.java:174)
at com.github.seqware.queryengine.factory.SWQEFactory.getQueryInterface(SWQEFactory.java:199)
at com.github.seqware.queryengine.impl.SimpleModelManager.<init>(SimpleModelManager.java:49)
at com.github.seqware.queryengine.impl.HBaseModelManager.<init>(HBaseModelManager.java:36)
at com.github.seqware.queryengine.impl.MRHBaseModelManager.<init>(MRHBaseModelManager.java:32)
at com.github.seqware.queryengine.factory.SWQEFactory.getModelManager(SWQEFactory.java:211)
at com.github.seqware.queryengine.system.importers.FeatureImporter.performImport(FeatureImporter.java:66)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.runMain(SOFeatureImporter.java:141)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.main(SOFeatureImporter.java:60)
[SeqWare Query Engine] 3 [main] FATAL org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Unexpected exception during initialization, aborting
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:226)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:580)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:569)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:186)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:100)
at com.github.seqware.queryengine.impl.HBaseStorage.<init>(HBaseStorage.java:89)
at com.github.seqware.queryengine.factory.SWQEFactory$Storage_Type$3.buildStorage(SWQEFactory.java:109)
at com.github.seqware.queryengine.factory.SWQEFactory.getStorage(SWQEFactory.java:174)
at com.github.seqware.queryengine.factory.SWQEFactory.getQueryInterface(SWQEFactory.java:199)
at com.github.seqware.queryengine.impl.SimpleModelManager.<init>(SimpleModelManager.java:49)
at com.github.seqware.queryengine.impl.HBaseModelManager.<init>(HBaseModelManager.java:36)
at com.github.seqware.queryengine.impl.MRHBaseModelManager.<init>(MRHBaseModelManager.java:32)
at com.github.seqware.queryengine.factory.SWQEFactory.getModelManager(SWQEFactory.java:211)
at com.github.seqware.queryengine.system.importers.FeatureImporter.performImport(FeatureImporter.java:66)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.runMain(SOFeatureImporter.java:141)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.main(SOFeatureImporter.java:60)
I figured it out. The Apache services were installed from the Cloudera package. They weren't being restarted when the instance was restarted, and apparently just running their scripts from /etc/init.d was the incorrect way to do it. I found the commands to restart them in the Cloudera documentation.
I too faced this problem. I was able to solve it by providing the jute.maxbuffer parameter while starting ZooKeeper.
For more info you can refer to
https://issues.apache.org/jira/browse/SOLR-4793
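A sketch of passing that parameter, assuming a standard ZooKeeper install where zkEnv.sh picks up JVMFLAGS; the value is in bytes and must be set consistently on both servers and clients:
# Raise the maximum request/response buffer size (default is about 1 MB)
export JVMFLAGS="-Djute.maxbuffer=4194304"
bin/zkServer.sh restart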