Microsoft R Server rxSpark Execution - microsoft-r

rxRemoteHadoopMRCall()
====== xxxx.cloudapp.azure.com (Master HPA Process) has started run at Sat Mar 18 08:15:43 2017 ======
17/03/18 03:16:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Warning: libjvm.so not found in /opt/cloudera/parcels/MRS-9.0.1/hadoop, searching system-wide
Internal Error: Cannot reset hdfs internal params while connected to an hdfs file system.
Error in try({ :
Internal Error: Cannot reset hdfs internal params while connected to an hdfs file system.

I wonder if this is a problem with locating the JRE. You can try setting JAVA_HOME for your user ($HOME/.bashrc) or for all users (/etc/environment) on the node you're connecting to. It should point to a location where this path resolves correctly: $JAVA_HOME/jre/lib/amd64/server/libjvm.so
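For example, something along these lines; the JDK path below is only a placeholder, so point it at wherever the JDK actually lives on that node:

# In $HOME/.bashrc (example path, adjust to your JDK install):
export JAVA_HOME=/usr/java/jdk1.8.0_121
# Sanity check that the JVM library resolves:
ls $JAVA_HOME/jre/lib/amd64/server/libjvm.so
# In /etc/environment use a plain assignment instead (no "export"):
# JAVA_HOME=/usr/java/jdk1.8.0_121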

Related

Plain vanilla Apache Ignite cluster fails setting state back to ACTIVE

I've got a plain vanilla install of Ignite 2.14, with the binaries downloaded from https://ignite.apache.org/download.cgi (exact link https://dlcdn.apache.org/ignite/2.14.0/apache-ignite-2.14.0-bin.zip). I'm on Windows 10, IGNITE_HOME is not set (it's optional), and Ignite is using this Java runtime:
OpenJDK Runtime Environment 1.8.0_201-2-redhat-b09 Oracle Corporation
OpenJDK 64-Bit Server VM 25.201-b09
I start an Ignite node using the default configuration provided in the downloaded zip apache-ignite-2.14.0-bin.zip:
ignite.bat ..\config\default-config.xml
This starts fine. Following the instructions at https://ignite.apache.org/docs/latest/tools/control-script I can check the state and see I've got a single node cluster in state ACTIVE (the default-config.xml must not have native persistence enabled, so the cluster goes to ACTIVE state automatically).
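The state check itself is just the control script's --state command, run here against the default localhost connector that a fresh install uses:
control.bat --state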
I can then set the state to INACTIVE like so:
control.bat --set-state INACTIVE
This works fine. However if I set the state to active again like so:
control.bat --set-state ACTIVE
I get the error pasted below and the cluster stays in the INACTIVE state. I first came across this error when using Ignite in embedded server mode, but I can still reproduce it with a fresh out-of-the-box ignite install (not using embedded). I'm surprised that a plain vanilla install just calling a couple of basic commands falls over like this. Any idea what's happening?
This is the error:
C:\temp\apache-ignite-2.14.0-bin\bin>control.bat --set-state ACTIVE
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection
INFO: Client TCP connection established: /127.0.0.1:11211
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /127.0.0.1:11211
Nov 17, 2022 4:27:17 PM org.apache.ignite.internal.client.util.GridClientUtils shutdownNow
WARNING: Runnable tasks outlived thread pool executor service [owner=GridClientConnectionManager, tasks=[java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#6d7b4f4c]]
Control utility [ver. 2.14.0#20220929-sha1:951e8deb]
2022 Copyright(C) Apache Software Foundation
User: info
Time: 2022-11-17T16:27:16.344
null suppressed:
Command [SET-STATE] finished with code: 4
Error stack trace: class org.apache.ignite.internal.client.GridClientException: null
suppressed:
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.handleClientResponse(GridClientNioTcpConnection.java:632)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.handleResponse(GridClientNioTcpConnection.java:563)
at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onMessage(GridClientConnectionManagerAdapter.java:691)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:116)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3734)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1211)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2508)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2273)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at java.lang.Thread.run(Thread.java:748)
Control utility has completed execution at: 2022-11-17T16:27:17.642
Execution time: 1298 ms
Press any key to continue . . .
It's a known issue, which is, unfortunately, not fixed yet.
As a workaround, you can execute the command with the autoconfirmation flag --yes, as shown below:
control.bat --set-state ACTIVE --yes

GraphDB Workbench would not start

After a VM shutdown, GraphDB Workbench would not start.
I have installed GraphDB on a cloud-hosted VM. The machine was accidentally shut down without stopping GraphDB. When trying to start it again, the Workbench would not start and the following message is displayed in the error log.
[ERROR] 2019-06-19 12:12:00,299 [Thread-10 | c.o.t.s.i.PluginManager] Problem shutting down literals-index
java.lang.RuntimeException: com.ontotext.trree.transactions.TransactionException: Failed to created journal file: /home/peio/graphdb-se-8.10.1/data/repositories/bgnews/storage/literals-index/numerics.index.precommit
As Damyan suggested, delete the literals-index directory in the storage folder; it will be rebuilt on start-up.
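For example, something like the following, with the path taken from the error message above; stop GraphDB first and adjust the repository path to your own installation:

# Remove only the literals-index directory of the affected repository;
# it is rebuilt automatically on the next start-up.
rm -rf /home/peio/graphdb-se-8.10.1/data/repositories/bgnews/storage/literals-index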

Live Migration Failure: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory

I have a 2 node OpenStack Mitaka environment consisting of a controller/compute node and a compute node.
I've followed the setup guide to enable instance live migration using LVM block storage, i.e. there is no shared storage backend, just local LVM block storage.
When I perform the live migration from OpenStack Horizon, a success message is displayed; however, the migration is far from successful. This worked pretty much out of the box with our Juno installation. I've exhausted Google and cannot find any other reports of people facing the same problem. I thought it might have been a time synchronisation problem, so I have set both nodes to UTC. Still the problem persists.
Source machine /var/log/nova/nova-compute.log
2016-08-12 15:56:42.120 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Migration operation has aborted
2016-08-12 15:56:42.470 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Live Migration failure: internal error: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory
Target node /var/log/libvirt/libvirtd.log
2016-08-12 15:56:41.864+0000: 2170: error : qemuMonitorJSONGetMigrationStatsReply:2443 : internal error: info migration reply was missing return status
2016-08-12 15:56:41.864+0000: 2170: error : virNetClientProgramDispatchError:177 : Cannot open log file: '/var/log/libvirt/qemu/instance-0000006a.log': Device or resource busy
There are no other events captured in the source or target nova or libvirt logs.
I should also note that I am trying to use qemu+tcp (libvirt listening enabled, default TCP port, no auth) rather than qemu+ssh in order to keep things simple while testing. In fact, I intend to use only qemu+tcp anyway.
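For reference, this is roughly what that listening setup looks like on the compute nodes (a sketch only, and only sensible on an isolated test network since authentication is disabled):

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
# /etc/default/libvirt-bin, so libvirtd is actually started with --listen:
libvirtd_opts="-d -l"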
Which version of Ubuntu did you deploy?
I had the same error with Ubuntu 14.04 and the Mitaka release, and I figured out that the default kernel (3.13) causes this problem.
I upgraded the kernel from 3.13 to 4.4 and the problem is gone now.
I hope my experience helps you solve this problem.
Thanks
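If it helps, the upgrade itself is roughly this on Ubuntu 14.04; the package name assumes the standard 14.04 HWE kernel series that provides a 4.4 kernel:

sudo apt-get update
sudo apt-get install --install-recommends linux-generic-lts-xenial
sudo reboot
uname -r   # should now report a 4.4.x kernel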

Glassfish 3.1.1 start-local-instance fails with JAXBException

I am setting up a GlassFish cluster following the guide at http://javadude.wordpress.com/2011/04/25/glassfish-3-1-clustering-tutorial/. I started from fresh installs of GlassFish 3.1.1. I also have the same architecture as in the guide: two nodes with one instance each. The DAS is on node1.
I've tried starting from scratch several times and am able to create the cluster, nodes and instances without issue. I also have the DAS communicating with node2 via SSH. However, each time when I get to the point where I attempt to start instance2 it fails:
$ ./asadmin start-local-instance --node node1 --sync normal instance2
Previous synchronization failed at Feb 23, 2012 2:41:53 PM
Will perform full synchronization.
Removing all cached state for instance instance2.
CLI802 Synchronization failed for directory config, caused by:
javax.xml.bind.JAXBException
- with linked exception:
[java.lang.ClassNotFoundException: com.sun.xml.bind.v2.ContextFactory]
Command start-local-instance failed.
I spent the day Googling and searching GlassFish's Jira, but couldn't find a solution to this issue. I'd very much appreciate any ideas you have on how to solve this problem.
My operating system is CentOS 5.7 and my Java version is 1.6.0_20.
Unfortunately, my instance directory is empty (I assume because the instance never started), so there is no log file. I set AS_DEBUG=true, but it gives no stack trace. The last debug lines before the error are:
Removing all cached state for instance instance2.
Removing: /usr/local/glassfish3_1_1/glassfish/nodes/blade-50/instance2/config
Removing: /usr/local/glassfish3_1_1/glassfish/nodes/blade-50/instance2/applications
Removing: /usr/local/glassfish3_1_1/glassfish/nodes/blade-50/instance2/generated
Removing: /usr/local/glassfish3_1_1/glassfish/nodes/blade-50/instance2/lib
Removing: /usr/local/glassfish3_1_1/glassfish/nodes/blade-50/instance2/docroot
Got exception: javax.xml.bind.JAXBException
Acting on a tip from a user in the GlassFish forum, I learned that Java 1.6.0_20 is an older release that is not supported by GlassFish 3.1.1. I worked with a sysadmin to get Java 1.6.0_31 installed on both nodes of the cluster, and that did the trick: both instances now start up without errors.
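In case it helps someone else, this is roughly what that looks like on each node; the JDK install path is only an example, and pinning GlassFish to a specific JDK via AS_JAVA is optional if the new Java is already first on the PATH:

# Verify the JDK now in use on the node:
java -version
# Optionally pin GlassFish to the new JDK in
# <glassfish-install>/glassfish/config/asenv.conf (example path):
AS_JAVA="/usr/java/jdk1.6.0_31"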

Java 1.5 crash on AIX 5

I have a problem with Java 1.5.0 for AIX. The error happens only when I log on with a specific user on AIX (myuser). When I log on with any other user, Java works fine.
The error comes up even when I execute just "java -version" or simply "java" (without quotes, of course). I've tried executing it with the full path /usr/java5/jre/bin/java, but it still fails.
Java 1.4 was also installed on the system, so the user's $PATH contained /usr/java14/jre/bin. I removed that entry and even uninstalled Java 1.4 so that only Java 5 exists on the system, but the error continues.
If I execute "java -fullversion" it doesn't crash.
This is part of the error (the full output is very long):
JVMJ9VM011W Unable to load j9dmp23: No such file or directory
JVMJ9VM011W Unable to load j9jit23: No such file or directory
JVMJ9VM011W Unable to load j9gc23: No such file or directory
JVMJ9VM011W Unable to load j9vrb23: No such file or directory
Unhandled exception
Type=Illegal instruction vmState=0x00000000
J9Generic_Signal_Number=00000010 Signal_Number=00000004 Error_Value=00000000
Signal_Code=0000001e
Handler1=F0719CC8 Handler2=F0714F5C
.....
Target=2_30_20091103_45935_bHdSMr (AIX 5.3)
CPU=ppc (4 logical CPUs) (0x7d0000000 RAM)
JavaVMInitArgs.nOptions=14:
-Xjcl:jclscar_23
-Dcom.ibm.oti.vm.bootstrap.library.path=/usr/java5/jre/bin
-Dsun.boot.library.path=/usr/java5/jre/bin
-Djava.library.path=/usr/java5/jre/bin:/usr/java5/jre/bin:/usr/java5/jre/bin/classic:/usr/java5/jre/bin:/sqllib/lib:/home/myuser/comm:/home/myuser/sys:/home/myuser/bin:/db2util/db2adm/sqllib/lib64:/usr/java5/jre/bin/j9vm:/usr/lib
-Djava.home=/usr/java5/jre
-Djava.ext.dirs=/usr/java5/jre/lib/ext
-Duser.dir=/home/myuser
_j2se_j9=70912 (extra info: F070EA2C)
-Xdump
vfprintf (extra info: 300017A4)
-Dinvokedviajava
-Djava.class.path=/db2util/db2adm/sqllib/java/db2java.zip:/db2util/db2adm/sqllib/java/db2jcc.jar:/db2util/db2adm/sqllib/java/sqlj.zip:/db2util/db2adm/sqllib/function:/db2util/db2adm/sqllib/java/db2jcc_license_cu.jar:.
vfprintf
_port_library (extra info: F070EE30)
Note: "Enable full CORE dump" in smit is set to FALSE and as a result there will be limited threading information in core file.
Note: dump may be truncated if "ulimit -c" is set too low
Generated system dump: {default OS core name}
(no Thread object associated with thread)
(no Thread object associated with thread)
Unhandled exception in signal handler
ksh: 2179192 IOT/Abort trap(coredump)
I found the error. The problem was a line in the .profile that sets the LIBPATH environment variable:
export LIBPATH=/home/myuser/sys
I deleted that line from the .profile and Java worked.
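If the programs run under myuser genuinely need that directory on LIBPATH, a possible alternative to deleting the line (untested here) would be to also include the JRE's library directories, so the JVM can still locate j9dmp23, j9jit23 and the other libraries it complains about:

# Keep the custom directory but don't hide the JVM's own libraries:
export LIBPATH=/usr/java5/jre/bin:/usr/java5/jre/bin/j9vm:/home/myuser/sys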