SLF4j multiple bindings exception - apache

[screenshot of the SLF4J error omitted]
I am trying to stream real-time Twitter data into HDFS using Apache Flume. When I run the command
./flume-ng agent -c /usr/local/apache-flume-1.4.0-bin/conf/ -f /usr/local/apache-flume-1.4.0-bin/conf/flume.conf -n TwitterAgent
it throws an SLF4J exception and does not allow the program to run any further. Any suggestions for solving this issue would be of immense help.
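An SLF4J "multiple bindings" message generally means that more than one jar on the classpath provides org/slf4j/impl/StaticLoggerBinder.class (for example, one binding shipped with Flume and another pulled in from Hadoop). As a rough diagnostic sketch (the Flume path is the one from the question; check your Hadoop lib directories the same way):

for jar in /usr/local/apache-flume-1.4.0-bin/lib/*.jar; do
  # List each jar's contents and report the ones that contain an SLF4J binding.
  if unzip -l "$jar" 2>/dev/null | grep -q "org/slf4j/impl/StaticLoggerBinder.class"; then
    echo "SLF4J binding found in: $jar"
  fi
done

Keeping only one of the reported bindings on the classpath (for instance by moving the extra slf4j-log4j12 jar out of the lib directory) is the usual way to clear this conflict.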

Related

How to fix LevelDB library load error when running RSKj node on a Windows machine?

I am trying to run RSK blockchain node RSKj on a Windows machine. When I run this line in a terminal:
C:\Users\yemode> java -cp C:\Users\yemode\Downloads\Programs\rskj-core-3.0.1-IRIS-all.jar co.rsk.Start
the RSKj node starts running, but I get the following error:
Cannot load secp256k1 native library: java.lang.Exception: No native library is found for os.name=Windows and os.arch=x86. path=/org/bitcoin/native/Windows/x86
Exception in thread "main" java.lang.RuntimeException: Can't initialize database
at org.ethereum.datasource.LevelDbDataSource.init(LevelDbDataSource.java:110)
at org.ethereum.datasource.LevelDbDataSource.makeDataSource(LevelDbDataSource.java:70)
at co.rsk.RskContext.buildTrieStore(RskContext.java:1015)
at co.rsk.RskContext.buildAbstractTrieStore(RskContext.java:935)
at co.rsk.RskContext.getTrieStore(RskContext.java:416)
at co.rsk.RskContext.buildRepositoryLocator(RskContext.java:1057)
at co.rsk.RskContext.getRepositoryLocator(RskContext.java:384)
at co.rsk.RskContext.getTransactionPool(RskContext.java:353)
at co.rsk.RskContext.buildInternalServices(RskContext.java:829)
at co.rsk.RskContext.buildNodeRunner(RskContext.java:821)
at co.rsk.RskContext.getNodeRunner(RskContext.java:302)
at co.rsk.Start.main(Start.java:34)
What could be the problem here?
This is actually a warning, not an error, though it may seem like the latter. This means that on your OS and architecture, that particular library does not exist, so it falls back to a different implementation (using a non-native library). In this case, the block verification is slower, but otherwise RSKj should continue to function properly.
Something that might help you to overcome the “slowness” of the initial sync is the --import flag. See the reference in the CLI docs for RSKj.
You can also send an RPC request to check that your node is running OK. Run the following curl command in your terminal:
curl \
-X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":67}' \
http://localhost:4444
The response should be similar to this one
{"jsonrpc":"2.0","id":67,"result":"0x2b12"}
where result is the latest block number, in hexadecimal.
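If you want that block number in decimal, the hex result can be converted in the shell, for example:

printf '%d\n' 0x2b12   # prints 11026, the decimal block height for the example response above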

LLVM profiling on child process

I want to extract execution traces (e.g., visited basic blocks) when testing the Apache server (httpd). Since my work is based on the LLVM infrastructure, I chose to use Clang's instrumentation-based profiling as follows:
clang -fprofile-instr-generate ${options to compile httpd} -o httpd
export LLVM_PROFILE_FILE=code-%p.profraw
sudo -E ./httpd -k start # output a .profraw
curl ${url} # send a request
sudo -E ./httpd -k stop # output another .profraw
The compilation of instrumented httpd works well.
However, I want to track httpd's request handling, which is executed in a separate child process. The output .profraw does not record any execution from the child processes, so I can only see the execution traces for starting and stopping the server. How can I get a .profraw that includes the request handling?
I'm not restricted to Clang's instrumentation profiling; any solution compatible with LLVM would be great. Thanks!
Update
From the logs, it turns out that the child process, whose owner is "daemon", has no write permission to the file:
LLVM Profile Error: Failed to write file "code-94752.profraw": Permission denied
Problem solved
The problem is a collision of profile file names. httpd -k start creates multiple child processes as workers, and with LLVM_PROFILE_FILE=code-%p.profraw they all end up with the same file name, since the workers are forked from the main process and inherit the name that was already resolved there. The main process, owned by root, creates the profile file first, and the later processes owned by daemon then cannot write to that file.
Solution: Use LLVM_PROFILE_FILE=code-%9m.profraw (%Nm instead of %p) to avoid name collisions.
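Once each worker writes its own .profraw, the per-process files can be merged and inspected with the standard LLVM tools, for example:

llvm-profdata merge -sparse code-*.profraw -o code.profdata   # combine all raw profiles into one indexed profile
llvm-profdata show -all-functions code.profdata               # inspect the merged counters (llvm-cov also works if httpd was built with -fcoverage-mapping)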

Flink job started from another program on YARN fails with "JobClientActor seems to have died"

I'm a new Flink user and I have the following problem.
I use Flink on a YARN cluster to transfer related data extracted from an RDBMS to HBase.
I wrote a Flink batch application in Java with multiple ExecutionEnvironments (one per RDB table, to transfer the rows of a table in parallel) that transfers the tables sequentially, one by one (because the call to env.execute() is blocking).
I start the YARN session like this:
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/yarn-session.sh -n 1 -s 4 -d -jm 2048 -tm 8096
Then I run my application on the started YARN session via a shell script, transfer.sh. Its content is:
#!/bin/bash
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar
When I start this script manually from the command line, it works fine: jobs are submitted to the YARN session one by one without errors.
Now I need to be able to run this script from another Java program.
For this I use
Runtime.exec("transfer.sh");
(Maybe there are better ways to do this? I have looked at the REST API, but there are some difficulties because the job manager is proxied by YARN.)
At the beginning it works as usual: the first several jobs are submitted to the session and finish successfully. But the following jobs are not submitted to the YARN session.
In /opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log I see this error (and no other errors are found at DEBUG level):
The program execution failed: JobClientActor seems to have died before the JobExecutionResult could be retrieved.
I tried to analyse this problem myself and found out that the error occurs in the JobClient class while sending a ping request with a timeout to the JobClientActor (i.e. the YARN cluster).
I tried increasing several heartbeat and timeout options such as akka.*.timeout, akka.watch.heartbeat.* and yarn.heartbeat-delay, but it doesn't solve the problem: new jobs are still not submitted to the YARN session from CliFrontend.
The environment for both cases (manual call and call from another program) is the same. When I call
$ ps axu | grep transfer
it gives me this output:
/usr/lib/jvm/java-8-oracle/bin/java -Dlog.file=/opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log -Dlog4j.configuration=file:/opt/flink-1.3.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/opt/flink-1.3.1/conf/logback.xml -classpath /opt/flink-1.3.1/lib/flink-metrics-graphite-1.3.1.jar:/opt/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/opt/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/opt/flink-1.3.1/lib/log4j-1.2.17.jar:/opt/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::/etc/hadoop/conf org.apache.flink.client.CliFrontend run -p 4 transfer.jar
I also tried updating Flink to the 1.4.0 release and changing the parallelism of the job (even to -p 1), but the error still occurs.
I have no idea what could be different. Is there any workaround, by the way?
Thank you for any help.
Finally I found out how to resolve that error.
Just replace Runtime.exec(...) with new ProcessBuilder(...).inheritIO().start().
I don't know for sure why the call of inheritIO helps here; as I understand it, it just redirects the child process's IO streams to the parent process (one likely explanation is that with Runtime.exec the child's stdout/stderr pipes are never drained, so the child blocks once a pipe buffer fills up, while inheritIO writes directly to the parent's streams).
But I have checked that if I comment out this line of code, the program starts to fail again.
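For reference, a minimal Java sketch of that replacement (script name as in the question, error handling kept to a bare minimum):

import java.io.IOException;

public class RunTransfer {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch transfer.sh with stdout/stderr attached to this JVM's own streams,
        // so the child never blocks on an undrained output pipe.
        Process process = new ProcessBuilder("transfer.sh")
                .inheritIO()
                .start();
        int exitCode = process.waitFor();
        System.out.println("transfer.sh finished with exit code " + exitCode);
    }
}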

Hive Server 2 Hangs On Start / Won't Start

I am trying to start hiveserver2 by going to the bin folder of my Hive install and typing hiveserver2. However, nothing happens: it just hangs there, and when I check whether anything is listening on the Hive ports (the web interface on 10002, for example) there is nothing, nor anything in netstat.
Initially I had errors about SLF4J:
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
But I solved that by moving /usr/lib/hive/lib/log4j-slf4j-impl-2.4.1.jar into the /tmp directory.
Now when I run bin/hiveserver2 it hangs for quite a while, with no output, and then just returns to the command line - and the Hive server isn't started. I'm struggling to find any logs either.
There is a chance that hiveserver2 is actually running but not printing any information. Try starting it with the logging argument attached so that it prints its running log to the console.
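For example (flag names per the standard Hive logging configuration; adjust to your install):

# Start HiveServer2 in the foreground with its root logger directed to the console,
# so startup errors become visible instead of the process silently hanging.
hive --service hiveserver2 --hiveconf hive.root.logger=INFO,console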

Couldn't run loadjava on user schema to load dbwsclientws.jar dbwsclientdb102.jar

I am trying to load the Oracle web service client jars into my schema. I set the PATH to include:
/u01/app/oracle/product/10.2.0/db_1/bin
When I try to run loadjava as
loadjava -u myschema/myscehmapwd -r -v -f -genmissing dbwsclientws.jar dbwsclientdb102.jar
I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: oracle/aurora/server/tools/loadjava/LoadJavaMain
Does it mean that the JVM is not set up on the box? How can I check whether the JVM is enabled?
I am running it on Oracle 10g in a UNIX environment.
Any help with the issue is greatly appreciated.
Look at the classpath described in the documentation to ensure that you have all of the files it mentions on your classpath, such as $ORACLE_HOME/javavm/lib/aurora.zip.
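As an illustrative sketch only (whether loadjava picks up CLASSPATH depends on the wrapper script shipped with your Oracle version, and the schema credentials below are placeholders):

# Put the loadjava tool classes (aurora.zip) on the classpath, then retry loadjava.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export CLASSPATH=$ORACLE_HOME/javavm/lib/aurora.zip:$CLASSPATH
loadjava -u myschema/myschemapwd -r -v -f -genmissing dbwsclientws.jar dbwsclientdb102.jar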