How to fix LevelDB library load error when running RSKj node on a Windows machine? - rsk

I am trying to run RSK blockchain node RSKj on a Windows machine. When I run this line in a terminal:
C:\Users\yemode> java -cp C:\Users\yemode\Downloads\Programs\rskj-core-3.0.1-IRIS-all.jar co.rsk.Start
the RSKj node starts running, but I get the following error:
Cannot load secp256k1 native library: java.lang.Exception: No native library is found for os.name=Windows and os.arch=x86. path=/org/bitcoin/native/Windows/x86
Exception in thread "main" java.lang.RuntimeException: Can't initialize database
at org.ethereum.datasource.LevelDbDataSource.init(LevelDbDataSource.java:110)
at org.ethereum.datasource.LevelDbDataSource.makeDataSource(LevelDbDataSource.java:70)
at co.rsk.RskContext.buildTrieStore(RskContext.java:1015)
at co.rsk.RskContext.buildAbstractTrieStore(RskContext.java:935)
at co.rsk.RskContext.getTrieStore(RskContext.java:416)
at co.rsk.RskContext.buildRepositoryLocator(RskContext.java:1057)
at co.rsk.RskContext.getRepositoryLocator(RskContext.java:384)
at co.rsk.RskContext.getTransactionPool(RskContext.java:353)
at co.rsk.RskContext.buildInternalServices(RskContext.java:829)
at co.rsk.RskContext.buildNodeRunner(RskContext.java:821)
at co.rsk.RskContext.getNodeRunner(RskContext.java:302)
at co.rsk.Start.main(Start.java:34)
What could be the problem here?

This is actually a warning, not an error, even though it may look like one. It means that no native secp256k1 library exists for your OS and architecture, so RSKj falls back to a different (non-native) implementation. In this case block verification is slower, but otherwise RSKj should continue to function properly.
Something that might help you to overcome the "slowness" of the initial sync is the --import flag. See the reference in the CLI docs for RSKj.
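For example, assuming the same jar as above, the flag is appended to the start command (check the CLI docs referenced above for the exact usage and supported networks):
java -cp C:\Users\yemode\Downloads\Programs\rskj-core-3.0.1-IRIS-all.jar co.rsk.Start --import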
You can also send a JSON-RPC request to check that your node is running OK. Run the following curl command in your terminal:
curl \
-X POST \
-H "Content-Type:application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":67}' \
http://localhost:4444
The response should be similar to this one:
{"jsonrpc":"2.0","id":67,"result":"0x2b12"}
where result is your latest block number.
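The result is hex-encoded; to read it as a decimal block number you can convert it in the shell, for example:
printf '%d\n' 0x2b12
# prints 11026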

Related

LLVM profiling on child process

I want to extract execution traces (e.g., visited basic blocks) when testing the Apache server (httpd). Since my work is based on the LLVM infrastructure, I chose to use clang instrumentation-based profiling as follows:
clang -fprofile-instr-generate ${options to compile httpd} -o httpd
export LLVM_PROFILE_FILE=code-%p.profraw
sudo -E ./httpd -k start # output a .profraw
curl ${url} # send a request
sudo -E ./httpd -k stop # output another .profraw
The compilation of instrumented httpd works well.
However, I want to track httpd's request handling, which is executed in a separate child process. The output .profraw does not record any execution from the child processes, so I can only access the execution traces of starting and stopping the server. How can I get a .profraw that includes the request handling?
I'm not restricted to clang profiling; any solution compatible with LLVM is great. Thanks!
Update
From the logs, it turns out that the child process, whose owner is "daemon", has no write permission to the file:
LLVM Profile Error: Failed to write file "code-94752.profraw": Permission denied
Problem solved
The problem is a collision of profile file names. httpd -k start creates multiple child processes as workers, and with LLVM_PROFILE_FILE=code-%p.profraw their %p expands to the same value, so they all write to the same file name. The main process, owned by root, creates the profile file first, so the later processes, owned by daemon, cannot write to it.
Solution: Use LLVM_PROFILE_FILE=code-%9m.profraw (%Nm instead of %p) to avoid name collisions.
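Once each process writes its own .profraw, the files still need to be merged before reporting; with the standard LLVM tools that step looks roughly like this (paths are illustrative):
llvm-profdata merge -o code.profdata code-*.profraw
llvm-cov report ./httpd -instr-profile=code.profdata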

Flink job started from another program on YARN fails with "JobClientActor seems to have died"

I'm a new Flink user and I have the following problem.
I use Flink on a YARN cluster to transfer related data extracted from an RDBMS to HBase.
I write my Flink batch application in Java with multiple ExecutionEnvironments (one per RDB table, to transfer the table rows in parallel) and transfer the tables one by one sequentially (because the call to env.execute() is blocking).
I start the YARN session like this:
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/yarn-session.sh -n 1 -s 4 -d -jm 2048 -tm 8096
Then I run my application on the started YARN session via a shell script transfer.sh. Its content is:
#!/bin/bash
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar
When I start this script from the command line manually, it works fine: jobs are submitted to the YARN session one by one without errors.
Now I need to run this script from another Java program.
For this I use
Runtime.exec("transfer.sh");
(Maybe there are better ways to do this? I have looked at the REST API, but there are some difficulties because the job manager is proxied by YARN.)
At the beginning it works as usual: the first several jobs are submitted to the session and finish successfully. But the following jobs are not submitted to the YARN session.
In /opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log I see the following error (and no other errors at DEBUG level):
The program execution failed: JobClientActor seems to have died before the JobExecutionResult could be retrieved.
I have tried to analyse this problem myself and found out that the error occurs in the JobClient class while sending a ping request with a timeout to the JobClientActor (i.e. the YARN cluster).
I tried increasing several heartbeat and timeout options, such as akka.*.timeout, akka.watch.heartbeat.* and yarn.heartbeat-delay, but it does not solve the problem: new jobs are still not submitted to the YARN session from CliFrontend.
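For reference, these are the kinds of entries I was changing in flink-conf.yaml (the values below are only illustrative, not the exact ones I tried):
akka.ask.timeout: 60 s
akka.client.timeout: 120 s
akka.watch.heartbeat.interval: 30 s
akka.watch.heartbeat.pause: 120 s
yarn.heartbeat-delay: 10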
The environment for both cases (manual call and call from another program) is the same. When I run
$ ps axu | grep transfer
it gives me this output:
/usr/lib/jvm/java-8-oracle/bin/java -Dlog.file=/opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log -Dlog4j.configuration=file:/opt/flink-1.3.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/opt/flink-1.3.1/conf/logback.xml -classpath /opt/flink-1.3.1/lib/flink-metrics-graphite-1.3.1.jar:/opt/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/opt/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/opt/flink-1.3.1/lib/log4j-1.2.17.jar:/opt/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::/etc/hadoop/conf org.apache.flink.client.CliFrontend run -p 4 transfer.jar
I also tried updating Flink to the 1.4.0 release and changing the parallelism of the job (even to -p 1), but the error still occurs.
I have no idea what could be different. Is there any workaround, by the way?
Thank you for any help.
Finally I found out how to resolve that error.
Just replace Runtime.exec(...) with new ProcessBuilder(...).inheritIO().start().
I really don't know why the call to inheritIO helps in this case, because as I understand it, it just redirects the IO streams from the child process to the parent process.
But I have checked that if I comment that call out, the program starts to fail again.
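For completeness, a minimal sketch of the working launcher (the class name is arbitrary; the script name is the one from the question):
public class TransferLauncher {
    public static void main(String[] args) throws Exception {
        // inheritIO() forwards the child's stdin/stdout/stderr to this JVM's streams,
        // which is the only difference from the earlier Runtime.exec("transfer.sh") call
        Process p = new ProcessBuilder("transfer.sh").inheritIO().start();
        int exitCode = p.waitFor(); // block until the script finishes
        System.out.println("transfer.sh exited with " + exitCode);
    }
}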

Creating network files in SUMO using NETCONVERT

Problem when calling netconvert in sumo:
I am trying to create my own scenario for simulation purposes.
I am using OpenStreetMap for this.
python osmWebWizard.py
opens the browser, where I select and download the area. Then I run:
netconvert --osm-files osm_bbox.osm.xml -o osm.net.xml
The error message I get is
Error: Cannot import network data without PROJ-Library. Please install packages proj before building sumo
Warning: Environment variable SUMO_HOME is not set, using built in type maps.
Quitting (on error).
My attempt to fix the problem is:
sudo apt-get install libproj*
But it seems like a dead end there and I am out of options.
Thank you.
EDIT
I have a gut feeling it has to do with libproj0 not being available anymore.

Publishing an .apk into IBM Application Center from Application Center command-line tools

I'm trying to publish an .apk into my Application Center through console. I've followed this note but it doesn't work in my environment:
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/7.1/moving-production/distributing-mobile-applications-with-application-center/#cmdLineTools
If I type:
./acdeploytool.sh /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I get this error message:
FWLAC0803E: Unable to connect:
Connection refused
Perhaps the server or context is wrongly specified.
File:/home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
And if I try another way using this java command:
java com.ibm.appcenter.Upload -f http://localhost:9080 -c applicationcenter -u demo -p demo /home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
I get this one:
Error: Could not find or load main class com.ibm.appcenter.Upload
I don't get any errors when I do this 'publish' operation directly in Application Center or through MobileFirst Studio.
Miguel, whether you use the script or the Java command, you need to specify the arguments to use. Please try the following:
./acdeploytool.sh -s http://localhost:9080 -c applicationcenter -u demo -p demo /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I tried a similar command in my environment and was able to successfully deploy the apk to Application Center. If the command still does not work, make sure that the host/port that you are using are correct, and that the username and password are valid.
For the Java command that you executed, I see a few problems. First, the -cp argument needs to be specified in order to add the applicationcenterdeploytool.jar and json4j.jar files to the classpath. Next, the command shows "-f", but it should be "-s" to specify the server. Lastly, the path that was specified for the .apk is different than what you specified in the first command: myUser vs. miguel. So make sure that the correct path is used. If there are any further questions, let me know. Thanks.
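Putting those corrections together, the Java variant of the command would look roughly like this (adjust the jar locations to wherever the command-line tools are installed):
java -cp applicationcenterdeploytool.jar:json4j.jar com.ibm.appcenter.Upload -s http://localhost:9080 -c applicationcenter -u demo -p demo /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk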

Scripting LLDB to obtain a stack trace after a crash

I'm trying to add the capability of generating a stack trace from a core dump on Mac automatically when one of our tests crashes.
I was able to do it pretty easily on Linux with
gdb --batch --quiet -ex "thread apply all bt" -ex "quit" <binary> <core file> 2> /dev/null
But I'm having some trouble doing the same on Mac (OS X 10.8) with lldb. To start off, the version of lldb I'm using is lldb-310.2.37.
My initial approach was to use the -s option and pass in a script file like so:
target create -c <core file> <binary>
thread backtrace all
quit
Initially I had some trouble, which I think was caused by a missing newline at the end of the script file that kept lldb from exiting, but after that was fixed, I get the following:
Executing commands in 'lldbSource'.
(lldb) target create -c <core file> <binary>
Core file '<core file>' (x86_64) was loaded.
(lldb) thread backtrace all
error: Aborting reading of commands after command #1: 'thread backtrace all' failed with error: invalid thread
Aborting after_file command execution, command file: 'lldbSource' failed.
The funny thing is, after that happens, we're still running lldb, and issuing 'thread backtrace all' manually works just fine.
So, approach #2 was to create a Python script and use the Python API (I tried this before figuring out that the initial blocker I described was due to a missing newline).
My script:
import lldb
debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget('<binary>')
if target:
    process = target.LoadCore('<core file>')
    if process:
        print process.exit_description
        for thread in process:
            print 'Thread %s:' % str(thread.GetThreadID())
            print '\n'.join(str(frame) for frame in thread)
The problem I'm having with this approach is that process.exit_description is returning None (and so is every other thing I've tried; LLDB's python API documentation is almost completely useless).
The output I'm looking for from that call is something similar to the following:
Process 0 stopped
* thread #1: tid = 0x0000, 0x00007fff8aca4670 libsystem_c.dylib`strlen + 16, stop reason = signal SIGSTOP
frame #0: 0x00007fff8aca4670 libsystem_c.dylib`strlen + 16
libsystem_c.dylib`strlen + 16:
-> 0x7fff8aca4670: pcmpeqb (%rdi), %xmm0
0x7fff8aca4674: andl $0xf, %ecx
0x7fff8aca4677: shll %cl, %eax
0x7fff8aca4679: pmovmskb %xmm0, %ecx
This is output automatically by LLDB proper when loading a core file. I don't necessarily need the assembly dump, but I want at least the thread, frame and reason.
I think the first approach I used, if it could be made to work, would be ideal, but either way is OK for me. I don't have control over the LLDB version that's going to be used unfortunately, so I can't just update to latest and see if it's a bug that was fixed.
Other approaches to get the desired output are also welcome. For context, this is going to be called from a perl script.
The --batch command line argument is supported in the version of lldb that ships with Xcode 7.2 (lldb-340.4.119), and possibly earlier.
It's not documented in the man page, but it is documented in lldb --help:
-b
--batch
Tells the debugger to running the commands from -s, -S, -o & -O,
and then quit. However if any run command stopped due to a signal
or crash, the debugger will return to the interactive prompt at the
place of the crash.
These other command line options are useful for lldb automation:
-o
--one-line
Tells the debugger to execute this one-line lldb command after any
file provided on the command line has been loaded.
-k
--one-line-on-crash
When in batch mode, tells the debugger to execute this one-line
lldb command if the target crashes.
Using these facilities, I pieced together the following command:
ulimit -c unlimited && (<binary> || (lldb -c `ls -t /cores/* | head -n1` \
--batch -o 'thread backtrace all' -o 'quit' && exit 1))
This command:
Enables core dumps
Runs the executable <binary>
If it fails, runs lldb on the most recently created core dump in /cores (hopefully the right one)
Prints the backtraces of all threads
Exits with a non-zero status, so that this command can be embedded in other workflows (CI scripts, Makefiles, etc.)
I would have preferred to run lldb on <binary> directly, so that the command does not rely on guessing at the correct core file. But lldb still appears to lack a way to cause it to exit with a non-zero exit status -- an equivalent of GDB's -return-child-result option or quit 1 command -- so I would have no way of knowing if the debugged program was successful or not. I filed an issue requesting such a feature.
TOT lldb from lldb.llvm.org has a new "--batch" mode that works pretty much like the gdb batch mode, and fixes some bugs that made command-line source commands behave better. If you can build your own lldb, you can get these fixes now; otherwise you'll have to wait till the next Xcode update.
The exit_description is None because your process didn't exit, it crashed. Note that, at least on OS X, several threads can have had simultaneous exceptions by the time the process gets around to crashing, so it isn't really useful to say that "the process" crashed; you have to ask the threads. The stop status that lldb prints out when a thread stops is available with the GetStatus method.
stream = lldb.SBStream()
process.threads[0].GetStatus(stream)
print stream.GetData()
That doesn't seem to have a very useful help string.
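Putting that together with the script from the question, a minimal sketch that prints each thread's stop status followed by its frames might look like this (the binary and core file paths are placeholders, as before):
import lldb

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget('<binary>')
if target:
    process = target.LoadCore('<core file>')
    if process:
        for thread in process:
            # GetStatus writes the same stop summary that lldb prints interactively
            stream = lldb.SBStream()
            thread.GetStatus(stream)
            print stream.GetData()
            # then the frames, as in the original script
            print '\n'.join(str(frame) for frame in thread)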