How to trace the execution of Caliper benchmarks? - caliper

How can I see console output in "running" mode? (With --debug I can see it.) Does Caliper create an execution log? How do I access it?

If I remember correctly, Caliper 0.5 doesn't have very good support for this. The reason is that if you're really performing a benchmark, extra I/O may degrade performance.
Caliper 1.0 gives you a few more options. By default, console output is still hidden, but adding --verbose will display any console output from the worker. It will also send logging to ~/.caliper/log/. Logging configuration can be overridden in ~/.caliper/logging.properties. If you need more control over output and logging, I recommend checking out Caliper 1.0 from HEAD and giving it a try. We hope to have a pre-built beta release soon.
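As a rough sketch of what that looks like in practice (the CaliperMain entry point and the benchmark class name here are assumptions about a typical Caliper 1.0 setup, not values taken from the question):
$ java -cp caliper.jar:my-benchmarks.jar \
    com.google.caliper.runner.CaliperMain com.example.MyBenchmark --verbose
# worker console output is shown, and logs are written under ~/.caliper/log/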

Related

How to reduce the size of dll/wasm compiled by aspnet/blazor?

I notice that the file size of *.wasm compiled by Rust is acceptable. However, a minimal HelloWorld compiled by AspNet/Blazor takes up almost 2.8 MB:
mono.wasm 1.75MB
mscorlib.dll 1.64MB
*.dll ....
If I understand correctly, mono.wasm is the VM that runs in the browser and executes the DLLs we write. Does that mean that, no matter what we do, we cannot make the total size of the files less than 1.75 MB? If not, is there a way to reduce the file size?
Yes, 2.8 MB is quite a large payload for a 'Hello World' application. However, Blazor is still very much an experimental technology which is not ready for production use yet. There are numerous reasons why the generated output is so large at the moment:
Your current application runs in an interpreted mode, where the mono.wasm file ships the CLR to your browser, allowing it to execute your DLL. A faster and more size-efficient approach would be to use ahead-of-time (AOT) compilation, as described in this article. This would allow the compiler to strip out any library functions that are not used, giving a highly optimised output.
The features of the WebAssembly runtime itself are quite limited; future versions will add garbage collection and various other capabilities that Blazor will be able to use directly. At the moment mono.wasm includes its own garbage collector.
The Blazor project itself has a number of open issues describing various optimisations which are being actively worked on. It already performs tree-shaking and various other optimisations, but this type of work takes time.
Currently (2021), a hello world Blazor WASM application (Visual Studio project template) downloads over 17 MB of data. When gzip is used, this is reduced to 7 MB, which is still huge considering that no application code/logic is included yet!
But I found out that the linker does not seem to be active during debugging. If we publish the application in release mode (-c Release switch), only the necessary files are loaded. This reduces the transfer size to 5.6 MB, or even 2.4 MB with gzip activated. You can also see this in the size of the published folder:
$ dotnet publish --output publish_debug -c Debug
$ dotnet publish --output publish_release -c Release
$ du -hs publish_debug/
30M publish_debug/
$ du -hs publish_release/
11M publish_release/
It's still a noticeable amount of data. However, this information may help others who find this question because of the much larger 17/7 MB shown in debug mode.
Since the question is from 2018, it may also be worth mentioning that framework caching was improved in 3.2.0-preview2. This means the runtime and framework are stored in the browser cache after they are fetched from the server for the first time. Since this is handled by JavaScript, no further requests are made for these files once they are cached. Previously the server would respond with 304 Not Modified, but even that is some overhead, which is now avoided entirely.
This also means that those files only appear in the network tab on the first page load! If you want to measure the loading time without a cache, delete the cache for that domain. This has to be done manually! Checking the 'Disable cache' checkbox in the browser console is not enough, since Blazor appears to use local storage via JavaScript.

Issue with executing spark sql job using oozie action

I'm facing a weird issue when trying to execute a Spark SQL (Spark 2) job using an Oozie action. The execution behaviour is inconsistent: at times it executes fine, but sometimes it stays in the "Running" state forever. On checking the logs, I found the issue below.
WARN org.apache.spark.scheduler.cluster.YarnClusterScheduler - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
The strange thing is that we have already provided sufficient resources; this can be seen in the Spark environment variables as well as in the cluster resources (the cluster has sufficient cores and RAM).
<spark-opts>--executor-memory 10G --num-executors 7 --executor-cores 3 --driver-memory 8G --driver-cores 2</spark-opts>
With the same configuration sometimes it is executing fine as well. Are we missing something?
The issue was related to a JAR conflict. The following suggestions help identify such a conflict:
a) Check the Maven dependency tree to make sure there are no transitive dependency conflicts (see the sketch after this list).
b) While the Spark job is running, check the environment variables being used via the Spark UI.
c) Resolve the conflict and run a Maven clean package.
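For step a), a minimal sketch of inspecting the dependency tree from the shell (the grep pattern is just an example; adjust it to the artifact you suspect):
$ mvn dependency:tree -Dverbose > deptree.txt
$ grep -i "conflict" deptree.txt     # -Dverbose marks omitted duplicates and version conflicts
$ mvn clean package                  # step c), after excluding or aligning the conflicting jar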

In Go, how do you profile transitive init() cpu usage?

I'm working on a project with a very large monorepo. When I run tests, the tests do not take very long to run the actual test case, but there is a lot of setup time before running the test.
I've tried go test -i but haven't seen much of a difference. I think that suggests the time is not spent on compilation, so my next step is to profile everything that happens before the test case runs.
There are many transitive dependencies, and I would prefer not to look through the graph manually, adding printlns to get timings. Are there any tools to profile all the transitive initialization that happens in Go before a test runs?
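Not from the original question, but one hedged starting point: Go 1.16+ can print per-package init timings without any code changes, which covers exactly the phase before the tests run. A minimal sketch, with the package path and test name as placeholders:
$ go test -c -o pkg.test ./some/pkg
$ GODEBUG=inittrace=1 ./pkg.test -test.run TestSomething 2>&1 | grep '^init '
# each line reports roughly: init <package> @<start> ms, <clock> ms clock, <bytes> bytes, <allocs> allocs
# sorting by the clock column shows which packages' init functions dominate the setup time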

Remote Debugging with Squish IDE

I want to do remote debugging of a Squish application.
I am following document at:
http://kb.froglogic.com/display/KB/Configuring+a+remote+squishserver
for the same.
Steps 1 and 2 went well; I was even able to see logs from the remote application in the terminal.
But the breakpoint is not being hit in the Squish IDE. The debug view shows nothing, even though I followed the exact steps given in Step 3; restarting the Squish IDE didn't help either.
Whether or not Squish stops at a breakpoint (I suppose that's what you meant when you wrote "debug point") is independent of where the squishserver process is running. You can verify that the remote server is used simply by launching a test: you should see some output about incoming network connections in the console where squishserver was started.
When running Squish remotely you could add the flag --reportgen xml3.3,xml3report. This will create a report with all the information about your run. You can then import this report into the Squish IDE and use the "vpdiff" tool to analyse your failure.
There is a command line tool called "vpdiff" which comes in SQUISHDIR/bin.
There is an article about it here:
https://www.froglogic.com/blog/analysing-test-reports-from-automated-executions-using-the-squish-ide/
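As a rough sketch of the kind of invocation that produces such a report against a remote squishserver (the host, port and suite path here are placeholders, assuming the standard squishrunner options for pointing at a remote server):
$ squishrunner --host remotehost --port 4322 \
    --testsuite /path/to/suite_myapp --reportgen xml3.3,xml3report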

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with error -1073741819.
Based on this thread for Chutzpah, it appears that error code indicates a native memory failure in PhantomJS.
Upon further investigation, we are consistently seeing PhantomJS crash at around 750 MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the Stack Overflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed once 750 MB was reached.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests and gulp customer-tests,
plus a single gulp karma task that runs each of those groupings. This gives each gulp command its own instance of PhantomJS, and therefore keeps each run underneath that threshold.
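If you are not using gulp, the same idea can be sketched directly from the shell by giving each section its own Karma config file and running them one after another (the config file names are placeholders):
$ karma start karma.common.conf.js --single-run && \
  karma start karma.admin.conf.js --single-run && \
  karma start karma.customer.conf.js --single-run
# each karma start gets a fresh PhantomJS process, so memory is released between groups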
We ran into a similar issue. Your approach is interesting and certainly sidesteps the issue. However, be prepared to face it again later.
I've done some investigation and found the cause of memory growth (at least in our case). Turns out when you use:
beforeEach(inject(function(SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files where you inject the same service (or other injectable objects), more memory will be allocated for it again.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code.
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
Forgot to mention: We are using angular 1.3.2, Jasmine 2.0 and hit this problem around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. once the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
But you then need to use it in karma.conf.js via:
config.set({
browsers: ['PhantomJS2'],
...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when the page is closed, and this only happens after your tests run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the limit on the number of memory map areas a process may have:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This only changes the setting for the current session. If it solves the problem, you can make it permanent for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process which a CI job runs on a command-line-only Linux box, so PhantomJS seems to be my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally. When code is pushed, Jenkins kicks off the regular test command, which runs PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json
"test": "karma start",
"test:chrome": "karma start --browsers Chrome"