How to reduce the size of dll/wasm compiled by aspnet/blazor? - mono

I notice that the file size of a *.wasm compiled by Rust is acceptable. However, a minimal HelloWorld compiled by AspNet/Blazor takes up almost 2.8 MB:
mono.wasm 1.75 MB
mscorlib.dll 1.64 MB
*.dll ....
If I understand correctly, mono.wasm is the VM that runs in the browser and executes the DLLs we write. Does that mean that, no matter what we do, we cannot make the download smaller than 1.75 MB? If not, is there a way to reduce the file size?

Yes, 2.8 MB is quite a large payload for a 'Hello World' application. However, Blazor is still very much an experimental technology that is not ready for production use yet. There are several reasons why the generated output is so large at the moment:
Your current application runs in interpreted mode, where the mono.wasm file ships the CLR to your browser, allowing it to execute your DLL. A faster and more size-efficient approach would be ahead-of-time (AOT) compilation, as described in this article. That would allow the compiler to strip out any library functions that are not used, giving a highly optimised output.
The features of the WebAssembly runtime itself are quite limited; future versions will add garbage collection and various other capabilities that Blazor will be able to use directly. At the moment mono.wasm includes its own garbage collector.
The Blazor project itself has a number of open issues describing various optimisations which are being actively worked on. It already performs tree-shaking and various other optimisations, but this type of work takes time.

Currently (2021), a hello world Blazor WASM application (Visual Studio project template) downloads over 17 MB of data. With gzip, this is reduced to 7 MB, which is still huge considering that no application code/logic is included yet!
But it turns out the linker is not active during debugging. If we publish the application in release mode (-c Release switch), only the necessary files are loaded. This reduces the transfer size to 5.6 MB, or even 2.4 MB with gzip enabled. You can also see this in the size of the published folder:
$ dotnet publish --output publish_debug -c Debug
$ dotnet publish --output publish_release -c Release
$ du -hs publish_debug/
30M publish_debug/
$ du -hs publish_release/
11M publish_release/
It's still a noticeable amount of data. However, this information may help others who find this question after seeing the much larger 17/7 MB in debug mode.
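If you want to see where the remaining bytes go, you can list the largest files in the framework payload of the published output. A minimal sketch, assuming the wwwroot/_framework layout of the 3.x project template:
$ # Show the ten largest framework files, biggest first
$ du -ah publish_release/wwwroot/_framework/ | sort -hr | head -n 10
The runtime (mono.wasm/dotnet.wasm, depending on the version) and mscorlib typically dominate; the linker can only shrink the managed assemblies, not the runtime itself.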
Since the question is from 2018, it may also be worth mentioning that framework caching was improved in 3.2.0-preview2. This means the runtime and framework are stored in the browser cache after being fetched from the server the first time. Since this is handled by JavaScript, no further requests are made for these files once they are cached! A server would otherwise respond with 304 Not Modified, but even that is overhead we no longer have.
This also means the files only appear in the network tab on the first page load! If you want to measure the loading time without the cache, delete the cache for that domain. This has to be done manually! Ticking the no-cache checkbox in the browser's dev tools is not enough, since it seems that Blazor manages this cache itself from JavaScript using local storage.

Related

testcontainers-python hanging while showing "waiting to be ready...", then fails

I'm running my unit-test code for neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, and that the container now consumes too much memory.
I've checked the official website's documentation, but it doesn't mention memory problems. Has anyone encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine, while it fails on an extremely limited machine. 1 GB of RAM is not much; I would expect it is generally not enough to start a Neo4j Docker container without memory swapping. Swapping would make the startup and interactions very slow, hence the startup timeout triggers.
For further debugging, you can run the Neo4j container directly using the Docker CLI in your environment and see how it behaves.
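For example, a rough probe along these lines (the image tag, ports, and 1 GB cap mirror your setup; NEO4J_AUTH=none is only there to skip authentication for the experiment):
$ # Start Neo4j with roughly the same memory budget as the test box
$ docker run -d --rm --name neo4j-probe -m 1g -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 neo4j:latest
$ docker logs -f neo4j-probe    # Neo4j prints "Started." once it is ready
$ docker stats neo4j-probe      # in a second shell: watch memory pressure
If the container takes minutes to print 'Started.' or gets OOM-killed, the startup timeout in your test is just a symptom of the constrained machine.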

Is there an updated disk image binary for the x86 architecture for running gem5 in full system mode?

I am currently using the linux-x86.img that I downloaded from the gem5 documentation page (http://www.m5sim.org/Download), but since I was not able to compile code using the fscanf and fopen functions on this image, I was wondering if there is a more recent image I could download and use instead.
The error messages thrown when trying to compile the lines with fopen and fscanf are:
./obj/edgelist.o: In function `loadEdgeArray': edgelist.c:(.text+0x148): undefined reference to `__isoc99_fscanf'
./obj/edgelist.o: In function `loadEdgeArrayInfo': edgelist.c:(.text+0x20c): undefined reference to `__isoc99_fscanf'
collect2: ld returned 1 exit status
make: *** [test] Error 1
This error is thrown when trying to compile from both qemu and gem5.
Here's one setup that generates such an image with Buildroot. I'm a fan of Buildroot because it builds everything from source. I don't understand how fscanf and fopen could fail in that image, but I have tested them in the above setup and they work fine.
Boot used to work in the past, but gem5 X86 full-system boot has been broken for a few months now (as of March 2020), likely for easy-to-fix reasons on the gem5 side. There are efforts in place to fix it, so it will likely work again soon: https://www.gem5.org/project/2020/03/09/boot-tests.html
Other alternatives include:
https://gem5art.readthedocs.io/en/latest/, which Jason has been pushing, and which uses Packer to generate disk images
You can also extract working disk images from Docker: https://hub.docker.com/_/ubuntu This requires exporting them to a file to give to gem5, as sketched below.
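A sketch of that extraction, assuming a raw ext2 image is acceptable to your gem5 config (the image size and Ubuntu tag are arbitrary):
$ cid=$(docker create ubuntu:18.04)       # create a container without running it
$ docker export "$cid" -o ubuntu-rootfs.tar
$ docker rm "$cid"
$ dd if=/dev/zero of=ubuntu.img bs=1M count=2048
$ mkfs.ext2 -F ubuntu.img
$ mkdir -p mnt && sudo mount -o loop ubuntu.img mnt
$ sudo tar -xf ubuntu-rootfs.tar -C mnt   # unpack the rootfs into the image
$ sudo umount mnt
Note that the Docker rootfs only gives you the userland; you still need a kernel and an init that gem5 can boot.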
It is also worth noting that when the gem5.org website migrated from the old wiki to the new static-website setup in Q1 2020, we lost the ability to do directory listings under http://dist.gem5.org/dist/current/arm/ for some reason, so devs were forced to list the images one by one on the static website: https://www.gem5.org/documentation/general_docs/fullsystem/guest_binaries
I am not sure why the error no longer occurs for me, but I am documenting the steps I went through in case one of them fixed it. I reinstalled Ubuntu 18.04 and therefore had to rebuild gem5, and I used the parsec image (http://www.cs.utexas.edu/~parsec_m5/x86root-parsec.img.bz2) referenced in this answer: Booting gem5 X86 Ubuntu Full System Simulation

Strange apache behaviour when launching an external binary called by a perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl scripts (cgi-bin) that launch, among other things, homemade binaries compiled from Fortran.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a kernel segfault) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is completely fixed.
I examined the Apache and system logs, and nothing suspicious was found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried to change the launch order of httpd (S85httpd by default) to any other position, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the apache/perl/fortran-binary chain was not aware of it, causing my binary to crash each time it was called.
On the contrary, when I manually restarted Apache at the shell prompt, the stack size limit was passed correctly (my .bashrc contains 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, the new stack size limit is now passed directly from my Perl script to my binary.
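A related alternative, sketched under the assumption of a stock CentOS 6 init layout (where /etc/init.d/httpd sources /etc/sysconfig/httpd): raise the limit in the service's environment file so Apache, and every CGI child it forks, inherits it.
$ # Applied once; takes effect on the next service start
$ echo 'ulimit -S -s unlimited' >> /etc/sysconfig/httpd
$ service httpd restart
The BSD::Resource approach has the advantage of living in the script itself, so it keeps working regardless of how the service is started.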
I've run into similar problems before. One way to solve this may be to put the binary on a 'delayed start', so that it starts after everything else on your system is running. For example, you can put an at job in your /etc/rc.local script to start the binary in X minutes, as sketched below.
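A one-line sketch of that idea (the two-minute delay is arbitrary; tune it to your boot sequence):
$ echo 'service httpd restart' | at now + 2 minutes   # goes in /etc/rc.local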

PHP error_log file

Can anyone help me with an error_log file? If you already guessed that I am not an experienced user, that is true :-)
I have a VPS on CentOS 5 with 4 CPUs and 768 MB memory, with 5 sites on it.
The problem I have is that, no matter which site is accessed, the system generates an error_log file in the site's root and in every other folder that contains a PHP script, so after running some PHP script there is an error_log in that folder.
On every access the system writes new lines, and it is the same error message in every error file, just with a different timestamp.
This is part of an error_log file:
[13-Mar-2012 06:52:18] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so: cannot open shared object file: No such file or directory in Unknown on line 0
[13-Mar-2012 06:52:20] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so: cannot open shared object file: No such file or directory in Unknown on line 0
If I am right, it is something about eAccelerator. I tried to find out what that is, and it appears to be some caching mechanism. As far as I know, I never set it up myself.
My sites use a widely used static HTML cache: once a page is generated by PHP, it is stored in a text file on disk and afterwards served from disk. Something like this: http://www.theukwebdesigncompany.com/articles/php-caching.php
Any help in finding and fixing the problem would be nice. If you have any questions, do not hesitate to ask; I will try to help as much as I can. Again, I am not experienced with WHM, so if you ask me for something, please tell me exactly where to look for it :-).
Best regards.
You must install the eAccelerator extension for PHP 5.3.
You may have updated to PHP 5.3 and forgotten to install it (or cPanel auto-updated!).
eAccelerator is a free open-source PHP accelerator, optimizer, encoder and dynamic content cache for PHP. It increases the performance of PHP scripts by caching them in compiled state, so that the overhead of compiling is almost completely eliminated. It also uses some optimizations to speed up the execution of PHP scripts. eAccelerator typically reduces server load and increases the speed of your PHP code by 1-10 times.
Google for installation instructions or use this link: http://www.dedicated-resources.com/guide/128/eAccelerator-for-PHP.html
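For reference, a typical source build of a PHP extension like eAccelerator looks like this (the version number and php-config path are assumptions; check your release):
$ tar xjf eaccelerator-0.9.6.1.tar.bz2
$ cd eaccelerator-0.9.6.1
$ phpize
$ ./configure --enable-eaccelerator=shared --with-php-config=/usr/local/bin/php-config
$ make && make install
After installing, make sure the extension path in php.ini matches where make install placed eaccelerator.so.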
If you cannot do that or don't want to install it, you can edit /usr/local/lib/php.ini and remove eaccelerator.so, pdo.so, pdo_sqlite.so and sqlite.so from the extension section (sketched below),
or go back to PHP 5.2.
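If you go the disable route, a minimal sketch (the php.ini path comes from your error message; back it up first, and repeat the sed for pdo.so, pdo_sqlite.so and sqlite.so as needed):
$ cp /usr/local/lib/php.ini /usr/local/lib/php.ini.bak
$ sed -i 's/^extension=eaccelerator.so/;&/' /usr/local/lib/php.ini   # ';' comments the line out
$ service httpd restart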
I recommend that you install eAccelerator and boost your PHP performance ;)

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with Weblogic's EJBC, compliant with Weblogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered this when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this behaviour.
You need to do two things.
Step 1. If you open the EJB jar files you can see entries like:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hash codes for each of your EJB names. Make these hash codes zero, then
repack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger recompilation during server boot-up. I automated the process of zeroing these hash codes (see the sketch after Note 2 below).
These hash codes remain the same for each EJB, even if you execute appc more than once. If you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1:
How do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX,
you will see this file for each EJB. WebLogic sets the hash codes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb
instead of C:\myejb.jar. This way you can play around with the marker file.
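Here is the automation mentioned in Step 1, as a rough sketch (the marker file name is WL_GENERATED, per the note further down; only pure-digit values are hash codes, so the version entry is left untouched):
$ for jar in *.jar; do
>   unzip -o "$jar" WL_GENERATED                # pull the marker file out of the jar
>   sed -i 's/=[0-9][0-9]*$/=0/' WL_GENERATED   # zero every hash code
>   zip "$jar" WL_GENERATED                     # write it back into the jar
> done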
Step 2.
You also need a patch from WebLogic. When you install the patch you see a message like this:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc."
I don't remember the patch number, but you can request it from WebLogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!!
cheers
raj
The marker file in the EJB jars is WL_GENERATED.
Just to update on the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir, and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
This solution reduced compilation time from 70 min to 15 min. After this, we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time (55 min × number of envs per drop × number of drops).
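A rough sketch of the pair of scripts (paths, the copy step and error handling elided):
$ cat recompile_all.sh
#!/bin/ksh
# Fan weblogic.appc out over every EJB jar in the temp dir
for jar in "$TEMP_DIR"/*.jar; do
    ./recompile_one.sh "$jar" &
done
wait
$ cat recompile_one.sh
#!/bin/ksh
# One job: recompile a single jar in place
java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$1"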