ilink64 Error: Fatal: Malloc of 65536 bytes failed - dll

I'm migrating a project from C++ Builder 6 to Embarcadero RAD Studio 10.4 and changing the platform from x86 to x64. My project includes five *.dll and two *.exe files.
I successfully moved four of the *.dll files to the x64 platform, but the fifth *.dll reports an error. When that *.dll is built, I get an "Out of memory" error message. I visited this page:
http://docwiki.embarcadero.com/RADStudio/Sydney/en/Handling_Out_of_Memory_Errors
and found out that in my case three heaps overflow:
Code Heap Size
Dwarf str Heap Size
Info Heap Size
Little by little I increased the size of all the heaps. But soon I hit a limit: "[ilink64 Error] Fatal: Malloc of 65536 bytes failed in ........\bins\Win64\Debug\my_dll.ildw_str, line 6"
This one is reported for the Dwarf str Heap Size.
I visited url:
https://stackoverflow.com/a/37537734/9494441
and followed all the tips there.
I tried:
Setting the Large Address Aware flag with the lamarker tool
Replacing the RAD Studio 10.4 ilink32.exe and ilink64.exe with the 10.2.3 versions
Disabling/enabling the incremental linker
Manually removing all files in /debug
Rebuilding everything
Adding ilink32.exe and ilink64.exe to the antivirus exclusions
None of it helped. How can I fix this problem? Thanks!

This is happening only in debug, right?
You have 4 options:
Reduce the heap sizes for the sections where you allocated more than your project needs, and don't allocate more memory than needed in the other sections.
Reduce the set of symbols your modules are exposed to. I've been told that the linker does not detect duplicates, so the same symbols get included again and again, making things worse. If the project is old and has a poor include policy, you might have something to work on there (see the sketch after this list).
Compile everything in release mode and enable debug info only on the modules you are going to debug.
Try 10.3.1, where C++ Win64 was not yet C++17. In 10.3.2 Win64 was upgraded to C++17, making the linker problem more likely to happen.
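To illustrate the include-policy point, here is a minimal sketch (the Engine/Widget names are made up, not from the poster's project): replacing a header include with a forward declaration keeps the full class definition, and the debug records generated for it, out of every translation unit that only needs a pointer or reference.

// widget.h - before: pulls the whole definition of Engine into every includer
#include "engine.h"   // every .cpp that includes widget.h re-emits Engine's debug info
class Widget {
public:
    void run();
private:
    Engine* engine;
};

// widget.h - after: a forward declaration is enough for a pointer member
class Engine;         // the full definition is only needed inside widget.cpp
class Widget {
public:
    void run();
private:
    Engine* engine;
};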

Related

Is there an updated disk image binary for the x86 architecture for running gem5 in full system mode?

I am currently using the linux-x86.img which I downloaded from the documentation page for gem5 (http://www.m5sim.org/Download), but since I was not able to compile code that uses fscanf and fopen on this image, I was wondering if there is a more recent image which I could download and use instead.
The error messages thrown when trying to compile the lines with fopen and fscanf are:
./obj/edgelist.o: In function `loadEdgeArray': edgelist.c:(.text+0x148): undefined reference to `__isoc99_fscanf'
./obj/edgelist.o: In function `loadEdgeArrayInfo': edgelist.c:(.text+0x20c): undefined reference to `__isoc99_fscanf'
collect2: ld returned 1 exit status
make: *** [test] Error 1
This error is thrown when trying to compile both from qemu and from gem5.
Here's one setup that generates such an image with Buildroot. I'm a fan of Buildroot because it builds everything from source. I don't understand how fscanf and fopen could fail in that image, but I have tested them in the above setup and they work fine.
Boot used to work in the past, but as of March 2020 gem5 X86 full system boot has been broken on the gem5 side for a few months, likely for easy-to-fix reasons; there are efforts in place to fix it, so it will likely work again soon: https://www.gem5.org/project/2020/03/09/boot-tests.html
Other alternatives include:
https://gem5art.readthedocs.io/en/latest/ which Jason has been pushing, and which uses Packer to generate disk images
You can also extract working disk images from Docker: https://hub.docker.com/_/ubuntu. This requires exporting them to a file to give to gem5; a sketch of that follows below.
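A minimal sketch of such an export, assuming Docker is installed and that you want a raw ext4 image to hand to gem5 (file and image names here are made up; you still need a kernel separately):
docker create --name rootfs ubuntu:18.04        # create a container without running it
docker export rootfs -o rootfs.tar              # dump its filesystem to a tar archive
dd if=/dev/zero of=ubuntu.img bs=1M count=2048  # allocate a 2 GiB raw image file
mkfs.ext4 -F ubuntu.img                         # format it as ext4
sudo mount -o loop ubuntu.img /mnt              # mount it and unpack the Docker filesystem into it
sudo tar -xf rootfs.tar -C /mnt
sudo umount /mnt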
It is also worth noting that when the gem5.org website migrated from the old Wiki to the new static website setup in Q1 2020, we lost the ability to do directory listings under http://dist.gem5.org/dist/current/arm/ for some reason, and so devs were forced to list the files one by one on the static website... https://www.gem5.org/documentation/general_docs/fullsystem/guest_binaries
I am not sure why the error is no longer occurring for me, but I am documenting the steps I went through that might have fixed something. I reinstalled Ubuntu 18.04, therefore had to rebuild gem5, and I used the parsec image (http://www.cs.utexas.edu/~parsec_m5/x86root-parsec.img.bz2) referenced in this answer: Booting gem5 X86 Ubuntu Full System Simulation

I am getting error messages while trying to use TestNG in Eclipse

I am getting error messages such as stack overflow, heap memory errors and similar after trying to use TestNG. After installing TestNG, Eclipse feels heavy, has become very slow to respond, and keeps throwing these error messages.
Stack overflow and heap memory errors of this kind occur because of a lack of physical resources, such as too little RAM or a slow processor. So the only solution is to allocate more memory to the Eclipse IDE. You can do this by finding the eclipse.ini file in the directory where Eclipse is installed. Open the file in Notepad and edit the memory allocation. There are two values you need to change: Xms and Xmx, the minimum and maximum memory. I changed mine from 256m to 512m for Xms and from 1024m to 2048m for Xmx. But make sure you allocate only memory that is actually spare, otherwise your PC might crash. Hope this helps.
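For reference, these settings live under the -vmargs section of eclipse.ini; a minimal sketch with the values mentioned above (adjust them to what your machine can actually spare):
-vmargs
-Xms512m
-Xmx2048m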

Error initializing JVM

I am getting "Could not reserve enough space for object heap" error when I am trying to start hybris server.
I have set
wrapper.java.additional.1=-Xmx1G
wrapper.java.additional.2=-XX:MaxPermSize=1024M
My machine is 64-bit Windows with 8 GB RAM.
I faced the same problem once. The problem in my case was that too many other applications were running on my system.
So go to the Task Manager and check the available memory.
Close some applications and try running again.
Also, if you are using Eclipse, then in your eclipse.ini file (this is next to the Eclipse executable), replace -Xmx256m with -Xmx1024m (or -Xmx512m).
This is not compulsory but in certain cases it works.
If you are using some extension, then:
Open the YOURPATH/config/local.properties file.
Add the following entry:
config/local.properties
build.parallel=true
Save the file.
(When the machine has multiple cores, we can tell hybris to utilize them by building in parallel; in certain cases this too works.)
I too faced the same problem. I followed the steps below and set the max heap size to 1 GB.
Add the following content to local.properties
tomcat.generaloptions=-Xmx4G -ea -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dorg.tanukisoftware.wrapper.WrapperManager.mbean=true -Djava.endorsed.dirs="%CATALINA_HOME%/lib/endorsed" -Dcatalina.base=%CATALINA_BASE% -Dcatalina.home=%CATALINA_HOME% -Dfile.encoding=UTF-8 -Dlog4j.configuration=log4j_init_tomcat.properties -Djava.util.logging.config.file=jdk_logging.properties -Djava.io.tmpdir="${HYBRIS_TEMP_DIR}"
ant clean all
start hybrisserver
Reference
https://launchpad.support.sap.com/#/notes/0002437669

CPU killed by SIGXCPU using OpenCL and mono

I have got a very similar problem to the one stated here: Intel CPU OpenCL in Mono killed by SIGXCPU (Ubuntu)
Essentially, I have a very simple C# application using OpenCL (through the OpenCL.Net wrapper, but that shouldn't make a difference, as it merely wraps the native functions and nothing more). In the code I just build a kernel and then allocate a big array of floats.
To be more specific, my platform is Ubuntu 12.04, OpenCL 1.1 (with CUDA) and mono 3.0.3.
Problem: when running my code through mono I get a CPU LIMIT EXCEEDED error.
A few things:
If I set a breakpoint (in MonoDevelop) somewhere between building the kernel and the allocation, it works.
Changing the array size to a small one also makes it work.
strace doesn't show anything useful. I also tried passing a callback to ClBuildProgram (note: if I comment out the line with ClBuildProgram, it works).
Any ideas?
That's what worked for me in the end.
There is a major problem with mono: it uses SIGXCPU for GC handling (which is strange, by the way). Unfortunately OpenCL uses it as well, so they conflict.
The workaround is to modify the mono source code.
Go to the source directory and grep -r SIGXCPU . In my mono (3.0.3) there were 2 important files:
./libgc/pthread_stop_world.c:# define SIG_THR_RESTART SIGXCPU
./mono/metadata/sgen-os-posix.c:const static int restart_signal_num = SIGXCPU;
Replace SIGXCPU with SIGWINCH and recompile. One note: I am not sure whether this breaks something else, but for now it looks OK and the OpenCL problem is gone. If it does break something (like the GUI), replace SIGWINCH with a different signal that is free on your system (see signal.h for the signal definitions).
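A quick way to make that substitution from the mono source directory (a sketch; the paths are the two files found by the grep above and may differ in other mono versions):
sed -i 's/SIGXCPU/SIGWINCH/g' ./libgc/pthread_stop_world.c ./mono/metadata/sgen-os-posix.c
Then rebuild and reinstall mono as you normally would.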

JNI - compile dll as 64 bit

I compile my .dll with the following command: gcc -mno-cygwin -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include" -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include/win32" -Wl,--add-stdcall-alias -shared -o CalculatorFunctions.dll CalcFunc.c
I use GlassFish for Eclipse. The whole system is a CORBA client-server. When I start the server from Eclipse it's fine. But when I try to run the server from the CMD (because I want to set a port and host address for the server), it gives me: Exception: ... .dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
I searched through other topics and saw that I should try changing my JDK to 32-bit - that didn't work either.
So the other solution I read about is to compile the .DLL as 64-bit. What command do I need to use, or how do I do that at all?
Thanks in advance! :)
You need not only a command but a whole 64-bit MinGW toolchain - a 64-bit compiler in the first place. Then the parameters of your gcc invocation should work the same.
Beware that 64-bit is not just a matter of getting the code to compile. Primitive data types have different sizes, so any code that makes size assumptions without checking sizeof is a potential issue - most prominently, pointer arithmetic.
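As a sketch, with the mingw-w64 cross toolchain installed the build could look like the line below. The x86_64-w64-mingw32-gcc driver name is what mingw-w64 typically installs; the include paths are taken from the question, the obsolete -mno-cygwin switch is dropped (newer gcc versions no longer accept it), and --add-stdcall-alias is left out because 64-bit Windows has no stdcall name decoration:
x86_64-w64-mingw32-gcc -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include" -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include/win32" -shared -o CalculatorFunctions.dll CalcFunc.c
You can then check the result with file CalculatorFunctions.dll (it should report PE32+), and make sure the server runs on a 64-bit JVM.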