valgrind on MIPS

I have been trying to run Valgrind on a MIPS machine.
I successfully cross-compiled Valgrind and ran a few tests from the test suite.
But whenever Valgrind tries to create a coredump, an assertion fails.
It comes from the file coredump-elf.c:
vg_assert(sizeof(*regs) == sizeof(prs->pr_reg));
Apparently this assertion checks whether the size of the pr_reg byte array is the same as the size of the register struct that Valgrind builds.
But I am not able to get past this error.
I am using Valgrind on a MIPS32 machine.
Thanks.

Trunk Valgrind is well supported for MIPS32 LE/BE and MIPS64 LE/BE.
Download the code from the trunk:
svn co svn://svn.valgrind.org/valgrind/trunk valgrind
Configure it, make it, and use it; you should not see any MIPS32 issues.
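For reference, the usual build sequence from a trunk checkout looks roughly like this (a sketch; the cross-toolchain triplet and install prefix are assumptions, and --host can be dropped when building natively on the MIPS box):
cd valgrind
./autogen.sh                                    # generates the configure script in a fresh checkout
./configure --host=mips-linux-gnu --prefix=/opt/valgrind
make
make install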

Related

Is there an updated disk image binary for the x86 architecture for running gem5 in full system mode?

I am currently using the linux-x86.img that I downloaded from the gem5 documentation page (http://www.m5sim.org/Download), but since I was not able to compile code using fscanf and fopen on this image, I was wondering whether there is a more recent image I could download and use instead.
The error messages thrown when trying to compile the lines with fopen and fscanf are:
./obj/edgelist.o: In function `loadEdgeArray': edgelist.c:(.text+0x148): undefined reference to `__isoc99_fscanf'
./obj/edgelist.o: In function `loadEdgeArrayInfo': edgelist.c:(.text+0x20c): undefined reference to `__isoc99_fscanf'
collect2: ld returned 1 exit status
make: *** [test] Error 1
This error is thrown when trying to compile both from QEMU and from gem5.
Here's one setup that generates such an image with Buildroot. I'm a fan of Buildroot because it builds everything from source. I don't understand how fscanf and fopen could fail in that image, but I have tested them in the above setup and they work fine.
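For context, the generic Buildroot flow for an x86_64 root filesystem looks roughly like this (a sketch, not the exact setup referenced above; defconfig names and output paths can differ between Buildroot versions):
git clone https://github.com/buildroot/buildroot
cd buildroot
make qemu_x86_64_defconfig      # baseline x86_64 configuration aimed at QEMU
make                            # builds the toolchain, kernel and root filesystem from source
ls output/images/               # the rootfs image (e.g. rootfs.ext2) ends up here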
Boot used to work in the past, but as of March 2020 gem5 X86 full system boot has been broken on the gem5 side for a few months, likely for easy-to-fix reasons. There are efforts in place to fix it, so it will likely work again soon: https://www.gem5.org/project/2020/03/09/boot-tests.html
Other alternatives include:
https://gem5art.readthedocs.io/en/latest/, which Jason has been pushing and which uses Packer to generate disk images
You can also extract working disk images from Docker: https://hub.docker.com/_/ubuntu This requires exporting them to a file to give to gem5; one way to do that is sketched below.
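A minimal sketch of that export, assuming an ext2 image is acceptable to your gem5 configuration (the image name, size and mount point are placeholders):
docker create --name ubuntu-rootfs ubuntu:18.04        # create a container without running it
docker export ubuntu-rootfs > ubuntu-rootfs.tar        # dump its root filesystem as a tar archive
dd if=/dev/zero of=ubuntu-rootfs.img bs=1M count=2048  # empty 2 GiB image file
mkfs.ext2 -F ubuntu-rootfs.img
sudo mount -o loop ubuntu-rootfs.img /mnt
sudo tar -xf ubuntu-rootfs.tar -C /mnt
sudo umount /mnt                                       # ubuntu-rootfs.img can now be handed to gem5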
It is also worth noting that when the gem5.org website migrated from the old wiki to the new static website setup in Q1 2020, we lost the ability to do directory listings under http://dist.gem5.org/dist/current/arm/ for some reason, so devs were forced to list the binaries one by one on the static website: https://www.gem5.org/documentation/general_docs/fullsystem/guest_binaries
I am not sure why the error is no longer occurring for me, but I am documenting the steps I went through, which might have fixed something. I reinstalled Ubuntu 18.04 and therefore had to rebuild gem5, and I used the PARSEC image (http://www.cs.utexas.edu/~parsec_m5/x86root-parsec.img.bz2) referenced in this answer: Booting gem5 X86 Ubuntu Full System Simulation.

RISCV tests seem to have no code in start_pc

I've cloned this repo and built the tests using the toolchain.
But I faced some issues:
riscv-qemu faults when running these tests (segfault or memory allocation fault). I followed the instructions on your site.
The starting PC in the test header (e.g., rv32(64)mi-p-csr) is set to 80002000, but there is no code at that location (it is in the .tohost section).
Can you explain how to fix this?
I am currently on RISC-V v1.9, as the latest available.
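One way to see where the entry point and the code actually are is to inspect the test ELF with binutils (a sketch; the toolchain prefix and test binary name below are assumptions based on the question):
riscv64-unknown-elf-readelf -h rv64mi-p-csr    # ELF header, including the entry point address
riscv64-unknown-elf-readelf -S rv64mi-p-csr    # section headers and their load addresses
riscv64-unknown-elf-objdump -d rv64mi-p-csr    # disassembly, to check what actually sits at the entry point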

CPU killed by SIGXCPU using OpenCL and mono

I have got a very similar problem to the one stated here: Intel CPU OpenCL in Mono killed by SIGXCPU (Ubuntu)
Essentially, I have a very simple C# application using OpenCL (through the OpenCL.Net wrapper, but it shouldn't make a difference, as it is merely wrapping native functions and nothing more). In the code I just build a kernel and then allocate a big array of floats.
To be more specific about my platform: Ubuntu 12.04, OpenCL 1.1 (with CUDA), and Mono 3.0.3.
Problem: when running my code through Mono I get a CPU LIMIT EXCEEDED error.
A few things:
If I set a breakpoint (in MonoDevelop) somewhere between building the kernel and the allocation, it works.
Changing the array size to a small one also makes it work.
strace doesn't show anything useful. I also tried passing a callback to ClBuildProgram (note: if I comment out the line with ClBuildProgram, it works).
Any ideas?
Here's what worked for me in the end.
There is a major problem with Mono: it uses SIGXCPU for GC handling (which is strange, by the way). Unfortunately, OpenCL uses it as well, so the two conflict.
The workaround is to modify the Mono source.
Go to the source directory and run grep -r SIGXCPU . In my Mono (3.0.3) there were two important files:
./libgc/pthread_stop_world.c:# define SIG_THR_RESTART SIGXCPU
./mono/metadata/sgen-os-posix.c:const static int restart_signal_num = SIGXCPU;
Replace SIGXCPU with SIGWINCH and recompile. One note: I am not sure whether this broke anything else, but for now it looks OK and the OpenCL problem is gone. If it does break something (like the GUI), replace SIGWINCH with a different free signal (see signal.h for the signal definitions).
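A minimal sketch of that edit, run from the Mono source root (the paths are from Mono 3.0.3 and may differ in other versions; the blanket substitution below can touch other occurrences in those files, so editing the two lines by hand is safer):
sed -i 's/SIGXCPU/SIGWINCH/g' libgc/pthread_stop_world.c
sed -i 's/SIGXCPU/SIGWINCH/g' mono/metadata/sgen-os-posix.c
./configure && make && sudo make install       # rebuild and reinstall Mono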

JNI - compile DLL as 64-bit

I compile my .dll with the following command: gcc -mno-cygwin -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include" -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include/win32" -Wl,--add-stdcall-alias -shared -o CalculatorFunctions.dll CalcFunc.c
I use GlassFish for Eclipse. The whole system is a CORBA client-server. When I start the server from Eclipse, it's fine. But when I try to run the server from the CMD (because I want to set a port and host address for the server), it gives me: Exception: ... .dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
I searched through other topics and saw that I should try changing my JDK to 32-bit; that didn't work either.
So the other solution I read about is to compile the .dll as 64-bit. What command do I need to use, and how do I do that at all?
Thanks in advance! :)
You need not only a command but a whole 64-bit MinGW toolchain, starting with a 64-bit compiler. Then the parameters of your gcc invocation should work much the same.
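For example, with the MinGW-w64 cross compiler installed, the 64-bit build looks roughly like this (a sketch; the compiler name and JDK paths are assumptions for a typical setup, -mno-cygwin is not accepted by MinGW-w64, and the stdcall alias option only matters for 32-bit builds):
x86_64-w64-mingw32-gcc -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include" -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include/win32" -shared -o CalculatorFunctions.dll CalcFunc.c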
Beware that 64-bit is not just a matter of compilability: primitive data types have different sizes, so any code making size assumptions without sizeof checks is a potential issue, most prominently pointer arithmetic.

Why would a native program run fine when executed directly, but fail with a seg fault when submitted through condor

I have a third-party library that I'm attempting to incorporate into a simulation. We have the static library (.a), along with all of its runtime dependencies (shared objects). I've created a very simple application (in C) that is linked against the library. All it does is call an initialization function that is part of the third-party library's API, and then exit. When I run this directly from the command line, it works fine. If I submit the executable to our Condor grid, it fails with a seg fault in strncpy (libc.so.6). I've forced Condor to only run the executable on a particular machine, and if I run it directly on that machine, it works fine.
I'm mostly a Java programmer... limited amount of native coding experience. I'm familiar with tools such as nm, ldd, catchsegv, etc... to the point where I can run them. I don't really know where to start looking for an issue though.
I've run ldd directly on the executing machine, and via a script submitted through condor, along with my executable. ldd reports the same files in both cases.
I don't understand how running it directly works but running it under Condor fails. The process that ultimately executes the program, condor_startd, starts as root and changes its effective uid to that of the submitter. Perhaps this has something to do with it?
I don't know why this would cause an issue, but the culprit was the LANG environment variable. It was not set when running under Condor, but was set to US_EN.UTF-8 when running locally. Adding this value to the Condor execution environment fixed the problem.
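One way to set that (a sketch; the submit file content and executable name are placeholders, and the exact locale string should match what your local shell reports):
cat > sim.sub <<'EOF'
executable  = my_simulation
environment = "LANG=en_US.UTF-8"
queue
EOF
condor_submit sim.sub
Alternatively, getenv = True in the submit file copies the whole submitting shell's environment into the job.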