How do I set up instruction & data memory addresses when using "riscv32-unknown-elf-gcc"?

I designed a RISCV32IM processor, and I used "riscv32-unknown-elf-gcc" to generate code for testing it.
However, the PC (instruction memory address) values and data memory addresses of the generated code were arbitrary. I used this command:
riscv32-unknown-elf-gcc -march=rv32im -mabi=ilp32 -nostartfiles test.c
Can I set the instruction and data memory addresses to the values I want?
Thanks.
Thank you for the answer.
I designed only the HW, and this is my first time using the SW toolchain.
Even if my question is rudimentary, please understand.
The figure is the result of the "-v" option.
I can't modify the script file in place because I use the RISC-V toolchain in a Docker environment.
So I copied the script file (elf32lriscv.x) and modified it, changing 0x10000 to 0x00000.
The file name of the copied script is "test5.x".
It was then executed as follows.
What am I doing wrong?

The RISC-V compiler is using the default linker script to place the text and data sections.
If you add the -v option to your command line, riscv32-unknown-elf-gcc -v -march=rv32im -mabi=ilp32 -nostartfiles test.c, you will see the options collect2 passes to the linker (normally the emulation will be -melf32lriscv). You can find the linker scripts in ${path_to_toolchain}/riscv32-unknown-elf/lib/ldscripts/ (the default one is the .x file).
You can also use riscv32-unknown-elf-ld --verbose as explained by @Frant. However, you need to be careful if the toolchain was compiled with multilib enabled and you compile for rv64 while the default is rv32, or vice versa. That is probably not the case here, but to be sure you can specify the architecture with -A elf32riscv for rv32.
To set the addresses you can create your own linker script, or copy and modify the default one. You can change only the executable's start address as explained by @Frant, or make more extensive modifications and place whatever you want wherever you want.
Once your own linker script is ready, you can pass it to the linker with -Wl,-T,${own_linker_script}. Your command will be riscv32-unknown-elf-gcc -march=rv32im -mabi=ilp32 -nostartfiles test.c -Wl,-T,${own_linker_script}
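As an illustration, a minimal custom linker script could look like the sketch below. The region names, origins and lengths are assumptions for a core with instruction memory at 0x00000000 and data memory at 0x10000000; match them to your actual memory map (and note that with -nostartfiles you must provide your own _start).
/* own_linker_script.x -- minimal sketch, not the toolchain's default script */
OUTPUT_ARCH(riscv)
ENTRY(_start)
MEMORY
{
  IMEM (rx) : ORIGIN = 0x00000000, LENGTH = 64K   /* instruction memory */
  DMEM (rw) : ORIGIN = 0x10000000, LENGTH = 64K   /* data memory */
}
SECTIONS
{
  .text   : { *(.text*) }            > IMEM
  .rodata : { *(.rodata*) }          > IMEM
  .data   : { *(.data*) *(.sdata*) } > DMEM
  .bss    : { *(.bss*) *(.sbss*) }   > DMEM
}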

Related

How to run post-build commands in Meson?

How can I get Meson to run a command after building a target?
Eg. I have an executable:
executable('target.elf', 'source1.c', 'source2.c')
And after target.elf is built I want to execute a command (e.g. chmod -x target.elf) on it.
I tried custom_target(), but that requires an output. I don't have a new output, I just have target.elf. I tried run_command(), but I didn't know how to make it execute after the build.
executable now has an install_mode argument (added in 0.47.0) to specify the file mode in symbolic format, and optionally the owner/uid and group/gid, for the installed files.
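A minimal sketch of that usage (the mode string and install_dir here are only placeholders, and install_mode applies to the installed copy, not the build output):
executable('target.elf', 'main.c',
           install : true,
           install_dir : 'bin',
           install_mode : 'rwxr-xr-x')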
I just noticed that yasushi-shoji has provided this answer already.
The following code should do it.
project('tutorial', 'c')
exec = executable('target.elf', 'main.c', build_by_default : false)
custom_target('final binary',
              depends : exec,
              input : exec,
              output : 'fake',
              command : ['chmod', '+x', '@INPUT@'],
              build_by_default : true)
Note that I'm using custom_target() because I want the fake target to always run: since the chmod command doesn't generate the file fake declared as the output of custom_target(), successive ninja commands will always re-run the target.
If you don't want this behaviour, there are two ways:
You can write a script which chmods target.elf and then copies it to the declared output path, thus effectively creating the output file. Make sure to change the output file in meson.build if you do so.
If you don't mind typing ninja chmod instead of ninja, you can use run_target().
# optional
run_target('chmod',
           command : ['chmod', '+x', exec])
Another alternative is to use install_mode for executable().
Also note that you should generally use find_program() instead of calling plain chmod. This example doesn't do so for simplicity.
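For instance, the run_target() variant above could be written with find_program() roughly like this (a sketch only):
chmod_prog = find_program('chmod')
run_target('chmod',
           command : [chmod_prog, '+x', exec])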

How to check if a full path executable is correct in autoconf

I am writing a macro to check for Cython on the system on which my program is about to be compiled.
I can use AC_PATH_PROG just fine to find cython when it is in the PATH, but the user may want to specify it on the configure line like this:
./configure CYTHON=/home/user/cythonFoo
I just can't find the right way to check for it.
This is not working; it always passes the test whatever the value of CYTHON is:
AC_PATH_PROG( CYTHON, $CYTHON,"" )
This is kind of working, but not really usable, because it would require me to extract the filename and the path beforehand:
AC_PATH_PROG( CYTHON, cythonFoo,"", /home/user/ )
So I've written my own test, but I think there may be a standard way to do it:
AC_MSG_CHECKING([whether Cython path $CYTHON is correct])
AS_IF([$CYTHON -V > /dev/null 2>&1], [], [CYTHON=""])
if test -z "$CYTHON"; then
  AC_MSG_RESULT([no])
else
  AC_MSG_RESULT([yes])
fi
You're observing the expected behavior of AC_PATH_PROG. If the user sets CYTHON, AC_PATH_PROG is going to treat it as the cython to use, even if it's bogus. As the first line of its documentation states:
If you need to check the behavior of a program as well as find out whether it is present, you have to write your own test for it
So what you've done is the "standard way".
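For reference, a hand-rolled check along those lines might look roughly like this (a sketch only; the -V probe mirrors the asker's approach and is not an official macro):
AC_ARG_VAR([CYTHON], [path to the cython executable])
AS_IF([test -n "$CYTHON"],
      [AC_MSG_CHECKING([whether $CYTHON works])
       AS_IF([$CYTHON -V > /dev/null 2>&1],
             [AC_MSG_RESULT([yes])],
             [AC_MSG_RESULT([no])
              CYTHON=""])],
      [AC_PATH_PROG([CYTHON], [cython], [])])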

How to use the program's exit status at compile time?

This question is subsequent to my previous one: How to integrate such kind of source generator into CMake build chain?
Currently, the C source file is generated from XS in this way:
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs} PROPERTIES GENERATED 1)
add_custom_target(${file_src_by_xs}
    COMMAND ${XSUBPP_EXECUTABLE} ${XSUBPP_EXTRA_OPTIONS} ${lang_args} ${typemap_args} ${file_xs} >${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    DEPENDS ${file_xs} ${files_xsh} ${_XSUBPP_TYPEMAP_FILES}
    COMMENT "generating source from XS file ${file_xs}"
)
The GENERATED property tells CMake not to check for the existence of this source file at configure time, and add_custom_target makes xsubpp re-run on every build. The reason for always re-running is that xsubpp generates an incomplete source file even if it fails, so there is a possibility that the build would otherwise continue with an incomplete source file.
I found it time-consuming to always re-run the source generator and recompile its output. So I want to re-run it only when the dependent XS files are modified. However, if I do that, the incomplete generated source file must be deleted on failure.
So my question is: is there any way to remove the generated file only when the program exits abnormally at compile time?
Or, more generally: is there any way to run a command depending on another command's exit status at compile time?
You can always write a wrapper script in your favorite language, e.g. Perl or Ruby, that runs xsubpp and deletes the output file if the command failed. That way you can be sure that if the file exists, it is correct.
In addition, I would suggest that you use the OUTPUT keyword of add_custom_command to tell CMake that the file is the result of executing the command. (And if you do that, you don't have to set the GENERATED property manually.)
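As an illustration, such a wrapper could look roughly like the following (sketched in POSIX shell rather than Perl or Ruby; the script name is hypothetical):
#!/bin/sh
# xsubpp_wrap.sh: run a generator command and delete its output if it fails,
# so an incomplete file never survives the build step.
# Usage: xsubpp_wrap.sh <output-file> <generator command...>
out="$1"
shift
if "$@" > "$out"; then
    exit 0
else
    rm -f "$out"
    exit 1
fi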
Inspired by @Lindydancer's answer, I achieved the purpose with multiple COMMANDs in one custom command, and it doesn't need an external wrapper script.
set(source_file_ok ${source_file}.ok)
add_custom_command(
    OUTPUT ${source_file} ${source_file_ok}
    DEPENDS ${xs_file} ${xsh_files}
    COMMAND rm -f ${source_file_ok}
    COMMAND xsubpp ...... >${source_file}
    COMMAND touch ${source_file_ok}
)
add_library(${xs_lib} ${source_file})
add_dependencies(${xs_lib} ${source_file} ${source_file_ok})
The custom command runs three commands. The OK file only exists when xsubpp succeeds, and this file is added as a dependency of the library. When xsubpp fails, the missing OK file will force the custom command to be run again on the next build.
The only flaw is cross-platform portability: not all OSes have touch and rm, so the names of these two commands should be chosen according to the OS type.
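One portable option, not from the original answer but just a sketch, is to use CMake's command mode, which ships equivalents of rm and touch on every platform:
    COMMAND ${CMAKE_COMMAND} -E remove ${source_file_ok}
    COMMAND xsubpp ...... >${source_file}
    COMMAND ${CMAKE_COMMAND} -E touch ${source_file_ok}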

How to add a user defined function in QDB Library?

QDB is a database provided with the QNX Neutrino package. I went through the QDB documentation to add a user-defined SQL function: http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.qdb_en_dev_guide/writing_functions.html?cp=2_0_8
I created a source file which contains my user-defined SQL function written in C and the qdb_function structure definition. I built it with a makefile to create libudf.so.
As suggested by QDB, I added Function = udftag@libudf.so in qdb.cfg. But when running qdb at the shell prompt, it gives the following error:
qdb -I basic -V -R set -v -c /etc/sql/qdb.cfg -s de_DE@cldr -o tempstore=/fs/tmpfs
QDB: No script registered for handling corrupt database.
qdb: processing [TempMainAddressBook]Function - Can't access shared library
and qdb exits immediately.
I have tried the following things:
made sure the sqlite3 library is added in the makefile
the source code is strictly C; I used extern "C" to avoid name mangling because the file extension is .cpp, and I also tried with a .c extension
gave the absolute path of libudf.so in qdb.cfg, as Function = udftag@/usr/lib/libudf.so
the qdb_function struct is properly defined in the library's source code
tried without the static declaration of the function (mentioned in the QDB docs)
After all these attempts I am still getting the same error every time: Can't access shared library.
If anyone has any idea how to resolve this error, please share.
Suggestion 1: run qdb with LD_DEBUG=1 set, as in:
LD_DEBUG=1 qdb command line options
This will output a lot of debug information from the dynamic loader as it attempts to locate and then load the .so files. Check what path it prints before the "Can't access" message is displayed.
Suggestion 2: obvious, but make sure that the permissions on the .so file are OK. Do you have the execute permission set?
Suggestion 3: check whether the error message is identical if you completely remove the .so file from the system.
Suggestion 4: increase the number of lower-case 'v's. QDB likely supports more, with progressively more verbose information as you add them (6 should be enough for full verbosity).

cmake rejects certain cflags

I have here a project (though I believe the issue is independent of the package used) that, when configured with
cmake -DCMAKE_C_FLAGS_RELEASE="-O2 -msse"
uses those exact flags. However, as soon as I use
cmake -DCMAKE_C_FLAGS_RELEASE="-O2 -msse -fmessage-length=0"
cmake becomes stubborn and ignores my desired flags, instead falling back to the project's defaults. This is even reflected in CMakeCache.txt, though I do not know what to make of it.
CMakeCache.txt:CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMakeCache.txt:CMAKE_C_FLAGS_RELEASE=-O2 -msse -fmessage-length:UNINITIALIZED=0
The question on the table is: how do I get my flags used?
This is a known bug in the command-line parsing in CMake. It gets confused by the extra = sign and thinks the variable name is CMAKE_C_FLAGS_RELEASE=-O2 -msse -fmessage-length with the value 0!
One way to get the option into the cache in the correct format is to use the cache editor. After running cmake initially, run make edit_cache, then press t to toggle advanced options, Ctrl-n down to the CMAKE_C_FLAGS_RELEASE option, hit Enter to edit it, and type in the value you want. After that, type c then g to configure and generate the Makefiles.
Alternatively, just edit the cache with your $EDITOR and enter the correct line:
CMAKE_C_FLAGS_RELEASE:STRING=-O2 -msse -fmessage-length=0
This isn't very elegant, but it should get you motoring.
BTW, the type declaration also works from the command line, e.g.:
cmake -DCMAKE_C_FLAGS_RELEASE:STRING="-O2 -msse -fmessage-length=0"
should work. Still kind of awkward though.