Execute ROOT commands to load libraries before starting execution - root-framework

Is there a way to tell ROOT to execute (say) the following commands at startup?
.L /usr/lib/libgsl.so
.L /usr/lib/libgslcblas.so
It would be convenient, as I have to execute these commands every time I start ROOT. My .C file uses these libraries.
I found the -e option, but I cannot use it for more than one line of commands.

Sure, just add the following to your ~/.rootlogon.C (or create the file if you don't have one):
{
    // old content here
    gROOT->ProcessLine(".L /usr/lib/libgsl.so");
    gROOT->ProcessLine(".L /usr/lib/libgslcblas.so");
}
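As an aside, a common alternative is to load the libraries through gSystem instead of .L. A minimal sketch, assuming the same library paths as above (GSL depends on a CBLAS implementation, hence the load order):
{
    // ~/.rootlogon.C, using TSystem::Load instead of .L
    gSystem->Load("/usr/lib/libgslcblas.so"); // load CBLAS first, GSL needs it
    gSystem->Load("/usr/lib/libgsl.so");
}
gSystem->Load also searches the dynamic library path, so plain library names without directories should work too.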

How to run post build commands in meson?

How can I make Meson run a command after building a target?
E.g. I have an executable:
executable('target.elf', 'source1.c', 'source2.c')
And after target.elf is built, I want to execute a command (e.g. chmod -x target.elf) on it.
I tried custom_target(), but that requires an output; I don't have a new output, I just have target.elf. I tried run_command(), but I didn't know how to make it execute after the build.
executable now has an install_mode argument (added in 0.47.0) to specify the file mode in symbolic format, and optionally the owner/uid and group/gid, for the installed files.
I just noticed that yasushi-shoji has provided this answer already.
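A minimal sketch (the mode and owner values here are just illustrative):
project('tutorial', 'c')
executable('target.elf', 'source1.c', 'source2.c',
           install : true,
           install_mode : ['rwxr-xr-x', 'root', 'root'])
Note that install_mode only affects the installed copy, not the file in the build directory.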
The following code should do.
project('tutorial', 'c')
exec = executable('target.elf', 'main.c', build_by_default : false)
custom_target('final binary',
              depends : exec,
              input : exec,
              output : 'fake',
              command : ['chmod', '+x', '@INPUT@'],
              build_by_default : true)
Note that because I want to always run the fake target, I'm using custom_target(). However, since chmod +x doesn't actually generate the file fake declared as the output of custom_target(), successive ninja invocations will always re-run the target.
If you don't want this behaviour, there are two ways:
You can write a script which chmods target.elf and then copies it to the declared output, thus effectively creating the output file. Make sure to change the output file in the meson.build if you do so.
If you don't mind typing ninja chmod instead of ninja, you can use run_target().
# optional
run_target('chmod',
           command : ['chmod', '+x', exec])
Another alternative is to use install_mode for executable().
Also note that you should always use find_program() instead of a plain 'chmod' string. This example doesn't do so for simplicity.
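For completeness, the run_target() above would look like this with find_program() (a sketch):
# locate chmod on the build machine instead of hard-coding the name
chmod = find_program('chmod')
run_target('chmod',
           command : [chmod, '+x', exec])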

IntelliJ: Dynamically updated file header

By default, IntelliJ IDEA will insert (something like) the following as the header of a new source file:
/**
* Created by JohnDoe on 2016-04-27.
*/
The corresponding template is:
/**
* Created by ${USER} on ${DATE}.
*/
Is it possible to update this template so that it inserts the last date of modification when the file is changed? For example:
/**
* Created by JohnDoe on 2016-03-27.
* Last modified by JaneDoe on 2016-04-27
*/
It is not supported out of the box. I suggest you do not include information about the author and last edit/create time in files at all.
The reason is that your version control system (Git, SVN) records the same information automatically. Manual labelling just duplicates already existing info, is more error prone, and needs to be updated by hand.
Here's a working solution similar to what I'm using. Tested on macOS.
Create a bash script which replaces the first occurrence of Last modified by JaneDoe on $DATE, but only if the exact value is not already contained in the file:
#!/bin/bash
# Update the "Last modified by ..." line with today's date,
# but only if the file doesn't already carry today's date.
FILE=src/java/test/Test.java
DATE=$(date '+%Y-%m-%d')
PREFIX="Last modified by JaneDoe on "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
    # replace only the first matching line (BSD sed, hence -i '')
    sed -i '' "1,/$STRING/ s/$STRING/$SUBSTITUTE/" "$FILE"
fi
Install File Watchers plugin.
Create a file watcher with an appropriate scope (it may be this single file, or any other scope, so that any change in the project's source code updates the modified date, version, etc.) and put the path to your bash script into the Program field.
Now the date will update every time the file changes. If you want to update the date for each file separately, pass $FilePath$ as an argument to the script.
This might have been just a comment to @oleg-mikhailov's excellent idea, but the code snippet won't fit. Basically, I just tweaked his solution.
I needed a slightly different syntax, but that's not the issue. The issue was that when the script ran automatically upon file save using the File Watchers plugin, if it ran on a file which doesn't include PREFIX, it would run over and over forever.
I presume that the issue is with the plugin itself, as it didn't happen when the script was run from the shell, but I'm not sure why it happened.
Anyway, I ended up running the following script (as I said, only a slight change with respect to the original). The new script also raises an error if the prefix doesn't exist in the file. For me this is a feature, as PyCharm prompts me with the error and I can fix the file.
Tested with PyCharm 2021.2.3 on macOS 11.6.
#!/bin/bash
# Update the "last_modified_date:" line with today's date.
# Fails loudly when the prefix is missing so the IDE surfaces the error.
FILE="$1"
DATE=$(date '+%Y-%m-%d')
PREFIX="last_modified_date: "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
    if grep -q "$PREFIX" "$FILE"; then
        sed -i '' "s/$STRING/$SUBSTITUTE/" "$FILE"
    else
        echo "Error!"
        echo "'$PREFIX' doesn't appear in $FILE"
        exit 1
    fi
fi
PhpStorm does not have a hook for launching a task after it detects a file change (it has one only for uploading to a server). Its file templating applies on file creation, not on file change.
The behaviour you want (automatically changing a file after it is manually changed) can be useful for lots of things, but it is a circular headache for an editor: if changing a file triggers a change to the file, does that change trigger yet another change, and so on?
However, you can perhaps enable Live Templates and trigger them when you run "Reformat Code", which could rewrite your header template and thereby refresh the modification date.
Another option is to use an external tool such as Grunt, though I don't know whether it can handle PHP files.

How to use the program's exit status at compile time?

This question is subsequent to my previous one: How to integrate such kind of source generator into CMake build chain?
Currently, the C source file is generated from XS in this way:
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs} PROPERTIES GENERATED 1)
add_custom_target(${file_src_by_xs}
    COMMAND ${XSUBPP_EXECUTABLE} ${XSUBPP_EXTRA_OPTIONS} ${lang_args} ${typemap_args} ${file_xs} >${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    DEPENDS ${file_xs} ${files_xsh} ${_XSUBPP_TYPEMAP_FILES}
    COMMENT "generating source from XS file ${file_xs}"
)
The GENERATED property tells CMake not to check the existence of this source file at configure time, and add_custom_target makes xsubpp re-run on every build. The reason for always re-running is that xsubpp generates an incomplete source file even when it fails, so there is a possibility that the build would otherwise continue with an incomplete source file.
I found it time consuming to always re-run the source generator and recompile. So I want it to re-run only when the dependent XS files are modified. However, if I do that, the incomplete generated source file must be deleted when generation fails.
So my question is: is there any way to remove the generated file only when the program exits abnormally at compile time?
Or, more generally: is there any way to run a command depending on another command's exit status at compile time?
You can always write a wrapper script in your favorite language, e.g. Perl or Ruby, that runs xsubpp and deletes the output file if the command failed. That way you can be sure that if it exists, it is correct.
In addition, I would suggest that you use the OUTPUT keyword of add_custom_command to tell CMake that the file is a result of executing the command. (And, if you do that, you don't have to set the GENERATED property manually.)
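A minimal sketch of such a wrapper in plain shell (the script name and argument layout are illustrative, not part of any tool):
#!/bin/sh
# xsubpp_wrapper.sh <output-file> <xsubpp args...>
# Run xsubpp; if it fails, delete the output so an incomplete
# file never survives into the next build.
out="$1"; shift
if ! xsubpp "$@" > "$out"; then
    rm -f "$out"
    exit 1
fi
With the generated file listed under OUTPUT of add_custom_command, it then exists only when xsubpp succeeded.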
Inspired by @Lindydancer's answer, I achieved the purpose with multiple COMMANDs in one target, and it doesn't need an external wrapper script.
set(source_file_ok ${source_file}.ok)
add_custom_command(
    OUTPUT ${source_file} ${source_file_ok}
    DEPENDS ${xs_file} ${xsh_files}
    COMMAND rm -f ${source_file_ok}
    COMMAND xsubpp ...... >${source_file}
    COMMAND touch ${source_file_ok}
)
add_library(${xs_lib} ${source_file})
add_dependencies(${xs_lib} ${source_file} ${source_file_ok})
The custom command runs three COMMANDs. The OK file only exists when xsubpp succeeds, and it is added as a dependency of the library. When xsubpp fails, the missing OK file forces the custom command to run again on the next build.
The only flaw is portability: not all OSes have touch and rm, so the names of these two commands should be chosen according to the OS.
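One portable variant (a sketch, assuming a CMake version that provides the -E remove and -E touch subcommands) is to let cmake itself supply the two helpers:
add_custom_command(
    OUTPUT ${source_file} ${source_file_ok}
    DEPENDS ${xs_file} ${xsh_files}
    COMMAND ${CMAKE_COMMAND} -E remove -f ${source_file_ok}
    COMMAND xsubpp ...... >${source_file}
    COMMAND ${CMAKE_COMMAND} -E touch ${source_file_ok}
)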

How to add a user defined function in QDB Library?

QDB is a database provided with the QNX Neutrino package. I went through the QDB documentation on adding a user-defined SQL function: http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.qdb_en_dev_guide/writing_functions.html?cp=2_0_8
I created a source file containing my user-defined SQL function written in C and the qdb_function structure definition. I built it with a makefile to create libudf.so.
As suggested by the QDB docs, I added Function = udftag#libudf.so to qdb.cfg. But when running qdb from the shell prompt, it gives this error:
qdb -I basic -V -R set -v -c /etc/sql/qdb.cfg -s de_DE#cldr -o tempstore=/fs/tmpfs
QDB: No script registered for handling corrupt database.
qdb: processing [TempMainAddressBook]Function - Can't access shared library
and qdb exits immediately.
I have tried the following things:
made sure the sqlite3 library is added in the makefile
kept the source code strictly C, using extern "C" to avoid name mangling since the file extension is .cpp; I also tried with a .c extension
gave the absolute path of libudf.so in qdb.cfg: Function = udftag#/usr/lib/libudf.so
made sure the qdb_function struct is properly defined in the library's source code
tried without the static declaration of the function (mentioned in the QDB docs)
After all these attempts I still get the same error every time: Can't access shared library.
If anyone has an idea how to resolve this error, please share.
Suggestion 1: run qdb by setting LD_DEBUG=1, like in:
LD_DEBUG=1 qdb command line options
This will output a lot of debug information from the dynamic loader as it attempts to locate and then load the .so files. Check which path it prints just before the "Can't access" message is displayed.
Suggestion 2: obvious, but make sure that the permissions are OK for the .so file. Do you have the execute permission set? (A quick check is sketched after this list.)
Suggestion 3: check whether the error message is identical if you completely remove the .so file from the system.
Suggestion 4: increase the number of lower-case 'v's. QDB likely supports more, with progressively more verbose output as you add them (6 should be enough for full verbosity).
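For suggestion 2, something along these lines (using the library path from the question):
ls -l /usr/lib/libudf.so        # inspect current permissions
chmod a+rx /usr/lib/libudf.so   # grant read and execute to everyone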

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory on every run, or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, ideally one that doesn't choke if the directory doesn't exist at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i.e. remove recursively) since it breaks if the directory doesn't exist. For a moment I thought I could use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existing directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional OVERWRITE or FORCE_WRITE parameter for the STORE command. However, I'm using Pig 0.8.1 out of necessity, and there is no such parameter there.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now but at some point in time I got an error message (something about missing files) where I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration. After SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory:
-- Delete dir
fs -rm -r out