I would like to pass many macros into rpmbuild without having to type each one out manually, or maintain a long Makefile that repeats -D 'foo bar' over and over. I want to pass these macros into rpmbuild "all at once".
Let me describe my use case. I have a script called buildid that reports information about a build:
user@host: buildid -k tag
1.8.0-1444293343
user@host: buildid -k buildhost.platform
Linux-4.0.7-300.fc22.x86_64-x86_64-with-fedora-22-Twenty_Two
user@host: buildid -k version.formatted.gnu
1.8.0
I use these values in an RPM .spec file like this:
rpmbuild -ba foo.spec -D "tag `buildid -k tag`" -D "buildhost_platform `buildid -k buildhost.platform`" -D "version `buildid -k version.formatted.gnu`"
This is the sucky part - a long command line, with lots of typing. Even if I use a Makefile, it's still ugly.
My buildid script is pretty flexible, though: it can save these buildid values to a file (.buildid_rpmmacros or whatever), or, better, just print them out in a nice format like this:
user@host: buildid -f rpmmacros
%buildhost.hostname myhost.example.com
%buildhost.platform Linux-4.0.7-300.fc22.x86_64-x86_64-with-fedora-22-Twenty_Two
%buildhost.release 4.0.7-300.fc22.x86_64
%buildhost.system Linux
%buildhost.version #1 SMP Mon Jun 29 22:15:06 UTC 2015
%git.branch master
%git.revision 48a30d610cf1ab57dcc6947b2366b6a5e9a1fcc6
%git.revision.short 48a30d6
%tag 1.8.0-1444293343
%timestamp 1444293343
%version.formatted.gnu 1.8.0
%version.formatted.short 1.8.0
%version.formatted.win 1.8.0.0
%version.major 1
%version.minor 8
%version.release
%version.revision 0
If I could do something like this, it would be ideal:
rpmbuild -ba foo.spec --macros-stdin < `buildid -f rpmmacros`
Finally, the macros are project/RPM specific, not global, so storing them in ~/.rpmmacros would not be a viable solution. I can save them to a file easily (buildid -nF rpmmacros), but I'm already persisting them to a file in INI format and just want to output them temporarily in RPM macro format (buildid -f rpmmacros).
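(For reference, plain bash can get close to this already by collecting the repeated -D options in an array built from the two-column output above; this is just a sketch, not an rpmbuild feature:)
defs=()
while read -r name value; do
    defs+=(-D "${name#%} $value")   # strip the leading % for rpmbuild's -D form
done < <(buildid -f rpmmacros)
rpmbuild -ba foo.spec "${defs[@]}"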
Shameless plug: if you're interested in the buildid tool, it's at https://github.com/jamesread/buildid
If you want reproducible builds, you should use 'mock' rather than calling rpmbuild directly.
Mock can have per-project configs, and you put your macros into those configs. E.g.:
$ cp /etc/mock/fedora-22-x86_64.cfg ~/my-project-fedora-22-x86_64.cfg
$ vi ~/my-project-fedora-22-x86_64.cfg
and add this line:
config_opts['macros']['%Add_your_macro_name_here'] = "add macro value here"
And now you can build it with those macros defined:
$ mock -r ~/my-project-fedora-22-x86_64.cfg foo.src.rpm
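Since buildid -f rpmmacros already prints names with the leading % that config_opts['macros'] expects, you could even generate those config lines rather than typing them. A sketch, assuming the two-column output shown in the question and values free of double quotes:
buildid -f rpmmacros | while read -r name value; do
    echo "config_opts['macros']['$name'] = \"$value\"" >> ~/my-project-fedora-22-x86_64.cfg
done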
I'm trying to use the --immediate-submit option on a PBSPro cluster. I tried using an in-place modification of the dependencies string to adapt it to PBSPro, similar to what is done here.
snakemake --cluster "qsub -l wd -l mem={cluster.mem}GB -l ncpus={threads} -e {cluster.stderr} -q {cluster.queue} -l walltime={cluster.walltime} -o {cluster.stdout} -S /bin/bash -W $(echo '{dependencies}' | sed 's/^/depend=afterok:/g' | sed 's/ /:/g')"
This last part gets converted into, for example:
-W depend=afterok: /g/data1a/va1/dk0741/analysis/2018-03-25_marmo_test/.snakemake/tmp.cyrhf51c/snakejob.trimmomatic_pe.7.sh
There are two problems here:
How can I get the dependencies string to output job ID instead of the script path? The qsub command normally outputs the job ID to stdout, so I'm not sure why it's not doing so here.
How do I get rid of the space after afterok:? I've tried everything!
As an aside, it would be helpful if there were some option to debug the submission or not to delete the tmp.cyrhf51c directory in .snakemake -- is there some way to do this?
Thanks,
David
I suggest using a profile for this instead of trying to find an ad-hoc solution; this will also help with debugging. E.g., there is already a pbs-torque profile available (https://github.com/Snakemake-Profiles/pbs-torque); probably there is not much to change to adapt it to PBSPro?
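For the two concrete problems, a common workaround is to submit through a small wrapper script, so that only qsub's stdout (the job ID) reaches Snakemake and an empty {dependencies} doesn't leave a dangling afterok:. An untested sketch (the name immediate_submit.sh is made up; when the cluster command is given as "immediate_submit.sh {dependencies}", Snakemake appends the jobscript as the last argument):
#!/bin/bash
# immediate_submit.sh (hypothetical): called as
#   immediate_submit.sh <parent job IDs...> <jobscript>
jobscript="${@: -1}"
dependencies="${@:1:$#-1}"
if [ -n "$dependencies" ]; then
    # join the parent job IDs with ':' for -W depend=afterok:
    qsub -W "depend=afterok:$(echo $dependencies | tr ' ' ':')" "$jobscript"
else
    qsub "$jobscript"
fi
It would then be invoked with something like snakemake --notemp --immediate-submit --cluster "immediate_submit.sh {dependencies}".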
My problem is that I have a cluster server with Torque PBS and want to use it to run a sequence comparison with the program RapSearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster server.
I've tried with: echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2 but nothing happened.
Do you have any suggestions? Where am I wrong? Help please.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example.
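A minimal one might look like this (a sketch; the job name, walltime, and node request are placeholders to adapt):
#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=2
#PBS -l walltime=24:00:00
# qsub starts jobs in your home directory, so change to the submission directory first
cd "$PBS_O_WORKDIR"
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Save it as e.g. rapsearch.pbs and submit it with qsub rapsearch.pbs.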
My task is to analyze a memory dump. I've found the location of a PDF file, and I want to analyze it with VirusTotal. But I can't figure out how to "download" it from the memory dump.
I've already tried it with this command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
But in my dumpfiles directory there is just a .vacb file, which is not a valid PDF.
I think you may be missing a command-line argument in your command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
If you are not getting a .dat file in your output folder, you can add -u:
-u, --unsafe Relax safety constraints for more data
Can't test this without access to the dump, but you should be able to rename the .dat file created to .pdf.
So it should look something like this:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/ -u
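Once the .dat files land in dumpfiles/, a quick rename is enough before uploading to VirusTotal (a one-liner sketch):
for f in dumpfiles/*.dat; do mv -- "$f" "${f%.dat}.pdf"; done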
You can check out the documentation on the commands here.
VACB is "virtual address control block". Your output type seems to be wrong.
Try something like:
$ python vol.py -f img.vmem dumpfiles --output=pdf --output-file=bla.pdf --profile=[your profile] -D dumpfiles/
or check out the cheat sheet here.
Looking for the origin of this error message:
Processing: +([^_]).flv
date: +([^_]).flv: No such file or directory
I started getting this at some point in the last few months (I can't say when, as I wasn't logging my cron output. I know, I know!).
When I originally wrote this, it worked fine for at least two months. I'm wondering if an sh update broke it?
The script runs via crontab and gets all .flv files in the current directory without an underscore and processes each one. It then checks the modified date for files that have been created in the last 24 hours and runs the yamdi meta tag injector for .flv files.
It seems to me that it's not recognizing the pattern as a pattern, and is instead looking for it as a literal file. If I run this script from an SSH shell it works fine; it's only when running via cron that it gives this error.
shopt -s extglob
now=$(date +"%s")
for f in +([^_]).flv; do
echo "Processing: $f"
age=$(date -r "$f" +"%s")
calc=$(((now-age) / 60 / 60))
if (( calc < 24 )); then
echo "$f age=$calc"
yamdi -i "$f" -o "$f".seek
rm "$f"
cp "$f".seek "$f"
touch -d "@$age" "$f"
fi
done
This is most likely a problem of the wrong shell being used; make sure your script's first line represents the right shell:
#!/bin/bash
for bash, or whatever shell you wrote this for. You might also want to check the environment variables that cron sets (a very common problem: one assumes everything is set up correctly, but the environment cron offers to the scripts it executes is different).
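For example, cron typically runs jobs with /bin/sh, which doesn't understand extglob patterns like +([^_]).flv, so the pattern is treated as a literal filename. Either set the shell in the crontab or invoke bash explicitly (a sketch; paths and schedule are placeholders):
# in the crontab: set the shell for all jobs...
SHELL=/bin/bash
0 * * * * cd /path/to/videos && /path/to/script.sh
# ...or invoke bash explicitly for just this one job:
0 * * * * cd /path/to/videos && /bin/bash /path/to/script.sh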
What is the difference between these two redirections?
[localhost ~]$ echo "something" > a_file.txt
[localhost ~]$ echo "something" >| a_file.txt
I can't seem to find any documentation about >| in the help.
>| overrides the shell's noclobber option (set with set -o noclobber), which prevents existing files from being overwritten by redirection.
Basically, with noclobber, you get an error if you try to overwrite an existing file using >:
$ ./program > existing_file.txt
bash: existing_file.txt: cannot overwrite existing file
$
Using >| will override that error and force the file to be written over:
$ ./program >| existing_file.txt
$
It's analogous to using the -f or --force option on many shell commands.
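To check whether noclobber is active in your current shell, and to toggle it, a quick session sketch:
$ set -o | grep noclobber    # show the current setting
noclobber       off
$ set -o noclobber           # turn it on
$ set +o noclobber           # and off again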
From the Bash Reference Manual Section "3.6.2 Redirecting Output":
If the redirection operator is >, and the noclobber option to the set builtin has been enabled, the redirection will fail if the file whose name results from the expansion of word exists and is a regular file. If the redirection operator is >|, or the redirection operator is > and the noclobber option is not enabled, the redirection is attempted even if the file named by word exists.
Searching for "bash noclobber" generally brings up articles that mention this somewhere. See this question on SuperUser, this section in O'Reilly's "Unix Power Tools", and this Wikipedia article on Clobbering for examples.