By default the xz compressor uses a single thread (e.g. to compress newly created packages from the AUR). There's a --threads option for using more threads.
Where can I set global settings for xz so that the threads option is set to my value? I couldn't find any info in the man page.
The man page does talk about the use of environment variables for this purpose:
ENVIRONMENT
xz parses space-separated lists of options from the environment variables XZ_DEFAULTS
and XZ_OPT, in this order, before parsing the options from the command line. Note
that only options are parsed from the environment variables; all non-options are
silently ignored. Parsing is done with getopt_long(3) which is used also for the
command line arguments.
XZ_DEFAULTS
User-specific or system-wide default options. Typically this is set in a
shell initialization script to enable xz's memory usage limiter by default.
Excluding shell initialization scripts and similar special cases, scripts must
never set or unset XZ_DEFAULTS.
XZ_OPT This is for passing options to xz when it is not possible to set the options
directly on the xz command line. This is the case e.g. when xz is run by a
script or tool, e.g. GNU tar(1):
XZ_OPT=-2v tar caf foo.tar.xz foo
Scripts may use XZ_OPT e.g. to set script-specific default compression options. It is still recommended to allow users to override XZ_OPT if that is
reasonable, e.g. in sh(1) scripts one may use something like this:
XZ_OPT=${XZ_OPT-"-7e"}
export XZ_OPT
It looks like XZ_DEFAULTS will do what you want.
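For example, to make multithreading the default, you could set XZ_DEFAULTS in a shell initialization file; --threads=0 tells xz to use as many threads as there are processor cores. A minimal sketch for ~/.bashrc or a similar init file:
# make xz use all available cores by default
export XZ_DEFAULTS="--threads=0"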
When working on compiled documents (LaTeX, RMarkdown, etc.), I usually set up a make rule that uses inotifywait to watch the input files and automatically rebuild the output file whenever they change.
For example:
dependencies = main.tex

main.pdf: $(dependencies)
	latexmk -lualatex --shell-escape $<

watch:
	while true; do inotifywait --event modify $(dependencies); $(MAKE); done
I'm now trying to migrate from make to snakemake. How can I set up something similar with snakemake?
With Snakemake you get the power of Python. For example, you can use the inotify Python module to wait for updates and call the snakemake.snakemake function each time you detect a change. But it would be much easier to reuse the bash loop you already have:
while true; do inotifywait --event modify $(dependencies); snakemake; done
Note that $(dependencies) is make syntax; in plain bash you list the watched files yourself, as in the sketch below.
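A standalone version of that loop as a small shell script (a sketch; assumes inotify-tools is installed, and the watched file list is just an example):
#!/usr/bin/env bash
# Re-run snakemake whenever a watched input file is modified.
files="main.tex"
while true; do
    inotifywait --event modify $files   # block until one of the files changes
    snakemake --cores 1                 # rebuild out-of-date targets
done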
I'm using Snakemake to execute rules on a SLURM cluster.
One of the mandatory flags for this cluster is ntasks-per-node, which in a batch script would be specified as e.g. #SBATCH --ntasks-per-node=5. My understanding is that I need to specify this in a snakemake rule as
rule rule_name:
    ...
    resources:
        time='00:00:30', # 30 sec
        ntasks-per-node=1
    ...
However, running this Snakefile I get
SyntaxError in line 14 of .../Snakefile:
keyword can't be an expression
because there are dashes in the name. But as far as I can tell, replacing the dashes with underscores doesn't work. What should I do here?
(I'm using the SLURM profile here if that matters)
Try quoting. But more importantly, only the resources defined in the RESOURCE_MAPPING variable in slurm_submit.py will be picked up, and the default cookiecutter does not include an ntasks-per-node argument. Hence, quoting alone won't solve the issue.
There are multiple options.
Edit slurm_submit.py. Add the ntasks-per-node argument and provide whatever alias(es) you would like to use:
RESOURCE_MAPPING = {
    "time": ("time", "runtime", "walltime"),
    "mem": ("mem", "mem_mb", "ram", "memory"),
    "mem-per-cpu": ("mem-per-cpu", "mem_per_cpu", "mem_per_thread"),
    "nodes": ("nodes", "nnodes"),
    # some suggested aliases
    "ntasks-per-node": ("ntasks-per-node", "ntasks_per_node", "ntasks")
}
I would only do this if there actually are situations where you might change this value.
Define an invocation-level configuration. Snakemake's --cluster-config parameter can still be used to provide additional configuration settings. In this case, a file like
# myslurm.yaml
__default__:
  ntasks-per-node: 1
Then use it with
snakemake --profile slurm --cluster-config myslurm.yaml
This is likely the least work to get going.
Define a global value in the profile. The cookiecutter profile generator provides several ways to define global options that rarely need to change for the profile.
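For instance, the cookiecutter for the official slurm profile prompts for an sbatch_defaults value that is applied to every submitted job. Assuming a profile version with that option (check yours; this is a sketch, not a guaranteed option name), the prompt could be answered with:
sbatch_defaults: ntasks-per-node=1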
In my .ini file I have
[behave]
format=rerun
outfiles=rerun_failing.features
So I want to use the "rerun_failing.features" file to store scenarios that fail.
However, when I run the '--steps-catalog' command, it also writes the catalog to that same file. Why is that?
How do I set up two separate files for '--rerun' and '--steps-catalog'?
Thanks!
Use behave --dry-run -f steps.catalog ... instead. The output of the steps.catalog formatter is written to stdout, not to the "rerun-outputfile".
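For example, redirecting stdout keeps the catalog separate from the rerun file (a sketch; the output file name is arbitrary):
behave --dry-run -f steps.catalog > steps_catalog.txt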
I have a script, say sql_result.sh, in the directory /tmp/SQL_QUERY that just calls a SQL script in the same location and executes the SQL commands.
Code:
sqlplus -S $MY_UN/$MY_PW@$MY_DB <<!
set serveroutput on;
@/tmp/SQL_QUERY/sql_file1
quit
!
However, suppose I have two SQL files, sql_file1.sql and sql_file1.sql_new, in that directory. Which of the two SQL scripts will my Unix script pick? How and why?
Thanks
Short answer: normally sql_file1.sql
The default extension is .SQL, as explained in the docs for @:
file_name[.ext]
Represents the script you wish to run. If you omit ext, SQL*Plus assumes the default
command-file extension (normally SQL). For information on changing the default extension,
see the SUFFIX variable of the SET command.
As it says, you can use the SET command to change which extension is used, or you can specify the extension in the script explicitly.
The documentation for SET SUFFIX says:
Sets the default file extension that SQL*Plus uses in commands that refer to scripts. SUFFIX does not control extensions for spool files.
Example
To change the default command-file extension from .SQL to .UFI, enter
SET SUFFIX UFI
If you then enter
@EXAMPLE
SQL*Plus will look for a file named EXAMPLE.UFI instead of EXAMPLE.SQL.
(Note that a SET command might be present in your LOGIN file.)
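So, applied to the script above, you can bypass SUFFIX entirely by naming the extension explicitly (a sketch reusing the asker's variables):
sqlplus -S $MY_UN/$MY_PW@$MY_DB <<!
set serveroutput on;
@/tmp/SQL_QUERY/sql_file1.sql_new
quit
!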
Can someone concisely explain the differences between the three variable forms below? In all honesty, when I create a Jenkins job I randomly guess among the three until something works, but I'd love to understand rather than pick blindly.
${ENV,var="BUILD_USER"}
${BUILD_USER}
$BUILD_USER
Also, are there other ways of writing variables in Jenkins that I've missed, beyond the three above?
When used in a statement:
${ENV,var="BUILD_USER"}--evaluates the system environment variables and returns the value for the variable BUILD_USER.
example: curl ${ENV,var="BUILD_USER"}/api/xml
${BUILD_USER} --returns the value of the BUILD_USER variable in the current script memory space.
example: curl ${BUILD_USER}/api/xml
$BUILD_USER--used to assign values to the BUILD_USER variable.
example: $BUILD_USER = "BUILD_USER"
In general, variable expansion is up to the plugin that interprets a configuration value.
For example, if you set up a job parameter GIT_REPOSITORY and use it to configure an address where git clone should go by putting $GIT_REPOSITORY into the git repository field, it works, but only because the Jenkins git plugin has implemented variable expansion support.
Many plugins do implement it, but you cannot know for sure unless you test it. However, these days the support is so common that it is safe to assume it will work.
Both forms of reference, $VAR and ${VAR}, work and are equivalent. The latter form is useful when the variable is surrounded by other characters that could be interpreted as part of its name, like $VARX (Jenkins would look for a variable named VARX) versus ${VAR}X (Jenkins understands the variable is named VAR).
These rules have been modeled after variable expansion rules in Unix shells. Indeed, the job variables are made available as environment variables to build steps and in the Unix shell build step the variables are used the same way as above.
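For example, in a Unix shell build step (VAR stands in for any job parameter; the file names are made up):
# Both references expand to the same value.
echo "$VAR"
echo "${VAR}"
# Braces are required when the name is followed directly by
# characters that could be read as part of it:
cp output.tar.gz "releases/${VAR}_final.tar.gz"   # variable VAR plus a literal suffix
# Without braces, $VAR_final would be read as a variable named VAR_final.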
In a Windows CMD build step the variables are again used like any Windows environment variable: %VAR%.