I wonder if it is possible to remap "$" and "#" to other characters.
sample:
#set( $foo = "bar" )
I want to use other characters because these two interfere with the syntax of another script I am using.
The $ and # characters are not configurable in Velocity. Even changing them at compile time would, at the very least, mean recompiling the parser and doing a full code review for standalone $ and # characters...
That said:
Velocity copes pretty well with syntax fragments it cannot parse, like the jQuery $ object. It just renders them as is, and most of the time that does the job.
You can escape your other script's sensitive characters whenever needed, for instance by using the EscapeTool: ${esc.d} for dollar, ${esc.h} for hash.
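For instance, a minimal template sketch using the EscapeTool (this assumes the tool is mapped into the context under its conventional $esc key; the surrounding text is only illustrative):
#set( $foo = "bar" )
## ${esc.d} renders a literal "$" and ${esc.h} a literal "#"
The other script sees a literal dollar here: ${esc.d}someOtherVar
And a literal hash here: ${esc.h}someDirective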
I'm using Snakemake to execute rules on a SLURM cluster.
One of the mandatory flags for this cluster is ntasks-per-node, which in a batch script would be specified as e.g. #SBATCH --ntasks-per-node=5. My understanding is that I need to specify this in a Snakemake rule as
rule rule_name:
    ...
    resources:
        time='00:00:30', # 30 sec
        ntasks-per-node=1
    ...
However, running this Snakefile I get
SyntaxError in line 14 of .../Snakefile:
keyword can't be an expression
because there are dashes in the name. But as far as I can tell, replacing the dashes with underscores doesn't work. What should I do here?
(I'm using the SLURM profile here if that matters)
Try quoting. But more importantly, only the resources that are defined in the RESOURCE_MAPPING variable in slurm_submit.py will be picked up, and the default cookiecutter does not include an ntasks-per-node argument. Hence, quoting alone won't solve the issue.
There are multiple options.
Edit slurm_submit.py. Add the ntasks-per-node argument and provide whatever alias(es) you would like to use.
RESOURCE_MAPPING = {
    "time": ("time", "runtime", "walltime"),
    "mem": ("mem", "mem_mb", "ram", "memory"),
    "mem-per-cpu": ("mem-per-cpu", "mem_per_cpu", "mem_per_thread"),
    "nodes": ("nodes", "nnodes"),
    # some suggested aliases
    "ntasks-per-node": ("ntasks-per-node", "ntasks_per_node", "ntasks")
}
I would only do this if there actually are situations where you might change this value.
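With such an alias in place, the rule itself can use a plain Python identifier for the resource; a sketch mirroring the rule from the question:
rule rule_name:
    ...
    resources:
        time='00:00:30', # 30 sec
        ntasks_per_node=1
    ...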
Define an invocation-level configuration. Snakemake's --cluster-config parameter can still be used to provide additional configuration settings. In this case, a file like
# myslurm.yaml
__default__:
  ntasks-per-node: 1
Then use it with
snakemake --profile slurm --cluster-config myslurm.yaml
This is likely the least work to get going.
Define a global value in the profile. The cookiecutter profile generator provides several options for defining global settings that rarely need to change for the profile.
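For instance, the official Snakemake SLURM profile's cookiecutter asks for default sbatch arguments when the profile is generated; something along these lines (the prompt name sbatch_defaults is from memory, so treat it as an assumption and check the profile's README):
cookiecutter https://github.com/Snakemake-Profiles/slurm.git
...
sbatch_defaults []: ntasks-per-node=1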
Perl 6 Plain-Old-Documentation (perhaps Fancy-New-Documentation) has some features that allow it to construct documentation for things it sees, and the documentation shows up in the $=pod variable at runtime.
However, I was surprised when I couldn't read the docs when I'd made an error in the program text. Here I've left out a statement separator between two statements:
use v6;
BEGIN { put "BEGIN" }
INIT { put "INIT" }
CHECK { put "CHECK" }
"foo" "bar";
DOC INIT { put "DOC INIT" }
DOC BEGIN { put "DOC BEGIN" }
DOC CHECK { put "DOC CHECK" }
=begin pod
=head1 This is a title
This is a bit of pod
=end pod
When I run it with the --doc switch, the program syntax matters (and BEGIN runs):
$ perl6 --doc doc.p6
BEGIN
===SORRY!=== Error while compiling ...
Two terms in a row
------> "foo"⏏ "bar";
expecting any of:
infix
infix stopper
statement end
statement modifier
statement modifier loop
When I fix it, I get some warnings (so, perl6 is compiling) and the compilation-time phasers run:
BEGIN
DOC BEGIN
DOC CHECK
CHECK
WARNINGS for /Users/brian/Desktop/doc.p6:
Useless use of constant string "bar" in sink context (line 9)
Useless use of constant string "foo" in sink context (line 9)
INIT
DOC INIT
This is a title
This is a bit of pod
We already know this is a bit dangerous in Perl 5. A perl -c and a BEGIN block can run code. See How to check if a Perl script doesn't have any compilation errors?. I don't think this is any more dangerous than something we already know, but now it's happening at a time when I'm not explicitly asking something to compile program statements.
I haven't delved into the details of Perl 6 pod and why this might be necessary outside of declarator blocks and .WHY (a cool feature), but it seems like this can lead to trouble. Is there perhaps an external program that might extract the Pod? Or a way to do without the declarators unless the program will run?
Yes, the whole file has to be parsed, which in turn requires running BEGIN and use statements and such.
The Perl 6 language is designed for one-pass parsing from top to bottom, so that at any given point the parser understands what it is parsing based on what it has parsed so far.
Consider code like the following:
say "
=begin pod
Not POD, just a string!
";
If you'd just grep the file for POD statements without parsing all of it, it would misinterpret this piece of code.
I.e. you can't parse only the POD parts without parsing the normal Perl 6 code parts, because without parsing it all from top to bottom you can't know which is which.
PS: In theory, the Perl 6 designers could have accommodated POD-only parsing by making it illegal for normal Perl 6 code to contain lines that look like they start a POD block. In this scenario, the above code snippet would be a syntax error when the whole file is parsed, because starting a line inside a string literal with =begin pod would be disallowed, so the --doc switch could rely on all lines that begin with =begin foo actually starting a POD block.
Such a restriction probably wouldn't be a major burden for normal Perl 6 code (after all, who needs to write =begin pod at the start of a line in a multi-line string literal?), but note that one of the reasons for the one-pass top-to-bottom parsing architecture is to facilitate language extensibility via slangs.
E.g. CPAN modules could add support for users writing a single subroutine (or other lexical scope) in another language or DSL. (Implementing such modules isn't actually possible yet without hacking into Rakudo internals via NQP, but once the macro/slang design is complete, it will be).
The burden for disallowing lines that look like they start a POD block, would then be passed on to all those slang parsers.
You could always submit a feature request for Larry and the other Perl 6 designers to consider this, though.
One of my recipes in Yocto needs to create a file containing a very specific line, something like:
${libdir}/something
To do this, I have the recipe task:
do_install() {
echo '${libdir}/something' >/path/to/my/file
}
Keeping in mind that I want that string exactly as shown, I can't figure out how to escape it to prevent bitbake from substituting in its own value of libdir.
I originally thought the echo command with single quotes would do the trick (as it does in the bash shell) but bitbake must be interpreting the line before passing it to the shell. I've also tried escaping it both with $$ and \$ to no avail.
I can find nothing in the bitbake doco about preventing variable expansion, just stuff to do with immediate, deferred and Python expansions.
What do I need to do to get that string into the file as is?
BitBake seems to make it particularly difficult to prevent expansion from taking place. Regardless of whether you use single or double quotes, the variables appear to be expanded before the line is passed to the shell.
Hence, if you want them to not be expanded, you need to effectively hide them from BitBake, and this can be done with something like:
echo -e '\x24{libdir}/something' >/path/to/my/file
This uses the hexadecimal version of $ so that BitBake does not recognise it as a variable to be expanded.
You do need to ensure you're running the correct echo command, however. Under some distros (like Ubuntu), the shell's built-in echo may be used, and it does not recognise the -e option. To get around that, you may have to run the echo binary that lives on the file system (which does recognise that option):
/bin/echo -e '\x24{libdir}/something' >/path/to/my/file
By default this task will be executed as a shell function via /bin/sh, though what that actually is depends on your system, since /bin/sh can be a symlink pointing to bash. The BitBake manual warns against using bashisms in any case.
You can consider just adding this task to your recipe as a Python function:
python do_install () {
    with open('/path/to/your/file', 'a') as file:
        file.write('${libdir}/something')
}
'a' stands for append.
This should eliminate the problem with variable expansion.
There is no standard way to escape these sorts of expressions that I am aware of, other than to try to break up the expression - accordingly this should work:
do_install() {
echo '$''{libdir}/something' >/path/to/my/file
}
The best solution is simply this:
bitbake_function() {
command $libdir/whatever
}
Bitbake will only expand ${libdir}; $libdir is passed through verbatim.
We don't have to worry about dollar signs that are not followed by {, and in this case, there is no need for libdir to be wrapped in braces.
The only time we run into a problem with just $foo is when we have something like ${foo}bar, where the braces are required as delimiters so that bar isn't included in the variable name. In that situation, there are other solutions, such as generating the shell syntax "$foo"bar. This is less cryptic than resorting to \x24.
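As a sketch of that situation (foo is a hypothetical variable that the shell itself can resolve at run time):
do_install() {
    # ${foo}bar would be expanded by BitBake before the shell ever sees it;
    # "$foo"bar hides the reference from BitBake, and the quotes delimit the
    # variable name from the literal suffix for the shell.
    echo "$foo"bar > /path/to/my/file
}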
If you need to use $ in a variable assignment, remember that BitBake won't evaluate $whatever, so you have to escape it for the underlying shell.
For instance, I set the gcc/ld rpath option to use the $ORIGIN keyword this way:
TARGET_LDFLAGS_append = " -Wl,-rpath-link=\\$$ORIGIN"
https://lists.yoctoproject.org/pipermail/yocto/2017-September/037820.html
You can define a variable to be a literal dollar sign.
DOLLAR = "$"
do_install() {
echo '${DOLLAR}{libdir}/something' >/path/to/my/file
}
No extra quoting is required.
Seems unlikely, but is there any way to generate a set of unit tests for the following rewrite rule:
RewriteRule ^/(user|group|country)/([a-z]+)/(photos|videos)$ http://whatever?type=$1&entity=$2&resource=$3
From this I'd like to generate a set of urls of the form:
/user/foo/photos
/user/bar/photos
/group/baz/videos
/country/bar/photos
etc...
The reason I don't want to just do this once by hand is that I'd like the bounded alternation groups (e.g. (user|group|country)) to be able to grow and maintain coverage without having to update the tests by hand.
Is there a rewrite rule or regex parser that might be able to do this, or am I doing it by hand?
If you don't mind hacking a few lines of Perl, then there's a package, Regexp::Genex, that you can use to generate something close to what you require, e.g.:
# perl -MRegexp::Genex=:all -le 'print for strings(qr/\/(user|group|country)\/([a-z]+)\//)'
/user/dxb/
/user/dx/
/user/d/
/group/xd/
/group/x/
# perl -MRegexp::Genex=:all -le 'my $re=qr/\/(user|group|country)\/([a-z]+)\/(phone|videos)/;$Regexp::Genex::DEFAULT_LEN = length $re;print for strings($re)'
/user/mgcgmccdmgdmmzccgmczgmzzdcmmd/phone
/user/mgcgmccdmgdmmzccgmczgmzzdcmm/phone
/user/mgcgmccdmgdmmzccgmczgmzzdcm/phone
/user/mgcgmccdmgdmmzccgmczgmzzdc/phone
...
/group/gg/videos
/group/g/phone
/group/g/videos
/country/jvmmm/phone
/country/jvmmm/videos
/country/jvmm/phone
/country/jvmm/videos
/country/jvm/phone
/country/jvm/videos
/country/jv/phone
/country/jv/videos
/country/j/phone
/country/j/videos
#
Note:
1) You'll need to write a wrapper to parse the source file, tokenise (extract) the source patterns, escape certain characters in the rule (e.g. "/"), and possibly split your rules into more manageable parts before expanding them via Genex and outputting the results in the desired format. A rough sketch of such a wrapper follows after these notes.
2) To install the module type: cpan Regexp::Genex
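As a rough illustration of the wrapper described in note 1 (the input file name, the naive RewriteRule match, and the anchor stripping are all assumptions rather than a complete Apache configuration parser):
use strict;
use warnings;
use Regexp::Genex qw(:all);

my $conf = shift // 'rewrite.conf';                 # hypothetical input file
open my $fh, '<', $conf or die "cannot open $conf: $!";
while (my $line = <$fh>) {
    next unless $line =~ /^\s*RewriteRule\s+(\S+)/; # grab the pattern part only
    my $pattern = $1;
    $pattern =~ s/^\^//;                            # drop the leading ^ anchor
    $pattern =~ s/\$$//;                            # drop the trailing $ anchor
    # $Regexp::Genex::DEFAULT_LEN can be tuned as in the second example above
    print "$_\n" for strings(qr/$pattern/);
}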
I'm using an awk script to do some reasonably heavy parsing that could be useful to repeat in the future but I'm not sure if my unix-unfriendly co-workers will be willing to install awk/gawk in order to do the parsing. Is there a way to create a self-contained executable from my script?
I'm not aware of a way to make a self-contained binary using AWK. However, if you like AWK, chances seem good that you might like Python, and there are several ways to make a self-contained Python program. For example, Py2Exe.
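As a rough sketch of the py2exe route (here "parse.py" is a hypothetical name for the translated script):
# setup.py
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils
setup(console=['parse.py'])
Running python setup.py py2exe then builds a dist/ directory containing parse.exe plus the support files needed to run it without a Python installation.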
Here's a quick example of Python:
# comments are introduced by '#', same as AWK
import re # make regular expressions available
import sys # system stuff like args or stdin
# read from specified file, else read standard input
if len(sys.argv) == 2:
    f = open(sys.argv[1])
else:
    f = sys.stdin
# Compile some regular expressions to use later.
# You don't have to pre-compile, but it's more efficient.
pat0 = re.compile("regexp_pattern_goes_here")
pat1 = re.compile("some_other_regexp_here")
# for loop to read input lines.
# This assumes you want normal line separation.
# If you want lines split on some other character, you would
# have to split the input yourself (which isn't hard).
# I can't remember ever changing the line separator in my AWK code...
for line in f:
    FS = None  # default: split on whitespace
    # change FS to some other string to change field sep
    words = line.split(FS)
    if pat0.search(line):
        pass  # handle the pat0 match case
    elif pat1.search(line):
        pass  # handle the pat1 match case
    elif words[0].lower() == "the":
        pass  # handle the case where the first word is "the"
    else:
        for word in words:
            pass  # do something with each word
Not the same as AWK, but easy to learn, and actually more powerful than AWK (the language has more features and there are many "modules" to import and use). Python doesn't have anything implicit like the
/pattern_goes_here/ {
# code goes here
}
feature in AWK, but you can simply have an if/elif/elif/else chain with patterns to match.
There's a standalone awk.exe in the Cygwin Toolkit, as far as I know.
You could just bundle that in with whatever files you're distributing to your colleagues.
Does it have to be self-contained? You could write a small executable that invokes awk with the right arguments and pipes the results to a file the user chooses, or to stdout - whichever is appropriate for your co-workers.
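For example, a thin Windows batch wrapper might look like this (awk.exe and parse.awk sitting next to the .bat file are assumptions):
@echo off
rem Run the parsing script on whatever files were passed on the command line,
rem optionally redirecting the output to a file the user names.
set /p OUTFILE=Output file (leave blank for console): 
if "%OUTFILE%"=="" (
    "%~dp0awk.exe" -f "%~dp0parse.awk" %*
) else (
    "%~dp0awk.exe" -f "%~dp0parse.awk" %* > "%OUTFILE%"
)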
MAWK in GnuWin32 — http://gnuwin32.sourceforge.net/packages/mawk.htm
Also an interesting alternative: a Java implementation — http://sourceforge.net/projects/jawk/