Multiplication and addition of NetCDF files

I have two timeseries netcdf files with the same number of steps:
U.nc with variable name u10.
V.nc with variable name v10.
Now I want to multiply U.nc with U.nc,
and similarly V.nc with V.nc.
I also want to add U.nc to V.nc, i.e. the variables u10 and v10 should be added.
How can I do this?

A similar answer to Adrian Tompkins's below.
cdo -L -expr,'wind=sqrt(u10*u10+v10*v10)' -merge u.nc v.nc wind.nc
This uses operator chaining. Depending on how CDO was built, you may or may not need -L (it serializes I/O access).

You can do this with CDO.
Multiplying u by u:
cdo mul u.nc u.nc ubyu.nc
and likewise for v (giving vbyv.nc); adding the two results:
cdo add ubyu.nc vbyv.nc usumv.nc
However, it seems that what you actually want is the wind speed; for that you can merge the files and then use the expr operator:
cdo merge u.nc v.nc uv.nc
cdo expr,'wind=sqrt(u10*u10+v10*v10)' uv.nc wind.nc
See the tutorial here for more details on the expr operator.
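To sanity-check the result, the standard inspection tools will show what ended up in the file (wind.nc as produced above):
cdo infon wind.nc
ncdump -h wind.nc
The first prints per-timestep statistics for the new wind variable; the second dumps just the header (dimensions, variables, attributes).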


Raku-native disk space usage

Purpose:
Save a program that writes data to disk from vain attempts of writing to a full filesystem;
Save bandwidth (don't download if nowhere to store);
Save the user's and the programmer's time and nerves (notify them of the problem instead of leaving them tearing their hair out over misleading error messages and wondering "why the heck is this software not working!").
The question comes in 2 parts:
Reporting storage space statistics (available, used, total etc.), either of all filesystems or of the filesystem that path in question belongs to.
Reporting a filesystem error on running out of space.
Part 1
Please share NATIVE Raku alternative(s) (TIMTOWTDIBSCINABTE, "Tim Toady Bicarbonate") to:
raku -e 'qqx{ df -P $*CWD }.print'
Here, raku -e executes the df (disk free) external program via shell quoting with interpolation (qqx{}), feeding it the -P (portable-format) flag and $*CWD (the current working directory), then .prints df's output.
The snippet initially had been written as raku -e 'qqx{ df -hP $*CWD }.print', with both -h (human-readable) and -P (portable), but it turned out that this is not a ubiquitously valid command. In OpenBSD 7.0, it exits with an error: df: -h and -i are incompatible with -P.
For adding human-readability, you may consider the Number::Bytes::Human module.
raku -e 'run <<df -hP $*CWD>>'
If you're just outputting what df gives you on STDOUT, you don't need to do anything.
The << >> perform interpolating word quoting (the double-quote counterpart of < >), so $*CWD will be interpolated.
Part 1 — Reporting storage space statistics
There's no built-in function for reporting storage space statistics. Options include:
Write Raku code (a few lines) that uses NativeCall to invoke a platform / filesystem specific system call (such as statvfs()) and uses the information returned by that call (a sketch follows this list).
Use a suitable Raku library. FileSystem::Capacity picks and runs an external program for you, and then makes its resulting data available in a portable form.
Use run (or similar [1]) to invoke a specific external program such as df (also sketched after this list).
Use an Inline::* foreign language adaptor to enable invoking of a foreign PL's solution for reporting storage space statistics, and use the info it provides. [2]
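As a minimal sketch of the NativeCall option: the statvfs struct layout below is the Linux/glibc x86_64 one, an assumption you must check against your platform's <sys/statvfs.h> before relying on it:

use NativeCall;

# struct statvfs as on Linux/glibc x86_64 -- verify against your headers!
class StatVFS is repr('CStruct') {
    has ulong $.f_bsize;    # filesystem block size
    has ulong $.f_frsize;   # fragment size
    has ulong $.f_blocks;   # size of fs in f_frsize units
    has ulong $.f_bfree;    # free blocks
    has ulong $.f_bavail;   # free blocks for unprivileged users
    has ulong $.f_files;    # total inodes
    has ulong $.f_ffree;    # free inodes
    has ulong $.f_favail;   # free inodes for unprivileged users
    has ulong $.f_fsid;     # filesystem ID
    has ulong $.f_flag;     # mount flags
    has ulong $.f_namemax;  # maximum filename length
    has ulong $.spare0; has ulong $.spare1; has ulong $.spare2;  # int __f_spare[6]
}

sub statvfs(Str $path, StatVFS $buf) returns int32 is native {*}

my StatVFS $s .= new;
statvfs(~$*CWD, $s) == 0 or die 'statvfs failed';
say "available: { $s.f_bavail * $s.f_frsize } bytes";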
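And a sketch of the run approach, assuming a df that accepts -P and prints the POSIX field layout (a mount point containing spaces would break this naive parse):

# run df -P on the current directory and pick apart its output
my $proc  = run 'df', '-P', ~$*CWD, :out;
my @lines = $proc.out.slurp(:close).lines;
# POSIX -P fields: name, total blocks, used, available, capacity, mount point
my ($fs, $total, $used, $avail) = @lines[1].words;
say "$fs: $avail of $total blocks available";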
Part 2 — Reporting running out of space
Raku seems to neatly report No space left on device:
> spurt '/tmp/failwrite', 'filesystem is full!'
Failed to write bytes to filehandle: No space left on device
in block <unit> at <unknown file> line 1
> mkdir '/tmp/failmkdir'
Failed to create directory '/tmp/failmkdir' with mode '0o777': Failed to mkdir: No space left on device
in block <unit> at <unknown file> line 1
(Programmers will need to avoid throwing away these exceptions.)
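For instance, a minimal sketch of keeping that error visible instead of discarding it (the path and payload are illustrative):

my $data = 'payload';
try spurt '/tmp/out.dat', $data;   # on failure, try puts the exception in $!
if $! { note "Could not save data: { $!.message }"; exit 1 }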
Footnotes
[1] run runs an external command without involving a shell. This guarantees that the risks attendant with involving a shell are eliminated. That said, Raku also supports use of a shell (because that can be convenient and appropriate in some scenarios). See the exchange of comments under the question (eg this one) for some brief discussion of this, and the shell doc for a summary of the risk:
All shell metacharacters are interpreted by the shell, including pipes, redirects, environment variable substitutions and so on. Shell escapes are a severe security concern and can cause confusion with unusual file names. Use run if you want to be safe.
[2] Foreign language adaptors for Raku (Raku modules in the Inline:: namespace) allow Raku code to use code written in other languages. These adaptors are not part of the Raku language standard, and most are barely at experimental status, if that; conversely, the best are in great shape and allow Raku code to use foreign libraries as if they were written for Raku. (As of 2021, Inline::Perl5 is the most polished.)

subtracting variables within two different netcdf files

I have two netcdf files: downwelling radiation named rsds.nc and net radiation named rsns.nc. rsds.nc contains a variable named rsds and rsns.nc contains a variable named rsns. Now I would like to obtain the upwelling radiation rsus.nc by subtracting the variables in rsds.nc and rsns.nc.
I tried the following methods:
ncdiff rsds.nc rsns.nc rsus.nc
ncbo op_typ=diff rsds.nc rsns.nc rsus.nc
Both produced a rsus.nc, but the variable rsus within this file is missing. Any idea why this is so?
Two alternative CDO solutions.
The sledgehammer approach
The cdo sub command can do this on one line:
cdo sub rsds.nc rsns.nc rsus.nc
You will get the warning
cdo sub (Warning): Input streams have different parameters!
But you can ignore it. Note that you may want to change the variable name to something more appropriate; the subtraction and renaming can be combined in one line:
cdo setname,rsus -sub rsds.nc rsns.nc rsus.nc
The expr approach
This just mimics the accepted answer, but with the equivalent CDO commands:
cdo merge rsds.nc rsns.nc tmp.nc
cdo expr,"rsus=rsds-rsns" tmp.nc rsus.nc
By the way, with all the solutions on this page you might want to fix the metadata of the new field with CDO (e.g. setattribute) or the NCO equivalent (ncatted), especially if you intend to keep the files long term or pass them on to other people.
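For example, with NCO's ncatted (the attribute text here is illustrative):
ncatted -a long_name,rsus,o,c,"surface upwelling shortwave radiation" rsus.nc
Mode o overwrites the attribute (creating it if absent) and c marks the value as character data.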
As an alternative to @RichSignell's answer, you can combine the variables into a single file and use ncap2 to perform the subtraction without renaming variables.
ncks -A rsns.nc rsds.nc
ncap2 -s 'rsus=(rsds-rsns)' rsds.nc rsus.nc
Only variables with the same name are operated on when you ncdiff two files. So one solution would be to simply rename the variable in one of the files so it is the same. For example, try this:
ncrename -v rsds,rsns rsds.nc
ncdiff rsds.nc rsns.nc rsus.nc
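Note that with this route the differenced variable in rsus.nc is still called rsns; one more rename on the output fixes that:
ncrename -v rsns,rsus rsus.nc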

make/0 functionality for SICStus

How can I ensure that all modules (and ideally also all other files that have been loaded or included) are up to date? When issuing use_module(mymodule), SICStus compares the modification date of the file mymodule.pl and reloads it if newer. Also, include-ed files will trigger a recompilation. But it does not recheck all modules used by mymodule.
In brief, how can I get functionality similar to what SWI offers with make/0?
There is nothing in SICStus Prolog that provides that kind of functionality.
A big problem is that current Prologs are too dynamic for something like make/0 to work reliably except for very simple cases. With features like term expansion, goals executed during load (including file loading goals, which is common), etc., it is not possible to know how to reliably re-load files. I have not looked closely at it, but presumably make/0 in SWI Prolog has the same problem.
I usually just re-start the Prolog process and load the "main" file again, i.e. a file that loads everything I need.
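For example, a minimal "main" file of this kind (the module and file names are illustrative):

% main.pl -- restart Prolog and reload this file to get a clean state
:- use_module(m1).
:- use_module(m2).
:- ensure_loaded(extras).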
PS. I was not able to get code formatting in the comments, so I put it here instead: Example why make/0 needs to guard against 'user' as the File from current_module/2:
| ?- [user].
% compiling user...
| :- module(m,[p/0]). p. end_of_file.
% module m imported into user
% compiled user in module m, 0 msec 752 bytes
yes
| ?- current_module(M, F), F==user.
F = user,
M = m ? ;
no
| ?-
So far, I have lived with several hacks:
Up to 0.7 – pre-module times
SICStus always had ensure_loaded/1 of Quintus origin, which was not only a directive (like in ISO), but was also a command. So I wrote my own make-predicate simply enumerating all files:
l :-
   ensure_loaded([f1,f2,f3]).
Upon issuing l., only those files that were modified in the meantime were reloaded.
Probably I could also have written this as follows, had I read the meanual (sic):
l :-
   \+ ( source_file(F), \+ ensure_loaded(F) ).
3.0 – modules
With modules, things changed a bit. On the one hand, there were those files that were loaded manually into a module, like ensure_loaded(module:[f1,f2,f3]), and then those that were clean modules. It turned out that there is a way to globally ensure that a module is loaded, without interfering with the actual import lists, simply by stating use_module(m1, []), which is again both a directive and a command. The point is the empty list, which causes the module to be rechecked and reloaded; thanks to the empty list, that statement can be made anywhere.
In the meantime, I use the following module:
:- module(make,[make/0]).
make :-
   \+ ( current_module(_, F), \+ use_module(F, []) ).
This works for all "legal" modules, as long as the interfaces do not change. What I still dislike is its verbosity: for each checked and unmodified module there is one message line, so I get a page full of such messages when I just want to check that everything is up to date. Ideally, such messages would only show if something new happens.
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
An improved version takes @PerMildner's remark into account.
Further files can be reloaded if they are related to exactly one module. In particular, files loaded into module user, like the .sicstusrc, are included. See the above link for the full code.
% reload files that are implicitly modules, but that are still simple to reload
\+ (
      source_file(F),
      F \== user,
      \+ current_module(_, F),          % not officially declared as a module
      setof(M,
            P^ExF^ExM^(
               source_file(M:P, F),
               \+ current_module(M, ExF),             % not part of an official module
               \+ predicate_property(M:P, multifile),
               \+ predicate_property(M:P, imported_from(ExM))
            ),
            [M]),                       % only one module per file, others are too complex
      \+ ensure_loaded(M:F)
   ).
Note that in SWI neither ensure_loaded/1 nor use_module/2 compares file modification dates, so neither can be used to ensure that the most recent version of a file is loaded.

How exactly do you use variables in Jenkins?

Can someone concisely explain what the differences between the three variables below are? Because in all honesty, when I create a Jenkins job, I randomly guess between the three types until something works, but I'd love to understand rather than blindly picking.
${ENV,var="BUILD_USER"}
${BUILD_USER}
$BUILD_USER
Also, are there other ways of writing variables in Jenkins that I missed other than the 3 ways above?
When used in a statement:
${ENV,var="BUILD_USER"} -- evaluates the system environment variables and returns the value of the variable BUILD_USER; this token-macro syntax is expanded by plugins such as Email-ext, not by the shell.
example: curl ${ENV,var="BUILD_USER"}/api/xml
${BUILD_USER} -- returns the value of the BUILD_USER variable in the current script's scope.
example: curl ${BUILD_USER}/api/xml
$BUILD_USER -- likewise expands to the value of BUILD_USER; braces are only needed to separate the name from adjacent text. An assignment, by contrast, uses the bare name without the $.
example: BUILD_USER=jenkins
In general, variable expansion is up to the plugin that interprets a configuration value.
For example, if you set up a job parameter GIT_REPOSITORY and use it to configure an address where git clone should go by putting $GIT_REPOSITORY into the git repository field, it works, but only because the Jenkins git plugin has implemented variable expansion support.
Many plugins do implement it but you cannot know it unless you test it. However, these days the support is so common it is safe to assume it should work.
Both forms of reference, $VAR and ${VAR}, work and are equivalent. The latter form is useful if you need to use the variable in a place where it is surrounded by other characters that could be interpreted as part of the variable name: with $VARX, Jenkins would look for a variable named VARX, whereas ${VAR}X makes it clear that the variable is named VAR.
These rules have been modeled after variable expansion rules in Unix shells. Indeed, the job variables are made available as environment variables to build steps and in the Unix shell build step the variables are used the same way as above.
In a Windows CMD build step the variables are again used like any Windows environment variable: %VAR%.
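A quick illustration of the same variables in both build-step types, using the GIT_REPOSITORY job parameter from the example above together with Jenkins's built-in BUILD_NUMBER:
Unix shell build step:
git clone "$GIT_REPOSITORY" "checkout-${BUILD_NUMBER}"
Windows CMD build step:
git clone %GIT_REPOSITORY% checkout-%BUILD_NUMBER%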

make targets depend on variables

I want (GNU) make to rebuild when variables change. How can I achieve this?
For example,
$ make project
[...]
$ make project
make: `project' is up to date.
...like it should, but then I'd prefer
$ make project IMPORTANTVARIABLE=foobar
make: `project' is up to date.
to rebuild some or all of project.
Make wasn't designed to track variable contents, but Reinier's approach shows us the workaround. Unfortunately, using a variable's value as a file name is both insecure and error-prone. Fortunately, Unix tools can help us encode the value properly. So:
IMPORTANTVARIABLE = a trouble
# GUARD is a function which calculates md5 sum for its
# argument variable name. Note, that both cut and md5sum are
# members of coreutils package so they should be available on
# nearly all systems.
GUARD = $(1)_GUARD_$(shell echo $($(1)) | md5sum | cut -d ' ' -f 1)
foo: bar $(call GUARD,IMPORTANTVARIABLE)
	@echo "Rebuilding foo with $(IMPORTANTVARIABLE)"
	@touch $@

$(call GUARD,IMPORTANTVARIABLE):
	rm -rf IMPORTANTVARIABLE*
	touch $@
Here your target effectively depends on a special file named $(NAME)_GUARD_$(VALUEMD5), which is safe to refer to and has an (almost) 1-to-1 correspondence with the variable's value. Note that call and shell are GNU Make extensions.
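A sketch of how this behaves across runs (assuming the snippet above and that bar is up to date):
$ make foo IMPORTANTVARIABLE="a trouble"   # no guard file yet: guard rule runs, foo is rebuilt
$ make foo IMPORTANTVARIABLE="a trouble"   # same md5, guard is up to date: foo is not rebuilt
$ make foo IMPORTANTVARIABLE=other         # md5 changed: old guards are removed, foo is rebuilt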
You could use empty files to record the last value of your variable by using something like this:
someTarget: IMPORTANTVARIABLE.$(IMPORTANTVARIABLE)
	@echo Remaking $@ because IMPORTANTVARIABLE has changed
	touch $@

IMPORTANTVARIABLE.$(IMPORTANTVARIABLE):
	@rm -f IMPORTANTVARIABLE.*
	touch $@
After your make run, there will be an empty file in your directory whose name starts with IMPORTANTVARIABLE. and has the value of your variable appended. This basically contains the information about what the last value of the variable IMPORTANTVARIABLE was.
You can add more variables with this approach and make it more sophisticated using pattern rules -- but this example gives you the gist of it.
You probably want to use ifdef or ifeq, depending on what the final goal is. See the GNU Make manual for examples.
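For illustration, a minimal ifeq sketch (the DEBUG variable and the flags are made up); note that a conditional changes what the build does, but by itself it does not force a rebuild the way the guard-file answers above do:

ifeq ($(DEBUG),1)
CFLAGS += -g -O0
else
CFLAGS += -O2
endif

project: main.c
	$(CC) $(CFLAGS) -o $@ $<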
I might be late with an answer, but here is another way of doing such a dependency, with Make conditional syntax (works on GNU Make 4.1 and GNU bash 4.3.48(1)-release (x86_64-pc-linux-gnu), under Bash on Ubuntu on Windows):
1 ifneq ($(shell cat config.sig 2>/dev/null),prefix $(CONFIG))
2 .PHONY: config.sig
3 config.sig:
4 	@(echo 'prefix $(CONFIG)' >config.sig &)
5 endif
In the above sample we track the $(CONFIG) variable, writing its value to a signature file by means of the self-titled target, which is generated only when the signature file's recorded value differs from the value of $(CONFIG). Note the prefix token on lines 1 and 4: it is needed to distinguish the case when the signature file does not exist yet.
Of course, consumer targets specify config.sig as a prerequisite.