I'm using GNU Make 3.80.
Within my Makefile_1, I am invoking Makefile_2. In certain circumstances, Makefile_2 "throws" an error.
Is there a way for me to "catch" and "handle" (within Makefile_1) the error that Makefile_2 might possibly throw?
You have all the shell power you need:
target1:
	${MAKE} -f Makefile_2 target2; \
	case "$$?" in \
	    ... \
	esac;
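For instance, a minimal sketch of what the branches might look like (the echo commands are placeholders for whatever handling you need; recipe lines must start with a tab):

target1:
	${MAKE} -f Makefile_2 target2; \
	status=$$?; \
	case "$$status" in \
	    0) echo "Makefile_2 succeeded" ;; \
	    *) echo "Makefile_2 failed with status $$status, handling it here" ;; \
	esac

If, after handling the error, you still want target1 itself to fail, end the failure branch with exit 1.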
For my powerlevel10k custom prompt, I currently have this function to display the seconds since the epoch, comma separated. I display it under the current time so I always have a cue to remember roughly what the current epoch time is.
function prompt_epoch() {
MYEPOCH=$(/bin/date +%s | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')
p10k segment -f 66 -t ${MYEPOCH}
}
My prompt looks like this: https://imgur.com/0IT5zXi
I've been told I can do this without the forked processes using these commands:
$ zmodload -F zsh/datetime p:EPOCHSECONDS
$ printf "%'d" $EPOCHSECONDS
1,648,943,504
But I'm not sure how to do that without the forking. I know to add the zmodload line in my ~/.zshrc before my powerlevel10k is sourced, but formatting ${EPOCHSECONDS} isn't something I know how to do without a fork.
If I were doing it the way I know, this is what I'd do:
function prompt_epoch() {
MYEPOCH=$(printf "%'d" ${EPOCHSECONDS})
p10k segment -f 66 -t ${MYEPOCH}
}
But as far as I understand it, that still forks a process every time the prompt is called, correct? Am I misunderstanding the advice? I can't see a way to get the latest epoch seconds without running some sort of process, which requires a fork, correct?
The printf zsh builtin can assign the value to a variable using the -v flag. Therefore my function can be rewritten as:
function prompt_epoch() {
printf -v MYEPOCH "%'d" ${EPOCHSECONDS}
p10k segment -f 66 -t ${MYEPOCH}
}
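Putting the two pieces together in ~/.zshrc (the only assumption is that powerlevel10k is sourced after this block):

# load just the EPOCHSECONDS parameter from zsh/datetime
zmodload -F zsh/datetime p:EPOCHSECONDS

function prompt_epoch() {
  # printf -v assigns the formatted value directly to MYEPOCH, no subshell fork
  printf -v MYEPOCH "%'d" ${EPOCHSECONDS}
  p10k segment -f 66 -t ${MYEPOCH}
}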
Thanks to this answer in Unix Stackoverflow: https://unix.stackexchange.com/a/697807/101884
Is there a way to allow failures to increment the fail count when using ignore_errors?
Edit: since it may be unclear to some, can someone let me know how to do this?
It looks like it is impossible without modifying the Ansible code.
The failure count is increased by the stats.increment('failures', ...) method, which is executed inside the if not ignore_errors: condition.
Otherwise (i.e., when ignore_errors: true) the OK counter is increased instead:
stats.increment('ok', ...)
If you are using some Unix flavor, this should print the number of failures:
ansible-playbook pb.yml | tee >(grep -c FAILED)
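A variant along the same lines that still shows the run on the terminal but also captures the count in a shell variable (a sketch; pb.yml is just the playbook name from above):

# count lines containing FAILED while echoing the run to the terminal
failures=$(ansible-playbook pb.yml | tee /dev/tty | grep -c FAILED)
echo "failed tasks: ${failures}"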
Without getting too much into the weeds, I have a Pentaho PDI job with multiple sub-transformations and sub-jobs (ETL from MySQL to Postgres). This job runs exactly as expected from Spoon, with no errors, but when I run it with the following command I am met with an endless loop error at the first step where a parameter needs to be defined and passed from within the job (the named params from the command seem to integrate fine). The command I am using is as follows:
sudo /bin/sh kitchen.sh \
-rep=KettleFileRepo \
-dir=M2P \
-job=ETL-M2P \
-level=Rowlevel \
-param:MY.PAR.LOADTYPE=full \
-param:MY.PAR.TABLELIST=table1 \
-param:MY.PAR.TENANTS=tenant1 \
/
Has anyone run into this type of issue with a discrepancy between Spoon and Kitchen? Is there some sort of config or command line option that I am missing? I am running version 6.0.1.0-386 on OS X 10.11.4.
If you think more details would be beneficial please let me know and I can provide whatever is necessary.
I am not aware of any discrepancy between Spoon and Kitchen. Are you sure it's not something in the ETL that is causing the loop? I would suggest going through your ETL in detail.
Another thing you can try when debugging is to run only part of the job in Kitchen and keep adding more as you see success; a sketch follows.
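For example (ETL-M2P-sub1 is a placeholder for one of your sub-job names; the repository, directory and parameters are taken from the question, and Basic logging keeps the output readable):

sudo /bin/sh kitchen.sh \
  -rep=KettleFileRepo \
  -dir=M2P \
  -job=ETL-M2P-sub1 \
  -level=Basic \
  -param:MY.PAR.LOADTYPE=full \
  -param:MY.PAR.TABLELIST=table1 \
  -param:MY.PAR.TENANTS=tenant1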
I'm having an issue writing a DCL procedure in OpenVMS: I need the DCL to call a command and capture its output (without echoing that output to the screen). Later on in the DCL I then need to print the output I stored.
Here's an example:
ICE SET FASTER !This command sets my environment to the "Faster" environment.
The above command outputs this if executed directly in OpenVMS:
Initialising TEST Environment to FASTER
--------------------------------------------------------------------------------
Using Test Search rules FASTER
Using Test Search rules FASTER
--------------------------------------------------------------------------------
dcl>
So I created a DCL procedure to wrap this output and display a more simplified version. Here's my code so far:
$ !************************************************************************
$ !* Wrapper for setting ICE account. Outputs Environment
$ !************************************************************************
$ on error then goto ABORT_PROCESS
$ICE_DCL_MAIN:
$ ice set 'P1'
$ ICE SHOW
$ EXIT
$ABORT_PROCESS:
$ say "Error ICING to: " + P1
$ EXIT 2
[End of file]
In the lines above, ICE SET 'P1' sets the ICE environment, but I don't want this output echoed to VMS. What I do want is to write the output of ICE SHOW into a variable and then echo that out later on in the DCL (most of which I've omitted for simplicity).
So the output should be:
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
Instead of:
Initialising TEST Environment to FASTER
--------------------------------------------------------------------------------
Using Test Search rules FASTER
Using Test Search rules FASTER
--------------------------------------------------------------------------------
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
I've had a look through the manual and I'm getting a bit confused, so I figured I'd try here. I'd appreciate any pointers. Thanks.
EDIT
Here is what I've come up with after the comments. The problem I'm having is that when I connect to VMS using an emulator such as SecureCRT, the correct output is echoed, but when I run the DCL via my SSH2 library in .NET it doesn't output anything. I guess that's because it closes the SYS$OUTPUT stream temporarily or something?
$ !************************************************************************
$ !* Wrapper for setting ICE account. Outputs Environment
$ !************************************************************************
$ on error then goto ABORT_PROCESS
$ICE_DCL_MAIN:
$ DEFINE SYS$OUTPUT NL:
$ ice set 'P1'
$ DEASSIGN SYS$OUTPUT
$ ice show
$ EXIT
$ABORT_PROCESS:
$ say "Error ICING to: " + P1
$ EXIT 2
[End of file]
EDIT 2
So I guess I really need to clarify what I'm trying to do here. Blocking the output doesn't matter so much; I'm merely trying to capture it into a symbol, for example.
In C#, for example, you can have a method that returns a string, so you'd have string myResult = vms.ICETo("FASTER"); and the result would be stored in that variable.
I guess I'm looking for a similar thing in VMS, so that once I've ICEd to the environment I can call:
$ environment == $ICE SHOW
But of course I get errors with that statement.
The command $ assign/user_mode Thing Sys$Output will cause output to be redirected to Thing until you $ deassign/user_mode Sys$Output or next executable image exits. An assignment without the /USER_MODE qualifier will persist until deassigned.
Thing can be a logical name, a file specification (LOG.TXT) or the null device (NLA0:) if you simply want to flush the output.
When a command procedure is executed the output can be redirected using an /OUTPUT qualifier, e.g. $ @FOO/output=LOG.TXT.
And then there is piping ... .
You can redirect the output to a temp file and then print its content later:
$ pipe write sys$output "hi" > tmp.tmp
$ ty tmp.tmp
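If you need the line in a DCL symbol rather than just typed back out, a sketch along the same lines (the temp file name is arbitrary):

$ PIPE ICE SHOW > TMP.TMP              ! capture the command's output in a file
$ OPEN/READ TMPFILE TMP.TMP
$ READ TMPFILE ENVIRONMENT             ! first line of the file into the symbol
$ CLOSE TMPFILE
$ DELETE TMP.TMP;*
$ WRITE SYS$OUTPUT ENVIRONMENT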
VMS is not Unix, DCL is not Bash: you can not easily set a DCL symbol from the output of a command.
Your ICE SHOW prints one line, correct? The first word is always "current", correct?
So you can create a hack.
First let me fake your ICE command:
$ create ice.com
$ write sys$output "current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]"
^Z
$
and I define a dcl$path logical pointing to the directory where this command procedure lives, so that I can use/fake the command ICE:
$ define dcl$path sys$disk:[]
$ ice show
current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]
$
Now, for what you need, create a command procedure which sets a job logical:
$ cre deflog.com
$ def/job/nolog mylog "current''p1'"
^Z
$
And I define a command "current" to run that command procedure:
$ current="@deflog """
Yes, you need three of the double quotes at the end of the line!
And finally:
$ pipe (ice show | @sys$pipe) && mysym="''f$log("mylog")'"
$ sh symb mysym
MYSYM = "current Test Environment is DISK$DEVELOPERS:[FASTER.DEVELOP]"
$
On the other hand, I don't know what you are referring to with C# and Java. Can you elaborate on that and tell us what runs where?
You can try using: DEFINE /USER SYS$OUTPUT NL:.
It works only for the next command, and you don't need to deassign it.
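Applied to the procedure in the question, a minimal sketch:

$ DEFINE /USER SYS$OUTPUT NL:   ! affects only the next image that runs
$ ICE SET 'P1'                  ! its output is discarded
$ ICE SHOW                      ! output appears normally again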
Sharing some of my experience here. I used the methods below to redirect output to files.
Define/assign the user-mode output and then execute the required command/script afterwards. Output will be written to <file_path>.
$define /user sys$output <file_path>
execute your command/script
OR
assign /user <file_path> sys$output
execute your command/script
deassign sys$output
To redirect to the null device (like /dev/null in Unix, as mentioned in the answers above), you can use nl: instead of <file_path>:
define /user sys$output nl:
or
assign /user nl: sys$output
I have an expect script which I need to run every 3 minutes on my management node to collect tx/rx values for each port attached to a DCX Brocade SAN switch, using the command portperfshow.
Each time I try to use crontab to execute the script every 3 minutes, the script does not work!
My expect script starts with #!/usr/bin/expect -f and I am calling the script using the following syntax under cron:
*/3 * * * * /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname
However, when I execute the script manually (not under cron) it works as expected:
root# ./collect-all.exp sanswitchhostname
It works just fine.
Please, can someone help? Thanks.
The script collect-all.exp is:
#!/usr/bin/expect -f
#Time and Date
set day [timestamp -format %d%m%y]
set time [timestamp -format %H%M]
#logging
set LogDir1 "/FPerf/PortsLogs"
set timeout 5
set ipaddr [lrange $argv 0 0]
set passw "XXXXXXX"
if { $ipaddr == "" } {
puts "Usage: <script.exp> <ip address>\n"
exit 1
}
spawn ssh admin@$ipaddr
expect -re "password"
send "$passw\r"
expect -re "admin"
log_file "$LogDir1/$day-portsperfshow-$time"
send "portperfshow -tx -rx -t 10\r"
expect timeout "\n"
send \003
log_file
send -- "exit\r"
close
I had the same issue, except that my script was ending with
interact
Finally I got it working by replacing it with these two lines:
expect eof
exit
Changing interact to expect eof worked for me! I needed to remove the exit part, because I had more statements in the bash script after the expect line (I call expect inside a bash script).
There are two key differences between a program that is run normally from a shell and a program that is run from cron:
Cron does not populate (many) environment variables. Notably absent are TERM, SHELL and HOME, but that's just a small proportion of the long list that will not be defined.
Cron does not set up a current terminal, so /dev/tty doesn't resolve to anything. (Note, programs spawned by Expect will have a current terminal.)
With high probability, any difficulties will come from these, especially the first. To fix, you need to save all your environment variables in an interactive session and use these in your expect script to repopulate the environment. The easiest way is to use this little expect script:
unset -nocomplain ::env(SSH_AUTH_SOCK) ;# This one is session-bound anyway
puts [list array set ::env [array get ::env]]
That will write out a single very long line which you want to put near the top of your script (or at least before the first spawn). Then see if that works.
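Purely for illustration, the pasted line will look something like this (the values here are made-up placeholders; use the real line printed in your interactive session):

# placeholder values only; paste the actual generated line before the first spawn
array set ::env {HOME /root SHELL /bin/bash PATH /usr/local/bin:/usr/bin:/bin TERM xterm LANG en_US.UTF-8}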
Jobs run by cron are not considered login shells, and thus don't source your .bashrc, .bash_profile, etc.
If you want that behavior, you need to add it explicitly to the crontab entry like so:
$ crontab -l
0 13 * * * bash -c '. .bash_profile; etc ...'
$
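Applied to the Expect job from the question above, a hedged sketch using bash's -l (login shell) flag so your profile is sourced before the script runs (paths are the ones from the question):

*/3 * * * * bash -lc '/usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname'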