Verify specific lines in DCL - VMS

Using DCL, I have a .txt file with 3 lines:
Line 1 test.
Line 2 test.
Line 3 test.
I'm trying to verify that each line contains exactly what is expected. I'm currently using the f$extract function, which gives me the output of line 1, but I cannot figure out how to verify lines 2 and 3. What function can I use to make sure lines 2 and 3 are correct?
$ OPEN read_test test.dat
$ READ/END_OF_FILE=ender read_test cc
$ line1 = f$extract(0,15,cc)
$ if line1.nes."Line 1 test."
$ then
$ WRITE SYS$OUTPUT "FALSE"
$ endif
$ line2 = f$extract(??,??,cc) ! f$extract not possible for multiple lines?
$ if line2.nes."Line 2 test."
$ then
$ WRITE SYS$OUTPUT "FALSE"
$ endif

For exactly 3 lines you might just want to do 3 reads and 3 compares...
$ READ/END_OF_FILE=ender read_test cc
$ if f$extract(0,15,cc).nes."Line 1 test." ...
$ READ/END_OF_FILE=ender read_test cc
$ if f$extract(0,15,cc).nes."Line 2 test." ...
$ READ/END_OF_FILE=ender read_test cc
$ if f$extract(0,15,cc).nes."Line 3 test." ...
Any more lines and you'll want to be in a loop, as in the other replies.
To follow up on Chris's approach, you may want to first prepare an array of values and then loop, reading and comparing as long as there are values.
Untested:
$ line_1 = "Line 1 test."
$ line_2 = "Line 2 test."
$ line_3 = "Line 3 test."
$ line_num = 1
$ReadNext:
$ READ/END_OF_FILE=ender read_test cc
$ if line_'line_num'.nes.cc then WRITE SYS$OUTPUT "Line ", line_num, " FALSE"
$ line_num = line_num + 1
$ if f$type(line_'line_num').NES."" then GOTO ReadNext
$ WRITE SYS$OUTPUT "All provided lines checked out TRUE"
$ GOTO end
$Ender:
$ WRITE SYS$OUTPUT "Ran out of lines too soon. FALSE"
$end:
$ close Read_Test
hth,
Hein.

Try this variation (untested, so may need a little debugging). Makes use of symbol substitution to keep track of which line you are up to.
$ OPEN read_test test.dat
$ line_num = 1
$ ReadNext:
$ READ/END_OF_FILE=ender read_test cc
$ line'line_num' = f$extract(0,15,cc)
$ if line'line_num'.nes."Line ''line_num' test."
$ then
$ WRITE SYS$OUTPUT "FALSE"
$ endif
$ goto ReadNext
$ !
$ Ender:
$ close Read_Test
$ write sys$output "line1: "+line1
$ write sys$output "line2: "+line2
$ write sys$output "line3: "+line3
$ exit

Related

shell script to exit out of the script if disk space is more than 75%

I want the script to exit if disk space is beyond a threshold (ex: 75%). I am trying the things below, but no luck.
df -kh | awk '{if( 0+$5 >= 75 ) exit;}'
The above command is not working. Can anyone help me with this?
This is because your df output does NOT always come on a single line; to handle this you need to add the -P option. Try the following:
df -hP | awk '{if( 0+$5 >= 75 ){print "exiting now..";exit 1}}'
OR
df -hP | awk '$5+0>=75{print "exiting now..";exit 1}'
OR, with the mount name that is the culprit for breaching the threshold:
df -hP | awk '$5+0>=75{print "Mount " $1 " has crossed threshold so exiting now..";exit 1}'
In case you don't have the -P option on your box, then try the following:
df -k | awk '/^ +/ && $4+0>=75{print "Mount " prev" has crossed threshold so exiting now..";exit 1} !/^ +/{prev=$0}'
I am using a print statement to make sure exit is working; also, the -P option was tested on BASH systems.
Since the OP said he needs to exit from the complete script itself, I request the OP to add the following code outside the for loop of his code (I haven't tested it, but this should work):
if [[ $? -eq 1 ]]
then
echo "Exiting the complete script now..."
exit
else
echo "Looks good so going further in script now.."
fi
If you are using this in a script to exit the script (as opposed to exiting a long awk script) then you need to call exit from the outer script:
if df -kh | awk '{if ($5+0 > 75) exit 1 }'; then echo OK; else echo NOT; fi
Don't forget that df returns one line per mount point, you can do:
if df -kh /home ....
to check for a particular mount point.
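To tie the two ideas together, here is a minimal sketch, wrapping the check in a function so the calling script can decide to exit. The function name, the mount point and the threshold values are placeholders, not from the original post:

```shell
# Sketch: check one mount's use% against a threshold.
# -P forces one line per filesystem; column 5 is "Use%".
check_mount() {
    mount=$1
    threshold=$2
    usage=$(df -kP "$mount" | awk 'NR==2 {print $5+0}')
    if [ "$usage" -ge "$threshold" ]; then
        echo "Mount $mount at ${usage}% - over threshold"
        return 1
    fi
    return 0
}

check_mount / 101 && echo "under threshold"
```

In a real script you would follow the call with `|| exit 1` so the whole script stops when the threshold is breached.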

How do I correctly retrieve, using bash's cut, the first field from a line with only 1 field in a text file?

In a text file (accounts.txt) with (financial) accounts, the sub-accounts are, and need to be, separated by an underscore, like this:
assets
assets_hh
assets_hh_reimbursements
assets_hh_reimbursements_ff
... etc.
Now I want to get specific sub-accounts from specific line numbers, e.g.:
field 3 from line 4:
$ lnr=4; fnr=3
$ cut -d $'\n' -f "$lnr" < accounts.txt | cut -d _ -f "$fnr"
reimbursements
$
But both fnr=1 and fnr=2 give the following for the first line, which has only 1 field:
$ cut -d $'\n' -f 1 < accounts.txt | cut -d _ -f "$fnr"
assets
$
which is undesired behaviour.
Now I can get around this by prefixing an underscore to each account and adding 1 to each required field number, but this is not an elegant solution.
Am I doing something wrong and/or can this be changed by issuing a different retrieval command?
Using cut -d $'\n' -f "$lnr" to get the lnr-th line from the file is somewhat strange. A more common approach is using sed, like:
sed -n "${lnr}p" file | cmd ...
However, awk is better for this - one invocation can handle both lnr and fnr.
file=accounts.txt
lnr=1
fnr=2
awk -F_ -v l=$lnr -v f=$fnr 'NR==l{print $f}' "$file"
The above, for all combinations of lnr/fnr, produces:
line                         field1 field2 field3         field4
------------------------------------------------------------------------
assets                       assets
assets_hh                    assets hh
assets_hh_reimbursements     assets hh     reimbursements
assets_hh_reimbursements_ff  assets hh     reimbursements ff
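For completeness, a small sketch of the sed-plus-awk combination hinted at above; the NF guard reproduces the "print nothing when the field does not exist" behavior the question asks for (the sample data is recreated here for illustration):

```shell
# Sketch: sed selects the line, awk -F_ selects the field; the NF >= f
# guard prints nothing when the requested field does not exist.
printf 'assets\nassets_hh\nassets_hh_reimbursements\n' > accounts.txt

lnr=1; fnr=2
sed -n "${lnr}p" accounts.txt | awk -F_ -v f="$fnr" 'NF >= f {print $f}'
# no output: line 1 has only one field

lnr=3; fnr=3
sed -n "${lnr}p" accounts.txt | awk -F_ -v f="$fnr" 'NF >= f {print $f}'
# prints "reimbursements"
```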
Check the solution below -
cat f
assets
assets_hh
assets_hh_reimbursements
assets_hh_reimbursements_ff
Based on your comment, try the commands below -
$ lnr=1; fnr=2
$ echo $lnr $fnr
1 2
$ awk -v lnr=$lnr -v fnr=$fnr -F'_' 'NR==lnr {print $fnr}' f
###Output is nothing as line 1 column 2 is blank when FS="_"
$ lnr=4;fnr=1
$ echo $lnr $fnr
4 1
$ awk -v lnr=$lnr -v fnr=$fnr -F'_' 'NR==lnr {print $fnr}' f
assets
$ lnr=4;fnr=3
$ echo $lnr $fnr
4 3
$ awk -v lnr=$lnr -v fnr=$fnr -F'_' 'NR==lnr {print $fnr}' f
reimbursements
One solution is to head|tail and read into an array so it's easier to work with the items:
lnr=4
fnr=2
IFS=_ read -r -a arr < <(head -n "$lnr" accounts.txt | tail -n 1)
#note that the array is 0-indexed, so the fieldnumber has to fit that
echo "${arr[$fnr]}"
Then you could expand the idea into a more usable function:
get_field_from_file() {
local fname="$1"
local lnr="$2"
local fnr="$3"
IFS=_ read -r -a arr < <(head -n "$lnr" "$fname" | tail -n 1)
if (( fnr >= ${#arr[@]} )); then
return 1
else
echo "${arr[$fnr]}"
fi
}
field=$(get_field_from_file "accounts.txt" "4" "2") || echo "no such line or field"
[[ -n $field ]] && echo "field: $field"

copy to clipboard the last n commands in a terminal

I want to copy to clipboard something like this
$ command1
$ command2
If you run history you will get the commands in order, oldest first, so I want to just skip a number of lines from the tail and replace the entry line numbers with '$'. As you probably suspect, this is a very useful shorthand when logging your workflow or writing documentation.
Example:
$ history
1340 pass
1341 pass insert m clouds/cloud9
1342 pass insert -m clouds/cloud9
1343 sudo service docker start
1344 history
So how do you turn that into:
$ sudo service docker start
$ pass insert -m clouds/cloud9
...etc
Assigning $1 works, but it will leave a leading space:
history | awk '{$1=""; print}'
If you want to copy this to the clipboard, you can use xclip
history | awk '{$1=""; print}' | xclip
Credit goes to https://stackoverflow.com/a/4198169/2032943
Maybe you can use this:
history | tac | awk 'NR>1&&NR<=3 {$1="$";print $0}'
tac - concatenate and print files in reverse
NR<=3 : take the last two commands before the history command
NR>1 : skip the history command itself
$1="$" : replace the line number with $
Test:
$ echo first
first
$ echo second
second
$ history | tac | awk 'NR>1&&NR<=3 {$1="$";print $0}'
$ echo second
$ echo first
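The same idea generalizes to the last N commands. A sketch (the function name and N are made up for illustration) that reads history-style lines on stdin, so it works with any input:

```shell
# Sketch: turn the last N history entries (excluding the trailing
# "history" entry itself) into "$ command" lines, newest first.
last_n_as_prompt() {
    n=$1
    tac | awk -v n="$n" 'NR > 1 && NR <= n + 1 {$1 = "$"; print}'
}

# Simulated history output:
printf '1340 pass\n1341 echo hi\n1342 history\n' | last_n_as_prompt 1
# prints: $ echo hi
```

In real use you would pipe it: history | last_n_as_prompt 2 | xclip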

OpenVMS - DELETE Line if TEXT like x

I have a batch script that writes all files, inclusive of path and version number, for a physical device to a TMP file. The batch script then reports any lines that show a file version number greater than a provided variable.
DIRECTORYX := DIRECTORY /NODATE /NOHEADING /NOTRAILING
DIR_OUTPUT_FILENAME = "TOPAS$UTILLOG:" + F$UNIQUE() + ".TMP"
DEVICE = "$1$DGA1112"
DIRECTORYX 'DEVICE':[000000...]*.*;* /NOSIZE /OUTPUT='DIR_OUTPUT_FILENAME'
At the same time, I have files which I am not interested in having reported back to me. In this case I want to remove any lines that potentially contain such a filename (filename.ext;) from the TMP file, so that the batch script can continue through it and only report files that I don't explicitly want to ignore.
How would I go about reading the file while inside a loop, using 'IGNORE_FILE' as the variable for the text string to match, and removing the associated line of text, so that when my batch script proceeds through the file it will not report files requested to be ignored?
Many thanks for any help.
Alright... Now I see where you're coming from.
How would I write everything from the 7th element to the end of the line?
Well, you could just loop starting with i=7, or you could "quote" the string and use the quote as a new separator. Here is an example with both, using a double-quote as an almost natural second-separator choice:
$
$ READ SYS$INPUT BUFF
SHG101,$1$DGA1101:,25,15,10,5000,100,X.TMP,Y.DAT
$
$ excludes = ""
$ tmp = F$ELEMENT(7, ",", BUFF)
$ IF tmp.NES.","
$ THEN
$ i = 8
$ excludes = tmp
$exclude_loop:
$ tmp = F$ELEMENT(i, ",", BUFF)
$ IF tmp.NES.","
$ THEN
$ excludes = excludes + "," + tmp
$ i = i + 1
$ GOTO exclude_loop
$ ENDIF
$ excludes = "/EXCLUDE=(" + excludes + ")"
$ ENDIF
$
$ SHOW SYMB excludes
$
$! Using a different delimiter:
$ READ SYS$INPUT BUFF
SHG101,$1$DGA1101:,25,15,10,5000,100,"X.TMP,Y.DAT"
$
$ excludes = ""
$ tmp = F$ELEMENT(1, """", BUFF)
$ IF tmp.NES."""" THEN excludes = "/EXCLUDE=(" + tmp + ")"
$
$ SHOW SYMB excludes
In the code we see:
Checking ''DEVICE' for high file versions (>= ;''HVERNO') - may take some time...
I urge you to check out DFU
The big loop code "READ FH3 /END_OF_FILE=LABEL$_EOF_DIRLIST1 BUFF2..."
will simplify to:
dfu searc/versio=min=200'excludes'form="!AS"/out=tmp.tmp dka0:
This will run in seconds almost no matter how many files.
Toss the whole warning stuff, or keep it always active, checking for 32000, as it is (almost) free.
After executing the DFU command, create the 'fancy' output if tmp.tmp is not empty, by writing your header and appending tmp.tmp. Always delete tmp.tmp ($ CLOSE/DISPOSITION=DELETE).
Free advice...
Those 15-deep nested IF-THEN-ELSE-ENDIFs to pick up a message look terrible (to maintain).
Consider an array lookup ?!
Here is a worked out example:
$! prep for test
$ EL = p1
$ EL_DIAG = "FILE.DAT"
$ LOCAL_STATUS = "12345"
$
$! Code snippet to be tested
$
$ x = 'EL'
$ ! Fold multiple conditions into 1 message
$
$ if (EL .EQ. 7) .OR. (EL .EQ. 14) .OR. (EL .EQ. 21) .OR. (EL .EQ. 25) -
.OR. (EL .EQ. 29) .OR. (EL .EQ. 30) THEN x = "L1"
$
$ MSG_6 = "error opening " + EL_DIAG + " for WRITE (RM=" + LOCAL_STATUS + ")"
$ IDT_6 = "OPENIN"
$ MSG_L1 = "error reading from " + EL_DIAG + " (RM=" + LOCAL_STATUS + ")"
$ IDT_L1 = "READERR"
$ MSG_8 = "device name missing " + EL_DIAG
$ IDT_8 = "DNNF"
$
$ ! Pick up the required texts
$
$ IF F$TYPE(MSG_'x').EQS.""
$ THEN
$ WRITE SYS$OUTPUT "No message found for code: ", EL
$ EXIT 16
$ ENDIF
$
$ MSG = MSG_'x
$ IDTXT = IDT_'x
$
$ WRITE SYS$OUTPUT "MSG : ", MSG
$ WRITE SYS$OUTPUT "IDTXT: ", IDTXT
Cheers,
Hein
Both comments are a great start. Check them carefully.
Which OpenVMS version? Something from the last 2 decades?
Just grab and use DFU !
$ define/user sys$output nl:
$ mcr dfu searc/versio=min=200/excl=(*.dat,*.tmp)/form="!AS"/out=tmp.tmp dka0:
$ type tmp.tmp
BUNDY$DKA0:[SYS0.SYSMGR]OPERATOR.LOG;242
BUNDY$DKA0:[SYS0.SYSMGR]ACME$SERVER.LOG;241
BUNDY$DKA0:[SYS0.SYSMGR]LAN$ACP.LOG;241
You could also consider sticking F$SEARCH in a loop and parsing out the version and other interesting components to implement your excludes....
$
$ type SEARCH_HIGH_VERSION.COM
$ max = 200
$ old = ""
$ IF p1.EQS."" THEN EXIT 16 ! PARAM
$loop:
$ file = F$SEARCH(p1)
$ IF file.EQS."" THEN EXIT 99018 ! NMF
$ IF file.EQS.old THEN EXIT 100164 ! Not wild
$ old = file
$ version = F$PARSE(file,,,"VERSION") - ";"
$ IF max.GE.'version' THEN GOTO loop
$ ! IF ... further string tests
$ WRITE SYS$OUTPUT file
$ GOTO LOOP
$
$ @SEARCH_HIGH_VERSION.COM *.*;*
SYS$SYSROOT:[SYSMGR]ACME$SERVER.LOG;241
SYS$SYSROOT:[SYSMGR]LAN$ACP.LOG;241
SYS$SYSROOT:[SYSMGR]OPERATOR.LOG;242
SYS$SYSROOT:[SYSMGR]TMP.TMP;304
SYS$SYSROOT:[SYSMGR]TMP.TMP;303
SYS$SYSROOT:[SYSMGR]TMP.TMP;302
SYS$SYSROOT:[SYSMGR]TMP.TMP;301
SYS$SYSROOT:[SYSMGR]TMP.TMP;300
%RMS-E-NMF, no more files found
$
$ @SEARCH_HIGH_VERSION.COM tmp.tmp
SYS$SYSROOT:[SYSMGR]TMP.TMP;304
%RMS-F-WLD, invalid wildcard operation
$
DFU will probably be 10* faster than DIR
DIR will be 10* faster than F$SEARCH, but you'll lose that in the processing.
Good luck!
Hein

How to read a file AND make a query at each line?

I have a .csv file and would like to add information to it. The delimiter is ';'.
I had an idea of how to do so, but it didn't really work. It was:
cat file.csv | awk -F ";" '{ print ""$1" "$2" "$3 }' > temp.csv
cat temp.csv | while read info1 info2 info3
do
read -p "Give another information : " INFO
echo "$info1 ; $info2 ; $info3 ; $INFO" >> new_file.csv
done
Everything works except the "read -p" within the "while"...
I was wondering if I should try using only an awk command, but I don't really master that command... Maybe someone has an idea to help me with this problem.
awk '
BEGIN{FS=OFS=";"}
{printf "Enter new stuff for line : %s\n", $0;
getline newstuff < "-";
print $0,newstuff > "newfile" }' file
Test:
$ cat file
this;is;my;line;one
this;is;my;line;two
this;is;my;line;three
$ awk 'BEGIN{FS=OFS=";"}{printf "Enter new stuff for line : %s\n", $0; getline newstuff < "-"; print $0,newstuff > "newfile" }' file
Enter new stuff for line : this;is;my;line;one
hola
Enter new stuff for line : this;is;my;line;two
hello
Enter new stuff for line : this;is;my;line;three
bonjour
$ cat newfile
this;is;my;line;one;hola
this;is;my;line;two;hello
this;is;my;line;three;bonjour
Entirely within awk:
awk 'BEGIN{FS=OFS=";"}{printf "prompt? ";getline info <"/dev/tty";print $1,$2,$3,info > "output"}'
Do not try to do this all in awk, as a solution like that would be error-prone and/or unnecessarily complicated.
Try this bash script:
> new_file
exec 3<file
while IFS=';' read -r f1 f2 f3 rest
do
read -p "other info? " info <&1
echo "$f1;$f2;$f3;$info" >> new_file
done <&3
e.g.:
$ > new_file
exec 3<file
while IFS=';' read -r f1 f2 f3 rest
do
read -p "other info? " info <&1
echo "$f1;$f2;$f3;$info" >> new_file
done <&3
other info? line 1
other info? line 2
other info? line 3
$ cat file
1;2;3;4
5;6;7;8
9;10;11;1
$ cat new_file
1;2;3;line 1
5;6;7;line 2
9;10;11;line 3
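For reference, the asker's original loop can also be repaired directly: the inner read fails because it competes with the while loop's redirected stdin. A sketch reading the answers from a separate file descriptor (the function name is made up; interactively you would attach fd 3 to /dev/tty instead of a file):

```shell
# Sketch: the while loop reads the csv on stdin, while the prompt
# answers come in on fd 3 - interactively use 3< /dev/tty instead.
annotate_csv() {
    csv=$1
    answers=$2
    while IFS=';' read -r f1 f2 f3 _rest; do
        IFS= read -r info <&3
        printf '%s;%s;%s;%s\n' "$f1" "$f2" "$f3" "$info"
    done < "$csv" 3< "$answers"
}

printf 'a;b;c\nd;e;f\n' > demo.csv
printf 'one\ntwo\n' > demo_answers
annotate_csv demo.csv demo_answers
# prints:
# a;b;c;one
# d;e;f;two
```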