AWK script - not showing data - variables
I'm trying to create a variable to sum columns 26 to 30 and 32.
So far I have this code, which prints the header and the output format the way I want, but no data is being shown.
#! /usr/bin/awk -f
BEGIN { FS="," }
NR>1 {
TotalPositiveStats= ($26+$27+$28+$29+$30+$32)
}
{printf "%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%.2f %,%s,%s,%.2f %,%s,%s,%.2f %,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s, %s\n",
EndYear,Rk,G,Date,Years,Days,Age,Tm,Home,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,TotalPositiveStats
}
NR==1 {
print "EndYear,Rk,G,Date,Years,Days,Age,Tm,HOme,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,TotalPositiveStats" }#header
Input data:
EndYear,Rk,G,Date,Years,Days,Age,Tm,Home,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc
1985,1,1,10/26/1984,21,252,21.6899384,CHI,1,WSB,1,16,1,40,5,16,0.313,0,0,,6,7,0.857,1,5,6,7,2,4,5,2,16,12.5
1985,2,2,10/27/1984,21,253,21.69267625,CHI,0,MIL,0,-2,1,34,8,13,0.615,0,0,,5,5,1,3,2,5,5,2,1,3,4,21,19.4
1985,3,3,10/29/1984,21,255,21.69815195,CHI,1,MIL,1,6,1,34,13,24,0.542,0,0,,11,13,0.846,2,2,4,5,6,2,3,4,37,32.9
1985,4,4,10/30/1984,21,256,21.7008898,CHI,0,KCK,1,5,1,36,8,21,0.381,0,0,,9,9,1,2,2,4,5,3,1,6,5,25,14.7
1985,5,5,11/1/1984,21,258,21.7063655,CHI,0,DEN,0,-16,1,33,7,15,0.467,0,0,,3,4,0.75,3,2,5,5,1,1,2,4,17,13.2
1985,6,6,11/7/1984,21,264,21.72279261,CHI,0,DET,1,4,1,27,9,19,0.474,0,0,,7,9,0.778,1,3,4,3,3,1,5,5,25,14.9
1985,7,7,11/8/1984,21,265,21.72553046,CHI,0,NYK,1,15,1,33,15,22,0.682,0,0,,3,4,0.75,4,4,8,5,3,2,5,2,33,29.3
Output expected:
EndYear,Rk,G,Date,Years,Days,Age,Tm,Home,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,TotalPositiveStats
1985,1,1,10/26/1984,21,252,21.6899384,CHI,1,WSB,1,16,1,40,5,16,0.313,0,0,,6,7,0.857,1,5,6,7,2,4,5,2,16,12.5,35
1985,2,2,10/27/1984,21,253,21.69267625,CHI,0,MIL,0,-2,1,34,8,13,0.615,0,0,,5,5,1,3,2,5,5,2,1,3,4,21,19.4,34
1985,3,3,10/29/1984,21,255,21.69815195,CHI,1,MIL,1,6,1,34,13,24,0.542,0,0,,11,13,0.846,2,2,4,5,6,2,3,4,37,32.9,54
1985,4,4,10/30/1984,21,256,21.7008898,CHI,0,KCK,1,5,1,36,8,21,0.381,0,0,,9,9,1,2,2,4,5,3,1,6,5,25,14.7,38
1985,5,5,11/1/1984,21,258,21.7063655,CHI,0,DEN,0,-16,1,33,7,15,0.467,0,0,,3,4,0.75,3,2,5,5,1,1,2,4,17,13.2,29
1985,6,6,11/7/1984,21,264,21.72279261,CHI,0,DET,1,4,1,27,9,19,0.474,0,0,,7,9,0.778,1,3,4,3,3,1,5,5,25,14.9,36
1985,7,7,11/8/1984,21,265,21.72553046,CHI,0,NYK,1,15,1,33,15,22,0.682,0,0,,3,4,0.75,4,4,8,5,3,2,5,2,33,29.3,51
This script will be called like gawk -f script.awk <filename>.
Currently, when calling the script, it seems the variable is being calculated but the rest of the fields come out empty.
awk is well suited to summing columns:
awk 'NR>1{$(NF+1)=$26+$27+$28+$29+$30+$32}1' FS=, OFS=, input-file > tmp
mv tmp input-file
That doesn't add a field in the header line, so you might want something like:
awk '{$(NF+1) = NR>1 ? ($26+$27+$28+$29+$30+$32) : "TotalPositiveStats"}1' FS=, OFS=,
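To see that pattern in isolation, here is a minimal sketch on a hypothetical 3-column CSV (the file name demo.csv and the summed columns are made up for illustration):

```shell
# Append a computed column; row 1 gets the new header instead of a sum,
# mirroring the ternary in the one-liner above.
printf 'a,b,c\n1,2,3\n4,5,6\n' > demo.csv
awk '{$(NF+1) = NR>1 ? ($2+$3) : "sum"}1' FS=, OFS=, demo.csv
# a,b,c,sum
# 1,2,3,5
# 4,5,6,11
```

Assigning to $(NF+1) both creates the new field and forces $0 to be rebuilt with OFS, which is why the trailing 1 (an always-true pattern) prints the updated record.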
An explanation on the issues with the current printf output is covered in the 2nd half of this answer (below).
It appears OP's objective is to reformat three of the current fields while also adding a new field at the end of each line. (NOTE: certain aspects of OP's code are not reflected in the expected output, so I'm not 100% sure what OP is looking to generate; regardless, OP should be able to tweak the provided code to generate the desired result.)
Using sprintf() to reformat the three fields we can rewrite OP's current code as:
awk '
BEGIN { FS=OFS="," }
NR==1 { print $0, "TotalPositiveStats"; next }
{ TotalPositiveStats = ($26+$27+$28+$29+$30+$32)
$17 = sprintf("%.3f",$17) # FG_PCT
if ($20 != "") $20 = sprintf("%.3f",$20) # 3P_PCT
$23 = sprintf("%.3f",$23) # FT_PCT
print $0, TotalPositiveStats
}
' raw.dat
NOTE: while OP's printf shows a format of %.2f % for the 3 fields of interest ($17, $20, $23), the expected output shows that the fields are not actually being reformatted (eg, $17 remains %.3f, $20 is an empty string, $23 remains %.2f); I've opted to leave $20 blank when empty and otherwise reformat all 3 fields as %.3f; OP can modify the sprintf() calls as needed
This generates:
EndYear,Rk,G,Date,Years,Days,Age,Tm,Home,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,TotalPositiveStats
1985,1,1,10/26/1984,21,252,21.6899384,CHI,1,WSB,1,16,1,40,5,16,0.313,0,0,,6,7,0.857,1,5,6,7,2,4,5,2,16,12.5,40
1985,2,2,10/27/1984,21,253,21.69267625,CHI,0,MIL,0,-2,1,34,8,13,0.615,0,0,,5,5,1.000,3,2,5,5,2,1,3,4,21,19.4,37
1985,3,3,10/29/1984,21,255,21.69815195,CHI,1,MIL,1,6,1,34,13,24,0.542,0,0,,11,13,0.846,2,2,4,5,6,2,3,4,37,32.9,57
1985,4,4,10/30/1984,21,256,21.7008898,CHI,0,KCK,1,5,1,36,8,21,0.381,0,0,,9,9,1.000,2,2,4,5,3,1,6,5,25,14.7,44
1985,5,5,11/1/1984,21,258,21.7063655,CHI,0,DEN,0,-16,1,33,7,15,0.467,0,0,,3,4,0.750,3,2,5,5,1,1,2,4,17,13.2,31
1985,6,6,11/7/1984,21,264,21.72279261,CHI,0,DET,1,4,1,27,9,19,0.474,0,0,,7,9,0.778,1,3,4,3,3,1,5,5,25,14.9,41
1985,7,7,11/8/1984,21,265,21.72553046,CHI,0,NYK,1,15,1,33,15,22,0.682,0,0,,3,4,0.750,4,4,8,5,3,2,5,2,33,29.3,56
NOTE: in OP's expected output it appears the last/new field (TotalPositiveStats) does not contain the value from $30 hence the mismatch between the expected results and this answer; again, OP can modify the assignment statement for TotalPositiveStats to include/exclude fields as needed
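As an aside on the sprintf() calls used above: unlike printf, sprintf() returns the formatted string instead of printing it, so the result can be assigned straight back into a field (the input value here is made up for illustration):

```shell
# Reformat a field in place; the assignment to $1 also rebuilds $0
echo '0.5' | awk '{ $1 = sprintf("%.3f", $1); print }'
# 0.500
```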
Regarding the issues with the current printf ...
{printf "%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%.2f %,%s,%s,%.2f %,%s,%s,%.2f %,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s, %s\n",
EndYear,Rk,G,Date,Years,Days,Age,Tm,Home,Opp,Win,Diff,GS,MP,FG,FGA,FG_PCT,3P,3PA,3P_PCT,FT,FTA,FT_PCT,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,TotalPositiveStats}
... is referencing (awk) variables that have not been defined (eg, EndYear, Rk, G). [NOTE: one exception is the very last variable in the list - TotalPositiveStats - which has in fact been defined earlier in the script.]
The default value for undefined variables is the empty string ("") or zero (0), depending on how the awk code is referencing the variable, eg:
printf "%s", EndYear => EndYear is treated as a string and the printed result is an empty string; with an output field delimiter of a comma (,) this empty string shows up as 2 commas next to each other (,,)
printf "%.2f %", FG_PCT => FG_PCT is treated as a numeric (because of the %f format) and the printed result is 0.00 %
Where it gets a little interesting is when the (undefined) variable name starts with a digit (eg, 3P), in which case awk parses the leading digit as a number and the rest as a separate (undefined) variable, so the whole reference evaluates to the number, eg:
printf "%s", 3P => 3P is processed as 3 and the printed result is 3
This should explain the 5 static values (0.00 %, 3, 3, 3.00 % and 0.00 %) printed in all output lines as well as the 'missing' values between the rest of the commas (eg, ,,,,).
Obviously the last value in the line is an actual number, ie, the value of the awk variable TotalPositiveStats.
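These defaults are easy to reproduce in isolation; none of the variables below are defined anywhere, so this tiny demo (not part of OP's script) shows exactly the fallbacks described above:

```shell
awk 'BEGIN {
    printf "[%s]\n", EndYear      # undefined + %s  -> empty string
    printf "[%.2f %%]\n", FG_PCT  # undefined + %f  -> 0.00 %
    printf "[%s]\n", 3P           # leading digit: evaluates to the number 3
}'
# []
# [0.00 %]
# [3]
```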
Related
Bash: Finding average of entries from multiple columns after reading a CSV text file
I am trying to read a CSV text file and find the average of weekly hours (columns 3 through 7) spent by all user IDs (column 2) ending with an even number (2, 4, 6, ...). The input sample is as below:

Computer ID,User ID,M,T,W,T,F
Computer1,User3,5,7,3,5,2
Computer2,User5,8,8,8,8,8
Computer3,User4,0,8,0,8,4
Computer4,User1,5,4,5,5,8
Computer5,User2,9,8,10,0,0
Computer6,User7,4,7,8,2,5
Computer7,User6,8,8,8,0,0
Computer8,User9,5,2,0,6,8
Computer9,User8,2,5,7,3,6
Computer10,User10,8,9,9,9,10

I have written the following script:

awk -F, '$2~/[24680]$/{for(i=3;i<=7;i++){a+=$i};printf "%s\t%.2g\n",$2,a/5;a=0}' user-list.txt > superuser.txt

The output of this script is:

User4 4
User2 5.4
User6 4.8
User8 4.6
User10 9

However, I want to change the script to print only one average for all user IDs ending with an even number. The desired output for this would be as below (which is technically the average of all hours for the IDs ending with even numbers):

5.56

Any help would be appreciated. TIA
Trying to fix OP's attempt here, adding logic to compute the average of averages once the file has been fully read. Written on mobile, so I couldn't test it, but it should work if I've understood OP's description correctly.

awk -F, '
$2~/[24680]$/{
  count++
  for(i=3;i<=7;i++){
    sum+=$i
  }
  tot+=sum/5
  sum=0
}
END{
  print "Average of averages is: " (count?tot/count:"NaN")
}
' user-list.txt > superuser.txt
You may try:

awk -F, '
$2 ~ /[02468]$/ {
  for(i=3; i<=7; i++) {
    s += $i
    ++n
  }
}
END {
  if (n) printf "%.2f\n", s/n
}' cust.csv

Output:

5.56
awk -F, '
NR == 1 { next }
{
  match($2,/[[:digit:]]+/)
  num=substr($2,RSTART,RLENGTH)
  if(num%2==0) { av+=($3+$4+$5+$6+$7)/5 }
}
END { printf "%.2f\n",av/5 }' user-list.txt

Ignore the first header line. Pick the number out of the user ID with awk's match function and set the num variable to that number. Check whether the number is even with num%2; if it is, add that row's average to the variable av. At the end, print the average to 2 decimal places.
Print the daily average, for all even-numbered user IDs:

#!/bin/sh
awk -F , '
(NR>1) && ($2 ~ /[02468]$/) {
  hours += ($3 + $4 + $5 + $6 + $7)
  (users++)
}
END { print (hours/users/5) }' \
"$1"

Usage example:

$ script user-list
5.56

One way to get the evenness or oddness of an integer is to use the modulus operator (%), as in N % 2: for even values of N it evaluates to zero, and for odd values to 1. However, in this case a string operation would be required to extract the number anyway, so we may as well use a single string match to decide odd or even. Also, IMO, for 5 fields which are not going to change (days of the week), it's more succinct to add them directly instead of using a loop. (NR>1) skips the titles line too, in case there's a conflict. Finally, you can of course swap /[02468]$/ for /[13579]$/ to get the same data for odd-numbered users.
How to append last column of every other row with the last column of the subsequent row
I'd like to append every other row (odd-numbered rows) with the last column of the subsequent row (even-numbered rows). I've tried several different commands but none seem to do the task I'm trying to achieve.

Raw data:

user|396012_232|program|30720Mn|
|396012_232.batch|batch|30720Mn|5108656K
user|398498_2|program|102400Mn|
|398498_2.batch|batch|102400Mn|36426336K
user|391983_233|program|30720Mn|
|391983_233.batch|batch|30720Mn|5050424K

I'd like to take the last field in the "batch" lines and append it to the line above. Desired output:

user|396012_232|program|30720Mn|5108656K
|396012_232.batch|batch|30720Mn|5108656K
user|398498_2|program|102400Mn|36426336K
|398498_2.batch|batch|102400Mn|36426336K
user|391983_233|program|30720Mn|5050424K
|391983_233.batch|batch|30720Mn|5050424K

The "batch" lines would then be discarded from the output, so there is no preference whether those lines are cut, copied, or changed in any way. Where I got stumped (my attempts to finish the logic were embarrassingly illogical):

awk 'BEGIN{OFS="|"} {FS="|"} {if ($3=="batch") {a=$5} else {} ' file.data

Thanks!
If you do not need to keep the lines with batch in Field 3, you may use

awk 'BEGIN{OFS=FS="|"} NR%2==1 { prev=$0 }; $3=="batch" { print prev $5 }' file.data

or

awk 'BEGIN{OFS=FS="|"} NR%2==1 { prev=$0 }; NR%2==0 { print prev $5 }' file.data

See the online awk demo and another demo.

Details:
BEGIN{OFS=FS="|"} - sets the field separator to a pipe
NR%2==1 { prev=$0 } - saves the odd lines in the prev variable
$3=="batch" - checks if Field 3 is equal to batch (probably, with this logic, you may replace it with NR%2==0 to get the even line)
{ print prev $5 } - prints the previous line and Field 5

You may consider also a sed option:

sed 'N;s/\x0A.*|\([^|]*\)$/\1/' file.data > newfile

See this demo.

Details:
N; - adds a newline to the pattern space, then appends the next line of input to the pattern space; if there is no more input, sed exits without processing any more commands
s/\x0A.*|\([^|]*\)$/\1/ - replaces with Group 1 contents:
  \x0A - a newline
  .*| - any 0+ chars up to the last |
  \([^|]*\) - (Capturing group 1): any 0+ chars other than |
  $ - end of line
If your data is in file 'd', try GNU awk:

awk 'BEGIN{FS="|"} {if(getline n) {if(n~/batch/){b=split(n,a,"|");print $0 a[b]"\n"n} } }' d
Duplicate Lines 2 times and transpose from row to column
I would like to duplicate each line 2 times and print the values of columns 5 and 6 on separate lines (transpose the values of columns 5 and 6 from column to row). For each line, I mean: the value of column 5 on the first copy and the value of column 6 on the second copy.

Input file:

08,1218864123180000,3201338573,VV,22,27
08,1218864264864000,3243738789,VV,15,23
08,1218864278580000,3244738513,VV,3,13
08,1218864310380000,3243938789,VV,15,23
08,1218864324180000,3244538513,VV,3,13
08,1218864334380000,3200538561,VV,22,27

Desired output:

08,1218864123180000,3201338573,VV,22
08,1218864123180000,3201338573,VV,27
08,1218864264864000,3243738789,VV,15
08,1218864264864000,3243738789,VV,23
08,1218864278580000,3244738513,VV,3
08,1218864278580000,3244738513,VV,13
08,1218864310380000,3243938789,VV,15
08,1218864310380000,3243938789,VV,23
08,1218864324180000,3244538513,VV,3
08,1218864324180000,3244538513,VV,13
08,1218864334380000,3200538561,VV,22
08,1218864334380000,3200538561,VV,27

I use this code to duplicate the lines 2 times, but I can't figure out the condition involving the values of columns 5 and 6:

awk '{print;print}' file

Thanks in advance
To repeatedly print the start of a line for each of the last N fields, where N is 2 in this case:

$ awk -v n=2 '
BEGIN { FS=OFS="," }
{
    base = $0
    sub("("FS"[^"FS"]+){"n"}$","",base)
    for (i=NF-n+1; i<=NF; i++) {
        print base, $i
    }
}
' file
08,1218864123180000,3201338573,VV,22
08,1218864123180000,3201338573,VV,27
08,1218864264864000,3243738789,VV,15
08,1218864264864000,3243738789,VV,23
08,1218864278580000,3244738513,VV,3
08,1218864278580000,3244738513,VV,13
08,1218864310380000,3243938789,VV,15
08,1218864310380000,3243938789,VV,23
08,1218864324180000,3244538513,VV,3
08,1218864324180000,3244538513,VV,13
08,1218864334380000,3200538561,VV,22
08,1218864334380000,3200538561,VV,27
In this simple case, where the last field has to be removed and placed on the last line, you can do

awk -F , -v OFS=, '{ x = $6; NF = 5; print; $5 = x; print }'

Here -F , and -v OFS=, set the input and output field separators to a comma, respectively, and the code does

{
  x = $6   # remember sixth field
  NF = 5   # set field number to 5, so the last one won't be printed
  print    # print those first five fields
  $5 = x   # replace value of fifth field with remembered value of sixth
  print    # print modified line
}

This approach can be extended to handle fields in the middle with a function like the one in the accepted answer of this question.

EDIT: As Ed notes in the comments, writing to NF is not explicitly defined to trigger a rebuild of $0 (the whole-line record that print prints) in the POSIX standard. The above code works with GNU awk and mawk, but with BSD awk (as found on *BSD and probably Mac OS X) it fails to do anything. So to be standards-compliant, we have to be a little more explicit and force awk to rebuild $0 from the modified field state. This can be done by assigning to any of the field variables $1...$NF, and it's common to use $1=$1 when this problem pops up in other contexts (for example, when only the field separator needs to be changed but not any of the data):

awk -F , -v OFS=, '{ x = $6; NF = 5; $1 = $1; print; $5 = x; print }'

I've tested this with GNU awk, mawk and BSD awk (which are all the awks I can lay my hands on), and I believe this to be covered by the awk bit in POSIX where it says "setting any other field causes the re-evaluation of $0" right at the top. Mind you, the spec could be more explicit on this point, and I'd be interested to test whether more exotic awks behave the same way.
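The $1 = $1 idiom is easy to watch in isolation (throwaway input, pipe-separated on input and comma-separated on output):

```shell
# No field assignment: $0 is printed exactly as read, OFS is ignored
printf 'a|b|c\n' | awk -F'|' -v OFS=, '{ print }'
# a|b|c

# Assigning to a field (even to itself) forces $0 to be rebuilt with OFS
printf 'a|b|c\n' | awk -F'|' -v OFS=, '{ $1 = $1; print }'
# a,b,c
```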
Could you please try the following (considering that your Input_file is always the same as shown, and that you need to print the first four fields each time followed by the remaining fields one by one):

awk 'BEGIN{FS=OFS=","}{for(i=5;i<=NF;i++){print $1,$2,$3,$4,$i}}' Input_file
This might work for you (GNU awk):

awk '{print gensub(/((.*,).*),/,"\\1\n\\2",1)}' file

Replace the last comma by a newline and the previous fields less the penultimate.
awk group-by on a sub-string of a column
I have the following log file:

/veratt/po/dashboard.do
/veratt/po/dashboardfilter.do?view=R
/veratt/po/leaseagent.do?view=R
/veratt/po/dashboardfilter.do?&=R&=E&propcode=0&display=0&rateType=0&floorplan=&=Display&format=4&action=getReport
/veratt/po/leaseagent.do
/veratt/po/leaseagent.do?view=V

Desired AWK output, a count of each HTTP request (minus the request parameters):

/veratt/po/dashboard.do - 1
/veratt/po/leaseagent.do - 3
/veratt/po/dashboardfilter.do - 2

I know a basic AWK command using an array, but the desired output is quite different from what it produces:

awk '{a[$2]=a[$2]+1;} END {for( item in a) print item , a[item];} '
awk -F\? '{ count[$1]++ } END { for (item in count) printf("%s - %d\n", item, count[item]) }' logfile

-F\?: separate fields on the ? character, so $1 is the request; if there are URL parameters they are in $2, whose existence we ignore. Note: this could also be done using BEGIN { FS="?" }. Note: if FS is more than one character, it is treated as a regex.
{ count[$1]++ }: for each line, tally up the occurrence count of $1.
END: run this block at the end of processing all the input.
for (item in count): iterate the item variable over the keys in the count array.
printf("%s - %d\n", item, count[item]): formatted printing of the item and its count, separated by a dash with spaces. Note: %d can be replaced by %s; awk is weakly typed.
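That last note is worth a quick demonstration: a single-character FS is taken literally, but a longer FS is treated as an extended regular expression (the input string here is made up):

```shell
# FS of more than one character is a regex: split on runs of digits
printf 'a12b345c\n' | awk -F'[0-9]+' '{ print $1, $2, $3 }'
# a b c
```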
test.txt:

/veratt/po/dashboard.do
/veratt/po/dashboardfilter.do?view=R
/veratt/po/leaseagent.do?view=R
/veratt/po/dashboardfilter.do?&=R&=E&propcode=0&display=0&rateType=0&floorplan=&=Display&format=4&action=getReport
/veratt/po/leaseagent.do
/veratt/po/leaseagent.do?view=V

Command:

awk 'BEGIN{FS="?"} {a[$1]++} END{for(i in a) print i, a[i]}' test.txt

Output:

/veratt/po/leaseagent.do 3
/veratt/po/dashboard.do 1
/veratt/po/dashboardfilter.do 2

Explanation:
BEGIN{FS="?"} - sets ? as the field separator, so $1 will be the substring before the first ?. This runs only once, before the contents of test.txt are processed.
{a[$1]++} - creates an array indexed by the substring, auto-incrementing each count.
END{for(i in a) print i, a[i]} - iterates over the array, printing each index and its corresponding value; the END block runs once after all lines of test.txt have been processed.
formatted reading using awk
I am trying to read in a formatted file using awk. The content looks like the following:

1PS1 A1 1 11.197 5.497 7.783
1PS1 A1 1 11.189 5.846 7.700
.
.
.

Following C format, these lines are laid out as "%5d%5s%5s%5d%8.3f%8.3f%8.3f": the first 5 positions are an integer (1), the next 5 positions are characters (PS1), the next 5 positions are characters (A1), the next 5 positions are an integer (1), and the next 24 positions are divided into 3 columns of 8 positions, each holding a floating-point number with 3 decimal places.

What I've been using is just calling these lines separated by columns using "$1, $2, $3". For example:

cat test.gro | awk 'BEGIN{i=0} {MolID[i]=$1; id[i]=$2; num[i]=$3; x[i]=$4; y[i]=$5; z[i]=$6; i++} END { ...} >test1.gro

But I ran into some problems with this, and now I am trying to read these files in the formatted way discussed above. Any idea how I do this?
Looking at your sample input, it seems the format string is actually "%5d%-5s%5s%5d%8.3f%8.3f%8.3f", with the first string field being left-justified. It's too bad awk doesn't have a scanf() function, but you can get your data with a few substr() calls:

awk -v OFS=: '
{
    a=substr($0,1,5)
    b=substr($0,6,5)
    c=substr($0,11,5)
    d=substr($0,16,5)
    e=substr($0,21,8)
    f=substr($0,29,8)
    g=substr($0,37,8)
    print a,b,c,d,e,f,g
}
'

outputs

1:PS1 : A1: 1: 11.197: 5.497: 7.783
1:PS1 : A1: 1: 11.189: 5.846: 7.700

If you have GNU awk, you can use the FIELDWIDTHS variable like this:

gawk -v FIELDWIDTHS="5 5 5 5 8 8 8" -v OFS=: '{print $1, $2, $3, $4, $5, $6, $7}'

which also outputs

1:PS1 : A1: 1: 11.197: 5.497: 7.783
1:PS1 : A1: 1: 11.189: 5.846: 7.700
You never said exactly which fields you think should have what number, so I'd like to be clear about how awk thinks that works. (Your choice to be explicit about calling the whitespace in your output format string "fields" makes me worry a little; you might have a different idea about this than awk.) From the manpage:

An input line is normally made up of fields separated by white space, or by regular expression FS. The fields are denoted $1, $2, ..., while $0 refers to the entire line. If FS is null, the input line is split into one field per character.

Take note that the whitespace in the input line does not get assigned a field number, and that sequential whitespace is treated as a single field separator. You can test this with something like:

echo "1 2 3 4" | awk '{print "1:" $1 "\t2:" $2 "\t3:" $3 "\t4:" $4}'

at the command line. All of this assumes that you have not diddled the FS variable, of course.
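One more wrinkle from the manpage quote above: a null FS splits the line into one field per character. This is worth knowing, but note it is a common extension (GNU awk and the BWK awk the manpage describes) rather than POSIX-guaranteed behaviour:

```shell
# Empty FS: every character becomes its own field
echo 'abc' | awk 'BEGIN{ FS="" } { print NF, $2 }'
# 3 b
```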