I want to print the even numbers in a row but I can't.
use Terminal::ANSIColor;
# Wanna print even numbers in red
for <1 2 3 4>
{ $_ %2 == 0 ?? say color('red'),$_,color('reset') !! say $_ }
printf doesn't seem to work with the Terminal::ANSIColor directives and put doesn't work either.
Is there any switch to say which makes it print without newline? How to print those Terminal::ANSIColor formatted sections in a row?
say is basically defined as:
sub say ( +@_ ) {
    for @_ {
        $*OUT.print( $_.gist )
    }
    $*OUT.print( $*OUT.nl-out );
}
If you don't want the newline, you can either change the value of $*OUT.nl-out or use print and gist.
say $_;
print $_.gist;
In many cases the result of calling .gist is the same as .Str, which means you often don't even need to call .gist.
use Terminal::ANSIColor;
# Wanna print even numbers in red
for <1 2 3 4> {
$_ %% 2 ?? print color('red'), $_, color('reset') !! print $_
}
(Note that I used the evenly divisible by operator %%.)
say is for humans, which is why it uses .gist and adds the newline.
If you want more fine-grained control, don't use say. Use print or put instead.
Use print instead of say:
print color('red')
or
print 'red'
I am writing a pretty long AWK program (NOT terminal script) to parse through a network trace file. I have a situation where the next line in the trace file is ALWAYS a certain type of 'receive' (3 possible types) - however, I only want AWK to handle/print on one type. In short, I want to tell AWK if the next line contains a certain receive type, do not include it. It is my understanding that getline is the best way to go about this.
I have tried a couple different variations of getline and getline VAR via the manual, I still cannot seem to search through and reference fields in the next line like I want. Updated from edit:
if ((event=="r") && (hopSource == hopDest)) {
getline x
if ((x $31 =="arp") || (x $35 =="AODV")) {
#printf("Badline %s %s \n", $31, $35)
}
else {
macLinkRec++;
#printf("MAC Link Recieved from HEAD - %d to MEMBER %d \n", messageSource, messageDest)
}
}
I am using the commented-out "Badline" printf just as a marker to see what is going on. I fully understand how to restructure the code once I get the search and reference correct. I am also able to print the correct 'next' lines. However, I would expect to be able to search through the next line and create new arguments based on what is contained in the next line. How do I search a 'next line' based on an argument in AWK? How do I reference fields in that line to create new arguments?
Final note, the 'next line' number of fields (NF) varies, but I feel that the $35 field reference should handle any problems there.
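As a sketch of the getline VAR approach (the field numbers and sample values here are made up for illustration): after getline x, the text of the next line is stored in x, but it is not split into $1..$NF — those still refer to the current record. You can split x into your own array and test its fields:

```shell
# Hypothetical two-line trace: a receive event, then the line to inspect.
# After 'getline nxt', $1..$NF still belong to the current record,
# so we split nxt into our own array f and test f[...] instead of $31/$35.
printf 'r 5 5 data\ns 1 2 arp\n' | awk '
  $1 == "r" {
    if ((getline nxt) > 0) {
      split(nxt, f)                      # f[1], f[2], ... are the fields of nxt
      if (f[4] == "arp" || f[4] == "AODV")
        print "badline:", nxt            # this receive type is excluded
      else
        print "count it"
    }
  }'
```

In the real script the tests would presumably be f[31] == "arp" and f[35] == "AODV", matching the field positions in the trace file; split() handles a varying NF on the next line without complaint.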
A legacy web application needs to be internationalized. Error messages are currently written inside source code in this way:
addErrorMessage("some text here");
These calls can easily be found and extracted using a regex. They should be replaced with something like this:
addErrorMessage(ResourceBundle.getBundle("/Bundle", locale).getString("key for text here"));
The correspondence between key for text here and some text here will be in a .property file.
According to a Linux guru it can be achieved using awk, but I don't know anything about it. I could write a small application to do that task, but it might be overkill. Are there IDE plugins or existing applications for this goal?
awk -v TextOrg='some text here' -v key='key for text here' '
   {
   gsub( "addErrorMessage\\(\"" TextOrg "\"\\)" \
       , "addErrorMessage(ResourceBundle.getBundle(\"/Bundle\", locale).getString(\"" key "\"))")
   }
   1
   ' YourFile
This is one way, for one specific pair. Be careful with:
the assignment of the values (-v ..., which is constrained by shell interpretation in this case)
gsub uses a regex to find the text, so the text has to be escaped accordingly (ex: "this f***ing text" -> "this f\*\*\*ing text")
You will certainly want to do this for several pairs. Here is a version with a file containing the pairs.
Assuming that Trad.txt is a file containing a series of 2 lines: first the original text, second the key (this avoids separator characters that would need complex escape-sequence handling).
ex: Trad.txt
some text
key text
other text
other key
Sample code (simple, no exhaustive safety checks, ...). Not tested, but it shows the concept with awk:
awk '
# for first file only
FNR == NR {
# keep in memory first line as text to change
if ( NR % 2 ) TextOrg = $0
else {
# load in array the key corresponding (index is the text to change)
Key[ TextOrg] = $0
Len[ TextOrg] = length( "addErrorMessage(\"" TextOrg "\")" )
}
# don't go further in script for this line
next
}
# this point and further is reached only by the second file
# if addErrorMessage is found
/addErrorMessage\(".*"\)/{
# try each pair to see if there is a change (a more complex loop checking only the necessary replacements would be more performant, but this one does the job)
for( TextOrg in Key) {
# index() is used to avoid regex interpretation
# Assuming for this sample code that there is 1 replacement (normally a loop is needed)
# try to find the text (returns the position where it starts, or 0)
Here = index( $0, "addErrorMessage(\"" TextOrg "\")")
if( Here > 0) {
# got a match, replace the substring be recreating a full one
$0 = substr( $0, 1, Here - 1) \
"addErrorMessage(ResourceBundle.getBundle(\"/Bundle\", locale).getString(\"" Key[ TextOrg] "\"))" \
substr( $0, Here + Len[ TextOrg])
}
}
}
# print the line in his current state (modified or not)
1
' Trad.txt YourFile
Finally, this is a workaround solution, because lots of special cases can occur: a line like ref: function addErrorMessage(" ...") bla bla would be an issue, spaces inside the parentheses are not treated here, a call could be cut across several lines, ...
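For a single pair, the index()/substr() replacement (which avoids regex escaping entirely, since index() does a literal search) can be sketched on its own; the bundle path and key below are the ones from the question:

```shell
# Literal find-and-replace of one addErrorMessage(...) call with index()
# and substr(); no regex metacharacters need escaping in the search text.
echo 'addErrorMessage("some text here");' | awk '
  BEGIN {
    org  = "addErrorMessage(\"some text here\")"
    repl = "addErrorMessage(ResourceBundle.getBundle(\"/Bundle\", locale)" \
           ".getString(\"key for text here\"))"
  }
  {
    p = index($0, org)                   # 0 if not found, else start position
    if (p) $0 = substr($0, 1, p - 1) repl substr($0, p + length(org))
    print
  }'
```

Because index() searches for the string literally, text like "this f***ing text" needs no special treatment here, unlike with gsub.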
I'm just after a little help pulling in a value from a variable. I'm writing a statement to print the contents of a file to a 4 columns output on screen, colouring the 3rd column depending on what the 4th columns value is.
The file has contents as follows...
Col1=date(yymmdd)
Col2=time(hhmmss)
Col3=Jobname(test1, test2, test3, test4)
Col4=Value(null, 0, 1, 2)
Column 4 should be a value of null, 0, 1 or 2 and this is the value that will determine the colour of the 3rd column. I'm declaring the colour codes in a variable at the top of the script as follows...
declare -A colours
colours["0"]="\033[0;31m"
colours["1"]="\033[0;34m"
colours["2"]="\033[0;32m"
(note I don't have a colour for a null value, I don't know how to code this yet but I'm wanting it to be red)
My code is as follows...
cat TestScript.txt | awk '{ printf "%20s %20s %20s %10s\n", "\033[1;31m"$1,"\033[1;32m"$2,${colours[$4]}$3,"\033[1;34m"$4}'
But I get a syntax error and can't for the life of me figure a way around it no matter what I do.
Thanks for any help
Amended code below to show the working solution. I've removed the variable setup originally done in bash and defined the array inline in the awk script...
cat TestScript.txt | awk 'BEGIN {
colours[0]="\033[0;31m"
colours[1]="\033[0;34m"
colours[2]="\033[0;32m"
}
{printf "%20s %20s %20s %10s\n","\033[1;31m"$1,"\033[1;32m"$2,colours[$4]$3,"\033[1;34m"$4}'
Just define the colours array in awk.
Either
BEGIN {
colours[0]="\033[0;31m"
colours[1]="\033[0;34m"
colours[2]="\033[0;32m"
}
or
BEGIN { split("\033[0;31m \033[0;34m \033[0;32m", colours) }
But with the second way, remember that the first index in the array is 1, not 0.
Then, in your printf statement the use of the colours array must be changed to:
,colours[$4]$3,
But if you have defined the array using the second method, then a +1 is required:
,colours[$4+1]$3,
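A quick sanity check of the split() variant, with plain words standing in for the escape codes and made-up sample data:

```shell
# After split(), colours[1]="RED", colours[2]="BLUE", colours[3]="GREEN".
# The file's status field (0/1/2) picks the colour; +1 compensates for
# split() indexing from 1.
printf 'job1 2\njob2 0\n' | awk '
  BEGIN { split("RED BLUE GREEN", colours) }
  { print colours[$2 + 1], $1 }'
```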
Best regards
In awk you can use the built-in ENVIRON array to access environment variables.
So instead of ${colours[$4]} (which is bash syntax, not awk) you can write ENVIRON["something"]. Unfortunately, arrays cannot be passed through the environment this way, so instead of a colours array you would export scalar variables colours_0, colours_1, colours_2, and then use ENVIRON["colours_" $4].
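A minimal sketch of the ENVIRON approach; the variables must actually be in awk's environment (exported, or set on the command line as here), and the names are of course made up:

```shell
# colours_0 and colours_1 are hypothetical exported scalars; the numeric
# index is concatenated onto "colours_" to build the lookup key.
colours_0=RED colours_1=BLUE awk '
  BEGIN {
    n = 1                          # stands in for $4
    print ENVIRON["colours_" n]
  }'
```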
awk seems to match all the patterns matching an expression and executes the corresponding actions. Is there a precedence that can be associated ?
For eg. In the below, lines starting with # (comments) are matched by both patterns, and both actions are executed. I want commented lines to match only the first action.
/^#.*/ {
    # Action for lines starting with '#'
}
{
    # Action for other lines
}
If you want to keep the code you already have mostly intact, you can just use awk's next statement. When awk encounters next, it skips the rest of the rules for the current record and moves on to the next line.
So if you put next at the bottom of your 1st block, the 2nd block won't be executed for lines that matched the first.
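A tiny demonstration of next (the patterns and prints are placeholders):

```shell
# Comment lines hit the first rule; 'next' prevents them from also
# hitting the catch-all second rule.
printf '# a comment\nsome code\n' | awk '
  /^#/ { print "comment:", $0; next }
  { print "other:", $0 }'
```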
Why not simply if,else :
awk '{ if ($0 ~ /^#/)
           # Action for lines starting with "#"
       else
           # Action for other lines
     }'
Your other option is to use the pattern negation operator, '!', for the "everything else" line, if your match options are binary:
/^#.*/ {
    # Action for lines starting with '#'
}
!/^#.*/ {
    # Action for other lines
}
Of course, your second pattern could also simply match everything that doesn't start with a hash, i.e. /^[^#]/ (though unlike the negated pattern, that one doesn't match empty lines).
But presumably, your example is a simplification. For complex regular expressions, crafting the exact inverse match could be impossible. The negation operator just makes it explicit and foolproof.
And, as you may already know, the ".*" part is unnecessary.
I have to deal with various input files with a number of fields, arbitrarily arranged, but all consistently named and labelled with a header line. These files need to be reformatted such that all the desired fields are in a particular order, with irrelevant fields stripped and missing fields accounted for. I was hoping to use AWK to handle this, since it has done me so well when dealing with field-related dilemmata in the past.
After a bit of mucking around, I ended up with something much like the following (writing from memory, untested):
# imagine a perfectly-functional BEGIN {} block here
NR==1 {
fldname[1] = "first_name"
fldname[2] = "last_name"
fldname[3] = "middle_name"
maxflds = 3
# this is just a sample -- my real script went through forty-odd fields
for (i=1;i<=NF;i++) for (j=1;j<=maxflds;j++) if ($i == fldname[j]) fldpos[j]=i
}
NR!=1 {
for (j=1;j<=maxflds;j++) {
if (fldpos[j]) printf "%s",$fldpos[j]
printf "%s","\t"
}
print ""
}
Now this solution works fine. I run it, I get my output exactly how I want it. No complaints there. However, for anything longer than three fields or so (such as the forty-odd fields I had to work with), it's a lot of painfully redundant code which always has and always will bother me. And the thought of having to insert a field somewhere else into that mess makes me shudder.
I die a little inside each time I look at it.
I'm sure there must be a more elegant solution out there. Or, if not, perhaps there is a tool better suited for this sort of task. AWK is awesome in its own domain, but I fear I may be stretching its limits some with this.
Any insight?
The only suggestion that I can think of is to move the initial array setup into the BEGIN block and read the ordered field names from a separate template file in a loop. Then your awk program consists only of loops with no embedded data. Your external template file would be a simple newline-separated list.
BEGIN {while ((getline < "fieldfile") > 0) fldname[++maxflds] = $0}
You would still read the header line in the same way you are now, of course. However, it occurs to me that you could use an associative array and reduce the nested for loops to a single for loop. Something like (untested):
BEGIN {while ((getline < "fieldfile") > 0) fldname[$0] = ++maxflds}
NR==1 {
for (i=1;i<=NF;i++) if ($i in fldname) fldpos[fldname[$i]] = i
}