Split text file into multiple files upon nth instance of delimiter [duplicate] - awk

I have a large text file that has a repeating set of data with the header -XXXX- and the footer $$$$ for each entry. There are around 20k entries and I would like to separate it out into files of 500 entries each.
I've been toying around with awk and am using the command below, which gets close: each file starts with -XXXX-, but every file after the first has a partial entry at the end.
awk "/-XXXX-/ { delim++ } { file = sprintf(\"file%s.sdf\", int(delim / 500)); print > file; }" < big.sdf
For example:
-XXXX-
Beginning
Middle
End
$$$$
-XXXX-
Beginning
I instead want each file to end right after the $$$$.
I am using awk on Windows.

So if each set of data between -XXXX- and $$$$ is a record, you want to write 500 records at a time to separate files? It seems like you need two counters - one for the output filename that just goes up, and another for the number of records in the current "batch", which goes up to 500, but then gets reset to zero for the next batch. Something like:
BEGIN {fctr=1 ; rctr=0 ; file=("file" fctr ".sdf")}
/^\$\$\$\$$/ {print > file ; rctr+=1}
rctr==500 {close(file) ; fctr+=1 ; file=("file" fctr ".sdf") ; rctr=0}
!/^\$\$\$\$$/ {print > file}
Line 1 sets the initial values and starts off with file1.sdf
Line 2 matches the footer of each record, and we increment the record counter every time we see one (as well as writing out the current footer)
Line 3 is for when we reach 500 records: close the file we just finished, move to the next filename, then reset the record count back to zero
Line 4 is for all the regular lines; they just get sent to the current file
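On Windows it can be easier to sidestep cmd.exe quoting altogether by saving the program to a file (say split.awk, a name used here just for illustration) and running:
awk -f split.awk big.sdf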

Related

To grep contents from a CSV/Text File using Autohotkey(AHK) Script

Can anyone please help me write a script in AHK based on the requirement below?
Requirement:
I have a CSV/TXT file in my Windows environment which contains 20,000+ records in the format below.
So, when I run the script, it should prompt with an InputBox to enter an instance name.
Example: if I enter Instance4, it should display ServerName4 in a MsgBox.
Sample Format:
ServerName1,ServerIP,Instance1,Type
ServerName2,ServerIP,Instance2,Type
ServerName3,ServerIP,Instance3,Type
ServerName4,ServerIP,Instance4,Type
ServerName5,ServerIP,Instance5,Type
.
.
.
Also, as the CSV/TXT file contains a large number of records, please consider the best way to avoid delay in fetching the results.
Please post your code, or at least show what you've already done.
You can use a Parsing Loop with CSV as the delimiter, and make a variable for each 'Instance' whose value is that of the current row's 'ServerName'.
The steps are to first FileRead the data from the file, then Loop, Parse like so:
Loop, Parse, data, `n, `r           ; parses row by row (A_LoopField holds the row)
{
    row := A_LoopField
    Loop, Parse, row, CSV           ; then column by column in each row
    {
        if (A_Index = 1)            ; A_Index is the current loop's index
            server := A_LoopField   ; column 1: ServerName
        else if (A_Index = 3)
            instance := A_LoopField ; column 3: Instance
    }
    %instance% := server            ; variable named after column 3, value from column 1
}
After that, you can make a Goto loop that repeatedly shows an InputBox, followed by a MsgBox command that prints out the needed variable, like so:
InputBox, input, , Enter an instance name:
MsgBox % %input%

print from match & process several input files

If you scrutinize my questions from the past weeks, you will find I have asked questions similar to this one. I had problems asking in the required format since I did not really know where my problems came from. E. Morton tells me not to use range expressions; well, I do not know exactly what they are. I found many questions in this forum like mine, with working answers.
For example: "How to print following line from a match".
But all the solutions I found stop working when I process more than one input file, and I need to process many.
I use this command:
gawk -f 1.awk print*.csv > new.txt
while 1.awk contains:
BEGIN { OFS=FS=";"
pattern="row4"
}
go {print} $0 ~ pattern {go = 1}
Input file 1, print1.csv, contains:
row1;something;in;this;row;;;;;;;
row2;something;in;this;row;;;;;;;
row3;something;in;this;row;;;;;;;
row4;don't;need;to;match;the;whole;line,;
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
Input file 2, print2.csv, contains the same; it is just for illustration purposes.
The 1.awk script (and several other ways I found in this forum to print from a match) works for one file. Output:
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
But it does not work when I process more input files.
Each time I process more than one input file this way, the awk commands 'to print from a match' seem to be ignored.
As said, I was told not to use range expressions. I do not know why, and maybe the problem is linked to the way I pass in several files?
Just reset your match indicator at the beginning of each file. FNR is the line number within the current file, so FNR==1 is true on the first line of every input file; the bare pattern p prints a line whenever the flag is set, and /row4/{p=1} raises the flag once the match is seen:
$ awk 'FNR==1{p=0} p; /row4/{p=1} ' file1 file2
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
UPDATE
From the comments:
is it possible to combine your awk with: if $1=="row5", then write the value of $5 into $6 and delete the value in $5? In other words, to move the content of column 5 to a new column 6 whenever "row5" is found in column 1? I could do this with another awk, but a combination into one would be nicer
... $1=="row5"{$6=$5; $5=""} ...
Or, if you want to move another field instead of $5, replace $5 with the corresponding field number.
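Putting the per-file reset and the field move together, the combined command might look like this (a sketch; FS and OFS are set to ";" so the rebuilt record keeps its delimiters):
awk 'BEGIN{FS=OFS=";"} FNR==1{p=0} p && $1=="row5"{$6=$5; $5=""} p; /row4/{p=1}' file1 file2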

Creating a $variable from a specific part of a txt file

I'm trying to get PowerShell to use a specific section of a text file as a $variable to be used later in the script.
With Get-Content and index I can get to the point of having a whole line, but I just want one word to be the variable, not the whole thing.
The alphanumeric code will always be in exactly the same location:
line 5 (counting the first line as 0, of course), between characters 22 to 30 (i.e. the last 8 characters of that line).
I would like that section of the document to be identified as $txtdoc, to be used later in:
$inputfield = $ie.Document.getElementByID('input5')
$inputfield.value = $txtdoc
The txt file contains the following
From: *************
Sent: *************
To: *******************
Subject: *************
On-Demand Tokencode: 79960739
Expires after use or 60 minutes
This, maybe? (gc is an alias for Get-Content, which returns an array of lines, so [5] is the sixth line; Substring(21,8) extracts the 8 characters starting at zero-based index 21.)
$variable = ( gc mytext.txt )[5].substring(21,8)

How to handle 3 files with awk?

OK, so after spending two days I am not able to solve it, and I am almost out of time now. It might be a very silly question, so please bear with me. My awk script does something like this:
BEGIN{ n=50; i=n; }
FNR==NR {
# Read file-1, which has just 1 column
ids[$1]=int(i++/n);
next
}
{
# Read file-2 which has 4 columns
# Do something
next
}
END {...}
It works fine. But now I want to extend it to read 3 files. Let's say, instead of hard-coding the value of "n", I need to read a properties file and set the value of "n" from that. I found this question and have tried something like this:
BEGIN{ n=0; i=0; }
FNR==NR {
# Block A
# Try to read file-0
next
}
{
# Block B
# Read file-1, which has just 1 column
next
}
{
# Block C
# Read file-2 which has 4 columns
# Do something
next
}
END {...}
But it is not working. Block A is executed for file-0, and I am able to read the property from the properties file. But Block B is executed for both file-1 and file-2, and Block C is never executed.
Can someone please help me solve this? I have never used awk before and the syntax is very confusing. Also, if someone could explain how awk reads input from different files, that would be very helpful.
Please let me know if I need to add more details to the question.
If you have gawk, just test ARGIND:
awk '
ARGIND == 1 { do file 1 stuff; next }
ARGIND == 2 { do file 2 stuff; next }
' file1 file2
If you don't have gawk, get it.
In other awks though you can just test for the file name:
awk '
FILENAME == ARGV[1] { do file 1 stuff; next }
FILENAME == ARGV[2] { do file 2 stuff; next }
' file1 file2
That only fails if you want to parse the same file twice; if that's the case, you need to add a count of the number of times that file's been opened.
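Applied to this question, a sketch might look like the following (assuming the properties file file-0 holds a line like n=50; the file-2 action is just a placeholder for the real processing):
gawk '
ARGIND == 1 { if ($0 ~ /^n=/) n = substr($0, 3); next }   # file-0: a properties line like "n=50"
ARGIND == 2 { ids[$1] = int(i++ / n); next }               # file-1: single column, as in the original script
ARGIND == 3 { if ($1 in ids) print ids[$1], $0 }           # file-2: placeholder for the real work
' file-0 file-1 file-2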
Update: The solution below works, as long as all input files are nonempty, but see @Ed Morton's answer for a simpler and more robust way of adding file-specific handling.
However, this answer still provides a hopefully helpful explanation of some awk basics and why the OP's approach didn't work.
Try the following (note that I've made the indices 1-based, as that's how awk does it):
awk '
# Increment the current-file index, if a new file is being processed.
FNR == 1 { ++fIndex }
# Process current line if from 1st file.
fIndex == 1 {
print "file 1: " FILENAME
next
}
# Process current line if from 2nd file.
fIndex == 2 {
print "file 2: " FILENAME
next
}
# Process current line (from all remaining files).
{
print "file " fIndex ": " FILENAME
}
' file-1 file-2 file-3
Pattern FNR==1 is true whenever a new input file is starting to get processed (FNR contains the input file-relative line number).
Every time a new file starts processing, fIndex is incremented and thus reflects the 1-based index of the current input file. Tip of the hat to @twalberg's helpful answer.
Note that an uninitialized awk variable used in a numeric context defaults to 0, so there's no need to initialize fIndex (unless you want a different start value).
Patterns such as fIndex == 1 can then be used to execute blocks for lines from a specific input file only (assuming the block ends in next).
The last block is then executed for all input files that don't have file-specific blocks (above).
As for why your approach didn't work:
Your 2nd and 3rd blocks are potentially executed unconditionally, for lines from all input files, because they are not preceded by a pattern (condition).
So your 2nd block is entered for lines from all subsequent input files, and its next statement then prevents the 3rd block from ever getting reached.
Potential misconceptions:
Perhaps you think that each block functions as a loop processing a single input file. This is NOT how awk works. Instead, the entire awk program is processed in a loop, with each iteration processing a single input line, starting with all lines from file 1, then from file 2, ...
An awk program can have any number of blocks (typically preceded by patterns), and whether they're executed for the current input line is solely governed by whether the pattern evaluates to true; if there is no pattern, the block is executed unconditionally (across input files). However, as you've already discovered, next inside a block can be used to skip subsequent blocks (pattern-block pairs).
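To see both points concretely, here is a two-block program with no patterns; block 1 runs for every line of every input, and its next means block 2 is never reached:
$ printf 'a\nb\n' | awk '{ print "block 1:", $0; next } { print "block 2:", $0 }'
block 1: a
block 1: b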
Perhaps you need to consider adding some additional structure like this (note the counter starts at 0, because the FNR==1 rule increments it as soon as the first file begins):
BEGIN { file_number=0 }
FNR==1 { ++file_number }
file_number==3 && /something_else/ { ... }

How can I remove lines from a file with more than a certain number of entries

I've looked at the similar question about removing lines with more than a certain number of characters; my problem is related but a bit trickier. I have a file that is generated after analyzing some data, and each line is supposed to contain 29 numbers. For example:
53.0399 0.203827 7.28285 0.0139936 129.537 0.313907 11.3814 0.0137903 355.008 \
0.160464 12.2717 0.120802 55.7404 0.0875189 11.3311 0.0841887 536.66 0.256761 \
19.4495 0.197625 46.4401 2.38957 15.8914 17.1149 240.192 0.270649 19.348 0.230\
402 23001028 23800855
53.4843 0.198886 7.31329 0.0135975 129.215 0.335697 11.3673 0.014766 355.091 0\
.155786 11.9938 0.118147 55.567 0.368255 11.449 0.0842612 536.91 0.251735 18.9\
639 0.184361 47.2451 0.119655 18.6589 0.592563 240.477 0.298805 20.7409 0.2548\
56 23001585
50.7302 0.226066 7.12251 0.0158698 237.335 1.83226 15.4057 0.059467 -164.075 5\
.14639 146.619 1.37761 55.6474 0.289037 11.4864 0.0857042 536.34 0.252356 19.3\
91 0.198221 46.7011 0.139855 20.1464 0.668163 240.664 0.284125 20.3799 0.24696\
23002153
But every once in a while, a line like the first one appears that has an extra 8-digit number at the end, from analyzing an empty file (so it just returns the file ID number, but not on a new line like it should). So I just want to find lines that have this extra 30th number and remove just that 30th entry. I figure I could do this with awk, but since I have little experience with it I'm not sure how. So if anyone can help, I'd appreciate it.
Thanks
Summary: Want to find lines in a text file with an extra entry in a row and remove the last extra entry so all rows have same number of entries.
With awk, you can simply tell it how many fields there are per record; the extras are dropped. Note that assigning to NF rebuilds the record using the output field separator (a single space by default), so runs of whitespace between fields get normalized:
awk '{NF = 29; print}' filename
If you want to save that back to the file, you have to do a little extra work
awk '{NF = 29; print}' filename > filename.new && mv filename.new filename
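As a quick illustration of the effect, with a smaller field count (3 instead of 29):
$ echo 'a b c d e' | awk '{NF = 3; print}'
a b c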