mysqldumpslow -s c -t 15 -v /tmp/my-slow.log >> /tmp/file_`date +'%d_%m_%Y_%H_%M_%S'`.log
Reading mysql slow query log from /tmp/my-slow.log
Died at /usr/bin/mysqldumpslow line 162, <> chunk 18.
Try to reduce your "top entries" count ... try 10 or 5 instead of 15 ... maybe there are not enough entries in the log for a top-15 list.
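For example, the same invocation with -t 5 (one of the suggested values; paths and options otherwise unchanged from the question):

mysqldumpslow -s c -t 5 -v /tmp/my-slow.log >> /tmp/file_`date +'%d_%m_%Y_%H_%M_%S'`.log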
I am able to execute a SELECT query through the DB2 Command Line Processor and view the output in Notepad++.
C:\db2cmd>db2 "select * from employee fetch first 5 rows only" > output.txt
When I open output.txt in Notepad++, the output is displayed as shown below. After each and every line, a carriage return (CR) and line feed (LF) occur.
92881 0 13 1223
92886 0 17 1224
92890 0 20 1225
92892 0 21 1226
92896 0 24 1227
5 records.
Why do the CR LF characters occur when opening the query result .txt in Notepad++? How can I remove the CR LF between records? I am expecting the output as shown below.
I have a fixed-length file in which I am searching for 2 consecutive lines starting with the number 30, comparing the value at positions 183-187, and printing the line number if both match. I am able to achieve the desired result up to this stage, but I would also like to replace the value on that line with spaces without tampering with the fixed length.
awk '/^30*/{a=substr($0,183,5);getline;b=substr($0,183,5); if(a==b) print NR}' file
Explanation of the command above:
line starts with 30
assign to a the value at positions 183 to 187
get the next line
assign to b the value at positions 183 to 187
compare a and b; if they match, the value at positions 183 to 187 is the same in 2 consecutive lines starting with 30
print the line number (this is the line number of the 2nd match)
The above command works as expected and prints the line number.
Example records (for explanation purposes only, hence not using the real fixed length)
10 ABC
20 XXX
30 XYZ
30 XYZ
30 XYZ
30 XYZ
40 YYY
10 ABC
20 XXX
30 XYZ
30 XYZ
40 YYY
With the above command I am able to get line numbers 3 and 4, but I am unable to replace the value on the 4th line with spaces (an in-place replace) so that the fixed width is not compromised.
Expected Output
10 ABC
20 XXX
30 XYZ
30
30
30
40 YYY
10 ABC
20 XXX
30 XYZ
30
40 YYY
All the above lines should be 255 characters long; the replacement has to overwrite the value in place rather than append new spaces.
Any help will be highly appreciated. Thanks.
I would use GNU AWK and treat every character as a field. Consider the following example; let file.txt content be
10 ABC
20 XXX
30 XYZ
30 XYZ
40 YYY
then
awk 'BEGIN{FPAT=".";OFS=""}prev~/^30*/{a=substr(prev,4,3);b=substr($0,4,3);if(a==b){$4=$5=$6=" "}}{print}{prev=$0}' file.txt
output
10 ABC
20 XXX
30 XYZ
30
40 YYY
Explanation: I elected to store the whole line in a variable called prev rather than using getline, so I do {prev=$0} as the last action. I set FPAT to . indicating that any single character should be treated as a field, and OFS (the output field separator) to the empty string so no unwanted characters are added when the line is rebuilt. If prev (the previous line, or the empty string for the first line) starts with 3, I take the substring with characters 4, 5, 6 from the previous line (prev) and store it in variable a, take the substring with characters 4, 5, 6 from the current line ($0) and store it in variable b, and if a and b are equal I change the 4th, 5th and 6th characters to a space each. Whether the line was changed or not, I print it.

Disclaimer: this assumes you want to deal with at most 2 subsequent lines having an equal substring. Note that /^30*/ does not check whether the string starts with 30 but whether it starts with 3, e.g. it will match 312; you should probably use /^30/ instead. I elected to use your pattern unchanged as you imply it works as intended for your data.
(tested in gawk 4.2.1)
This might work for you (GNU sed):
sed -E '/^30/{N;s/^(.{182}(.{5}).*\n.{182})\2/\1     /}' file
Match on a line beginning 30 and append the following line.
Using pattern matching, if the 5 characters from 183-187 for both lines are the same, replace the second group of 5 characters with 5 spaces.
For multiple adjacent lines use:
sed -E '/^30/{:a;N;s/^(.{182}(.{5}).*\n.{182})\2/\1     /;ta}' file
Or alternatively:
sed -E ':a;$!N;s/^(30.{180}(\S{5}).*\n.{182})\2/\1     /;ta;P;D' file
It sounds like this is what you want, using any awk in any shell on every Unix box:
$ awk -v tgt=30 -v beg=11 -v end=13 '
($1==tgt) && seen[$1]++ { $0=substr($0,1,beg-1) sprintf("%*s",end-beg+1,"") substr($0,end+1) }
1' file
10 ABC
20 XXX
30 XYZ
30
40 YYY
Just change -v beg=11 -v end=13 to -v beg=183 -v end=187 for your real data.
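That is, for the real file the invocation would presumably become (only the beg/end values change):

$ awk -v tgt=30 -v beg=183 -v end=187 '
($1==tgt) && seen[$1]++ { $0=substr($0,1,beg-1) sprintf("%*s",end-beg+1,"") substr($0,end+1) }
1' file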
If you're ever again tempted to use getline, make sure to read awk.freeshell.org/AllAboutGetline first, as it's usually the wrong approach.
output.txt
Test Results 1 PASSED with 2 minutes to process 0 issues
Test Results 2 PASSED with 10 minutes to process 0 issues
Test Results 3 FAILED ERROR 1 issues
Test Results 4 PASSED with 4 minutes to process 0 issues
Test Results 5 FAILED ERROR 3 issues
Test Results 6 PASSED with 19 minutes to process 0 issues
I need help coming up with an awk command to parse this text. I want to list only the rows that have more than 0 issues.
So in this case:
Test Results 3 FAILED ERROR 1 issues
Test Results 5 FAILED ERROR 3 issues
Try this:
$ awk '$(NF-1)' file
Test Results 3 FAILED ERROR 1 issues
Test Results 5 FAILED ERROR 3 issues
Try the following:
awk '{if (($0 ~ /[0-9] issues/) && ($(NF-1) != "0")) {print $0}}' output.txt
Here the line is first checked to contain a digit followed by "issues", and then the issue count (the second-to-last field) is tested for being non-zero.
It could have been done using
awk '{if ($0 ~ /[1-9] issues/) {print $0}}' output.txt
in case you are certain that there would only ever be 1-9 issues and never 10 or more (the regex would miss 10, 20, 100 ... issues).
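Against the sample output.txt above, both variants print the same two rows:

$ awk '{if (($0 ~ /[0-9] issues/) && ($(NF-1) != "0")) {print $0}}' output.txt
Test Results 3 FAILED ERROR 1 issues
Test Results 5 FAILED ERROR 3 issues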
I heard that wc -l can count the number of lines in a file. However, when I use it to count the lines of a file generated by Python, it gives a different result, one line short.
Here is the MWE.
#!/usr/bin/env python
import random

def getRandomLines(in_str, num):
    res = list()
    lstr = len(in_str)
    for i in range(num):
        res.append(''.join(random.sample(in_str, lstr)))
    return res

def writeRandomLines(rd_lines, fname):
    lines = '\n'.join(rd_lines)
    with open(fname, 'w') as fout:
        fout.write(lines)

if __name__ == '__main__':
    writeRandomLines(getRandomLines("foobarbazqux", 20), "example.txt")
This gives a file, example.txt, that contains 20 lines of random strings. Thus, the expected number of lines in example.txt is 20. However, when one applies wc -l to it, it reports 19.
$ wc -l example.txt
19 example.txt
When one uses cat -n to show the content of the file with line numbers, one sees
$ cat -n example.txt
1 oaxruzaqobfb
2 ozbarboaufqx
3 fbzarbuoxoaq
4 obqfarbozaxu
5 xoqbrauboazf
6 ufqooxrababz
7 rqoxafuzboab
8 bfuaqoxaorbz
9 baxroazfouqb
10 rqzafoobxaub
11 xqaoabbufzor
12 aobxbaoruzfq
13 buozaqbrafxo
14 aobzoubfarxq
15 aquofrboazbx
16 uaoqrfobbaxz
17 bxqubarfoazo
18 aaxruzofbboq
19 xuaoarzoqfbb
20 bqouzxraobfa
Why does wc -l miscount by one line, and what can I do to fix this problem?
Any clues or hints will be appreciated.
In your Python code, you have:
lines = '\n'.join(rd_lines)
So what you are really writing is:
word1\nword2\n...wordX-1\nwordX
Unfortunately, in man wc:
-l, --lines
print the newline counts
hence your difference.
Apparently wc -l needs to see a \n at the end of a line to count it as one. Your current format leaves the last line without a trailing \n, so it is not counted by wc -l. Add the newline and it should be fixed.
wc -l only counts the number of newline characters. Since you are joining the lines with a '\n' character, only 19 '\n' characters are used to join 20 lines, hence the result of 19.
If you need the correct count, terminate each line with '\n'.
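A quick way to confirm this behaviour from the shell (the final line without a trailing newline is not counted):

$ printf 'line1\nline2' | wc -l
1
$ printf 'line1\nline2\n' | wc -l
2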
What's the easiest way to split a file and add a header to each section?
The Unix split command does everything that I need except the ability to add a header.
Any easy way to do it with existing tools before I script it up?
It is probably easiest to do this in either awk or perl. If you aren't processing much data, then using a simple shell script to post-process the output of split is probably fine. However, this will traverse the input more than once, which can be a bottleneck if you are doing this for any sort of online data processing task. Something like the following should work:
bash$ cat input-file | awk '
BEGIN {
    fnum = 1
    print "HEADER" > fnum        # start the first chunk with a header
}
{
    if ((NR % 10) == 0) {        # after every 10 input lines...
        close(fnum)              # ...close the current chunk,
        fnum++                   # move to the next file name,
        print "HEADER" > fnum    # and write its header
    }
    print >> fnum                # append the current line to the open chunk
}
'
bash$ wc -l input-file
239 input-file
bash$ ls
1 19 6
10 2 7
11 20 8
12 21 9
13 22 input-file
14 23
15 24
16 3
17 4
18 5
bash$