How do I overwrite a file from a cronjob? I have a command running every 5 minutes, and I want its result stored in a text file that is overwritten with the new result every 5 minutes. How?
You can have the cronjob itself write its output to the file on each run, so the file is replaced before the next run.
To copy the content of one file into another you can use cat file.txt > file2.txt, or append with cat file.txt >> file2.txt.
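For example, a crontab entry along these lines (the command and file paths are placeholders, not your actual setup) runs every 5 minutes and overwrites the result file each time, because a single > truncates the file before writing:
*/5 * * * * /path/to/your-command > /path/to/result.txt 2>&1
If you ever want to keep a history instead, change the > to >>.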
I want to add a row number to each line of a file, so I do it like this:
awk '{print $0 "\x03" NR > "/opt/data2/gds_test/test_partly.txt"}' /opt/data2/gds_test/test_partly.txt
I put this command in a shell script and ran it. After quite a while it still had not finished, so I killed it, and I found the source file had grown from 1.7G to 242G.
What happened? I am a little confused.
I had tested this awk command on a small file at the command line and it seemed fine.
You're reading from the front of a file at the same time as you're writing onto the end of it. Try this instead:
tmp=$(mktemp)
awk '{print $0 "\x03" NR}' '/opt/data2/gds_test/test_partly.txt' > "$tmp" &&
mv "$tmp" '/opt/data2/gds_test/test_partly.txt'
Yes, I changed it to redirect the result to a tmp file, delete the original file, and rename the tmp file, and it works.
I also just learned that gawk -i inplace can be used.
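For reference, a minimal sketch of that approach, assuming a gawk recent enough to ship the inplace extension (it still writes to a temporary file behind the scenes before replacing the original):
gawk -i inplace '{ print $0 "\x03" NR }' /opt/data2/gds_test/test_partly.txt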
I have two files file1 containing -
hello world
hello bangladesh
and file2 containing -
Dhaka in Bangladesh
Dhaka is capital of Bangladesh
I want the new file2 to be -
hello world
hello bangladesh
Dhaka in Bangladesh
Dhaka is capital of Bangladesh
This is done by -
cat file1 file2 >> file3
mv file3 file2
But I don't want to create a new file. I guess it may be possible using sed.
Sure it's possible.
printf '%s\n' '0r file1' x | ex file2
This is a POSIX-compliant command using ex, the POSIX-specified non-visual predecessor to vi.
printf is only used here to feed a command to the editor. What printf outputs is:
0r file1
x
x is save and exit.
r is "read in the contents of the named file."
0 specifies the line number after which the read-in text should be placed.
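Equivalently, the same two commands can be fed to ex with a here-document; a sketch of that form:
ex file2 << 'EOF'
0r file1
x
EOF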
N.B. This answer is fully copied from Is it possible to add some text at beginning of a file in CLI without making new file?
Another solution
There aren't a lot of ways to modify files in place using standard tools. Even tools that appear to do so may be using temporary files behind the scenes (e.g. GNU sed -i).
ex, on the other hand, will do the trick:
ex +'0r file2' +wq file1
ex is a line editor, and vim evolved from it, so these commands may look familiar. 0r filename does the same thing as :0r filename in vim: insert the specified file after the given address (line number). The address here is 0, a kind of virtual line representing the line before line 1, so the file is inserted before any existing text.
Then we have wq which saves and quits.
That's it.
N.B. This answer is fully copied from https://unix.stackexchange.com/a/414408/176227
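Since that answer was copied verbatim, its file names are swapped relative to this question; adapted to the files here (insert file1's content at the top of file2), the same command would presumably be:
ex +'0r file1' +wq file2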
You want to insert two lines at the top of an existing file.
There's no general way to do that. You can append data to the end of an existing file, but there's no way to insert data other than at the end, at least not without rewriting the portion of the file after the insertion point.
Once a file has been created, you can overwrite data within it, but the position of any unmodified data remains the same.
When you use a text editor to edit a file, it performs the updates in memory; when you save the file, it may create a new file with the same name as the old one, or it may create a temporary file and then rename it. Either way, it has to write the entire content, and the original file will be clobbered (though some editors may create a backup copy).
Your method:
cat file1 file2 >> file3
mv file3 file2
is pretty much the way to do it (except that the >> should be >; you don't want to retain the previous contents of file3 if it already exists).
You can use a temporary file name:
cat file1 file2 > tmp$$ && mv tmp$$ file2
tmp$$ uses the current shell's process ID to generate a file name that's almost certain to be unique (so you don't clobber anything). The && means that the mv command won't be executed if the cat command failed, so that you'll still have the original file2 if there was an error.
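If you'd rather not rely on $$, mktemp (where available) generates a unique temporary file name for you; a minimal sketch of the same idea:
tmp=$(mktemp) && cat file1 file2 > "$tmp" && mv "$tmp" file2
As above, the && chain means file2 is only replaced if every earlier step succeeded.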
I am trying to sort through mail log files using awk. My goal is to determine which emails had a delay longer than 10 seconds. The log file shows the delays as delay=xxxx. I have come up with:
awk '/delay/ { if($9 >=10) print}' filename
When I run this command it returns all the entries with the word delay and does not just give the delays that are greater than 10 seconds.
Please help.
Here is a sample log file, mog:
$ cat mog
... the log file displays the delays in delay=11 ...
... the log file displays the delays in delay=9 ...
and the script:
$ awk '/delay/{split($9,a,"=");if(a[2]>=10)print}' mog
... the log file displays the delays in delay=11 ...
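The split on $9 works for this log layout. If the delay= token can land in a different field, a sketch along these lines (assuming the value after delay= is a plain integer) matches it anywhere on the line:
awk 'match($0, /delay=[0-9]+/) { d = substr($0, RSTART+6, RLENGTH-6); if (d+0 >= 10) print }' mog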
I have a file called "text.bz2" which contains a number of records that I want to process. I have a script which successfully processes all the data in a standard text file and outputs the results to a "results.txt" file, but the command I'm currently running prints all the results from the bz2 file to the command prompt (like cat does) and creates a results.txt file that is empty.
This is the command I'm running:
bzip2 -dc text.bz2 | awk ... '
'
> results.txt
The format of the data in the decompressed bz2 file is:
field1=xxx;field2=xxx;field3=111222222;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222222;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222333;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222444;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222555;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222555;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222777;field4=xxx;field5=xxx
field1=xxx;field2=xxx;field3=111222888;field4=xxx;field5=xxx
and the output is exactly as expected, as below, except that instead of being written to the text file it appears in the command window:
111222333 111
111222444 111
111222555 111
111222777 222
111222888 111
What am I doing wrong with my bzip2 / redirection command?
Many thanks
Put the > file at the end of the awk command, not on the line after it:
foo | awk 'script' > file
not
foo | awk 'script'
> file
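Applied to your command (keeping your awk script elided here), that looks like:
bzip2 -dc text.bz2 | awk '
...
' > results.txt
On a line of its own, > results.txt simply truncates results.txt to an empty file, which is why the file was created but left empty.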
I have a command which decodes binary logs to ASCII format.
From the ASCII file, I need to grep some patterns using awk and print them.
How can this be done?
What I have tried is below, in a shell script, and it does not work.
command > file.txt | awk /pattern/ | sed/pattern/
Also, I need the command to continuously decode the file and keep printing matching patterns as the file is updated.
Thanks in advance
command to continuously decode the file and keep printing patterns
The first question is exactly how continuously manifests itself. Most log files grow by being appended to -- for our purpose here, by some unknown external process -- and are periodically rotated. If you're going to continuously decode them, you're going to have to keep track of log rotation.
Can command continuously decode, or do you intend to re-run command periodically, picking up where you left off? If the latter, you might instead try some variation of:
cat log | command | awk
If that can't be done, you'll have to record where each iteration terminates, something like:
touch pos
while [ -f pos ]
do
    command | awk -v status=pos -f script.awk > output || rm pos
done
where script.awk skips input until NR reaches the number recorded in the pos file, then processes lines until EOF and overwrites pos with its final NR. On error it calls exit 1, the pos file is removed, and the loop terminates.
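A minimal sketch of what such a script.awk could look like (the pos file, the status variable, and /pattern/ are just the placeholder names used above, not a tested implementation):
# read where the previous iteration stopped; an empty pos file means start from the beginning
BEGIN {
    last = 0
    if ((getline last < status) <= 0)
        last = 0
    close(status)
}
NR <= last + 0 { next }       # skip lines already handled in an earlier run
/pattern/ { print }           # put the real matching logic here
END { print NR > status }     # record the final position for the next run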
I recommend you ignore sed, and put all the pattern matching logic in one awk script. It will be easier to understand and cheaper to execute.
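For example, rather than command > file.txt | awk ... | sed ..., a single pipeline into one awk program can do both the selecting and any editing (pattern, old, and new below are placeholders for whatever you actually need to match and change):
command | awk '/pattern/ { sub(/old/, "new"); print }'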