Find timestamp of the latest version of file - accurev

For the latest file in a stream, how can I get the timestamp associated with the version? I tried accurev hist <elem>, but that returns the entire history and now I have to parse through it.
Edit:
When I tried accurev hist -t highest <fileName>
the result is:
element: /./a/b/c/Ver_2.xlsm
eid: 461
transaction 1335; promote; 2012/10/30 16:50:01 ; user: scrubbed
version 1/3 (46/1)
Extracting just the timestamp, i.e. 2012/10/30 16:50:01, from this whole result seemed like more work. Hence my first question in the comments: can I get just the timestamp, i.e. 2012/10/30 16:50:01, back by itself?
After trying the new command accurev hist -fx -t highest -p mydepot Ver_2.xlsm, the output is XML:
......
<transaction
id="1335"
type="promote"
time="1351630201"
user="scrubbed">
........
This time, the timestamp is returned as an epoch value. I can probably find a way to convert this to a readable timestamp, but if it were possible to get 2012/10/30 16:50:01 back directly, that would work best.

Try tossing in the "-t highest" flag for your hist command; that will retrieve only the latest transaction information for the file in that stream...
This is for latest version in a specific stream:
accurev hist -fx -t highest -s stream_name .\path_to_element
This is for the latest version in the depot:
accurev hist -fx -t highest -p depot_name .\path_to_element
After your edit, I now understand that you JUST want the timestamp value. There's no way to return a single attribute. I'd suggest you use the -fx option and parse for the correct attribute. To convert epoch time to a readable value, use this:
c:>perl -e "print scalar localtime(1334932836);"
Fri Apr 20 10:40:36 2012
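If you want to pull just the readable timestamp out in one shot, something along these lines should work (a rough sketch: the sed pattern assumes the time="..." attribute appears exactly as in the -fx output above, and date -d @ needs GNU date; on systems without it, the perl one-liner above does the same conversion):
# grab the epoch value from the XML, then format it in the 2012/10/30 16:50:01 style
epoch=$(accurev hist -fx -t highest -p mydepot Ver_2.xlsm | sed -n 's/.*time="\([0-9]*\)".*/\1/p' | head -1)
date -d "@$epoch" "+%Y/%m/%d %H:%M:%S"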
Hope this helps.
~James

Related

Printf formatting a variable without forking?

For my powerlevel10k custom prompt, I currently have this function to display the seconds since the epoch, comma separated. I display it under the current time so I always have a cue to remember roughly what the current epoch time is.
function prompt_epoch() {
MYEPOCH=$(/bin/date +%s | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')
p10k segment -f 66 -t ${MYEPOCH}
}
My prompt looks like this: https://imgur.com/0IT5zXi
I've been told I can do this without the forked processes using these commands:
$ zmodload -F zsh/datetime p:EPOCHSECONDS
$ printf "%'d" $EPOCHSECONDS
1,648,943,504
But I'm not sure how to do that without the forking. I know to add the zmodload line in my ~/.zshrc before my powerlevel10k is sourced, but formatting ${EPOCHSECONDS} isn't something I know how to do without a fork.
If I were doing it the way I know, this is what I'd do:
function prompt_epoch() {
MYEPOCH=$(printf "%'d" ${EPOCHSECONDS})
p10k segment -f 66 -t ${MYEPOCH}
}
But as far as I understand it, that still forks a process every time the prompt is drawn, correct? Am I misunderstanding the advice given? I don't see a way to get the latest epoch seconds formatted without running some sort of process, which requires a fork.
The printf zsh builtin can assign the value to a variable using the -v flag. Therefore my function can be rewritten as:
function prompt_epoch() {
printf -v MYEPOCH "%'d" ${EPOCHSECONDS}
p10k segment -f 66 -t ${MYEPOCH}
}
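For reference, a sketch of how the pieces can sit together in ~/.zshrc (the zmodload line just has to run before powerlevel10k is sourced, as noted in the question; colour 66 and the segment name are taken from the question):
# expose EPOCHSECONDS without spawning any external process
zmodload -F zsh/datetime p:EPOCHSECONDS

function prompt_epoch() {
  # printf -v assigns the formatted value inside the current shell: no subshell, no fork
  printf -v MYEPOCH "%'d" ${EPOCHSECONDS}
  p10k segment -f 66 -t ${MYEPOCH}
}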
Thanks to this answer on Unix & Linux Stack Exchange: https://unix.stackexchange.com/a/697807/101884

Appending the datetime to the end of every line in a 600 million row file

I have a 680-million-row (19 GB) file that needs the datetime appended to every line. I get this file every night, and I have to add the time that I processed it to the end of each line. I have tried many ways to do this, including sed/awk and loading it into a SQL database with the last column defaulted to the current timestamp.
Is there a faster way to do this? My fastest approach so far takes two hours, and that is just not fast enough given the urgency of the information in this file. It is a flat CSV file.
Edit 1:
Here's what I've done so far:
awk -v date="$(date +"%Y-%m-%d %r")" '{ print $0","date}' lrn.ae.txt > testoutput.txt
Time = 117 minutes
perl -ne 'chomp; printf "%s.pdf\n", $_' EXPORT.txt > testoutput.txt
Time = 135 minutes
mysql load data local infile '/tmp/input.txt' into table testoutput
Time = 211 minutes
You don't specify if the timestamps have to be different for each of the lines. Would a "start of processing" time be enough?
If so, a simple solution is to use the paste command, with a pre-generated file of timestamps, exactly the same length as the file you're processing. Then just paste the whole thing together. Also, if the whole process is I/O bound, as others are speculating, then maybe running this on a box with an SSD drive would help speed up the process.
I just tried it locally on a 6 million row file (roughly 1% of yours), and it's able to do it in less than one second on a MacBook Pro with an SSD drive.
~> date; time paste file1.txt timestamps.txt > final.txt; date
Mon Jun 5 10:57:49 MDT 2017
real 0m0.944s
user 0m0.680s
sys 0m0.222s
Mon Jun 5 10:57:49 MDT 2017
I'm going to now try a ~500 million row file, and see how that fares.
Updated:
OK, the results are in. paste is blazing fast compared to your solution: it took just over 90 seconds total to process the whole thing, 600M rows of simple data.
~> wc -l huge.txt
600000000 huge.txt
~> wc -l hugetimestamps.txt
600000000 hugetimestamps.txt
~> date; time paste huge.txt hugetimestamps.txt > final.txt; date
Mon Jun 5 11:09:11 MDT 2017
real 1m35.652s
user 1m8.352s
sys 0m22.643s
Mon Jun 5 11:10:47 MDT 2017
You still need to prepare the timestamps file ahead of time, but that's a trivial bash loop. I created mine in less than one minute.
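For reference, the timestamps file can be pre-generated with something along these lines (a sketch, not necessarily the exact loop used above; an awk BEGIN loop avoids a literal 600-million-iteration bash loop):
ts=$(date +"%Y-%m-%d %r")
n=$(wc -l < huge.txt)
# print the timestamp once per input line; awk does this far faster than a shell loop
awk -v ts="$ts" -v n="$n" 'BEGIN { for (i = 0; i < n; i++) print ts }' > hugetimestamps.txt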
A solution that simplifies mjuarez' helpful approach:
yes "$(date +"%Y-%m-%d %r")" | paste -d',' file - | head -n "$(wc -l < file)" > out-file
Note that, as with the approach in the linked answer, you must know the number of input lines in advance - here I'm using wc -l to count them, but if the number is fixed, simply use that fixed number.
yes keeps repeating its argument indefinitely, each on its own output line, until it is terminated.
paste -d',' file - pastes a corresponding pair of lines from file and stdin (-) on a single output line, separated with ,
Since yes produces "endless" output, head -n "$(wc -l < file)" ensures that processing stops once all input lines have been processed.
The use of a pipeline acts as a memory throttle, so running out of memory shouldn't be a concern.
Another alternative to test is
$ date +"%Y-%m-%d %r" > timestamp
$ join -t, -j9999 file timestamp | cut -d, -f2-
or the timestamp can be generated in place as well: <(date +"%Y-%m-%d %r")
join creates a cross product of the first and second files using the non-existent field (9999); since the second file is only one line, this effectively appends it to every line of the first file. The cut is needed to get rid of the empty key field generated by join.
If you want to add the same (current) datetime to each row in the file, you might as well leave the file as it is, and put the datetime in the filename instead. Depending on the use later, the software that processes the file could then first get the datetime from the filename.
To put the same datetime at the end of each row, some simple code could be written:
Make a string containing a separator and the datetime.
Read the lines from the file, append the above string and write back to a new file.
This way a conversion from datetime to string is only done once, and converting the file should not take much longer than copying the file on disk.
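As a concrete illustration of the filename suggestion above, something like this would do (a sketch; lrn.ae.txt is the input file named in the question, and the timestamp format is arbitrary):
# no rewriting of the 600M rows at all: record the processing time in the file name
mv lrn.ae.txt "lrn.ae.$(date +%Y-%m-%d_%H%M%S).txt"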

Accurev: How can I get a list of all users from a stream?

I need to get a list of all users who've contributed to a stream. I think I can just dump the entire history of the stream then parse it for the users like this (see hist for details):
accurev hist -s <stream> -a -fv
but this seems very crude, especially since I'm not interested in the history itself. Is there a more elegant way of doing this?
This works nicely:
accurev hist -p <depot> -s <stream> -a -fv | sed -n 's/.*user: \(.*\)/\1/p' | sort | uniq
You need to run the accurev hist command to obtain this information.
You can add the "-k promote" option to restrict the output to show only promote operations.
Also, you can use the -fx option to format the output in XML and write a small script to generate a simple list of users.
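A rough sketch of what that could look like (assuming each transaction element in the XML carries a user="..." attribute, as in the hist -fx output shown earlier; <depot> and <stream> are placeholders as above):
# pull every user attribute out of the XML and de-duplicate
accurev hist -p <depot> -s <stream> -a -k promote -fx | sed -n 's/.*user="\([^"]*\)".*/\1/p' | sort -u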

AccuRev : How to get all files changed?

I am looking to get the list of files changed within a time range.
For example 2013/11/11 11:10:00-now.
The accurev hist command gives the files changed on that particular stream, but it does not include the changes that came from the parent stream.
Is there a way to get the list of changes that flowed down from parent streams?
Change the basis time of your child stream to 2013/11/11 11:10:00. Then perform a diff by files across the child and parent streams.
Accurev 6 has added some new arguments for the diff command so the following should do the trick:
accurev diff -a -i -v MyStream -V MyStream -t "2013/11/11 11:10:00-now"
Alternatively you could try the accurev.py script, from the ac2git repo, which will return to you all the transactions that could have affected your stream. Run it like this:
python accurev.py deep-hist -p MyDepot -s MyStream -t "2013/11/11 11:10:00-now"

How to delete last row in output file generated by nzsql

I am trying to delete the last row in the file generated by nzsql. Please find the query below.
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" > abc.out
When I execute this query, the output is generated and stored in abc.out. This includes both the header columns and some time information at the bottom. But I don't need the bottom metadata and want to keep only my header columns. How can I do this using only nzsql? Please help me. Thanks in advance.
Use the -r flag in the nzsql command to avoid getting that row [assuming the metadata referred to in the question is the row count summary line, e.g. (3 rows)].
-r Suppresses the row count that is displayed at the end of the SQL output.
reference: http://pic.dhe.ibm.com/infocenter/ntz/v7r0m3/index.jsp?topic=%2Fcom.ibm.nz.adm.doc%2Fr_sysadm_nzsql_command.html
Why don't you just pipe the output to a unix command to remove it? I think something like this will work:
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | sed '$d' > abc.out
sed '$d' seems to be a commonly recommended solution for getting rid of the last line (although ed, gawk, and other tools can handle it too).
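If you prefer not to use sed, a couple of equivalent ways to drop the last line (a sketch; note that head -n -1 requires GNU coreutils, so it may not be available everywhere):
# GNU head: print everything except the last line
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | head -n -1 > abc.out
# portable awk: print each line one step late, so the final summary line is never printed
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | awk 'NR > 1 { print prev } { prev = $0 }' > abc.out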