How to rename photo files using awk, such that they are named (and hence ordered) by "date taken"? - awk

I have 3 groups of photos, from 3 different cameras (with the time synchronised on all cameras) but with different naming schemes (e.g.: IMG_3142.jpg, DCM_022.jpg). I would like to rename every photo file with the following naming convention:
1_yyyy_mm_dd_hh_mm_ss.jpg for earliest
2_yyyy_mm_dd_hh_mm_ss.jpg for next earliest, and so on,
until we reach around 5000_yyyy_mm_dd_hh_mm_ss.jpg for the last one (i.e. the most recent)
I would like the yyyy_mm_dd_hh_mm_ss field to be replaced by the “date and time taken” value for when the photo was taken, which is saved in the metadata/properties of each file.
I have seen awk used to carry out similar operations but I'm not familiar enough to know how to access the “time taken” metadata, etc.
Also, not that this should make a difference: my computer is a Mac.

You can use jhead for this. The command is:
jhead -n%Y_%m_%d_%H_%M_%S *.jpg
Make a COPY of your files first before running it! You can install jhead with homebrew using:
brew install jhead
Or, if you don't have homebrew, download here for OS X.
That will get you the date in the filename as you wish. The sequence number is a little more difficult. Try what I am suggesting above and, if you are happy with that, we can work on the sequence number. Basically, you would run jhead again to set the file modification times of your files to match the time they were shot - then the files can be listed in date order and we can put your sequence number on the front.
So, to get the file's date set on the computer to match the time it was taken, do:
jhead -ft *.jpg
Now all the files will be dated on your computer to match the time the photos were taken. Then we need to whizz through them in a loop with our script adding in the sequence number:
#!/bin/bash
seq=1
# List files in order, oldest first
for f in $(ls -rt *jpg)
do
# Work out new name
new="$seq_$f"
echo Rename $f as $new
# Remove "#" from start of following command if things look good so the renaming is actually done
# mv "$f" $new"
((seq++))
done
You would save that in your HOME directory as renamer, then you would go into Terminal and make the script executable like this:
chmod +x renamer
Then you need to go to where your photos are, say Desktop/Photos
cd "$HOME/Desktop/Photos"
and run the script
$HOME/renamer
That should do it.
By the way, I wonder how wise it is to use a simple sequence number at the start of your filenames because that will not make them come up in order when you look at them in Finder.
Think of file 20, i.e. 20_2015_02_03_11_45_52.jpg. Now imagine that files starting with 100-199 will be listed BEFORE file 20, and files 1000-1999 will also be listed before file 20 - because their leading 1s come before file 20's leading 2. So, you may want to name your files:
0001_...
0002_...
0003_...
...
0019_...
0020_...
then they will come up in sequential order in Finder. If you want that, use this script instead:
#!/bin/bash
seq=1
for f in $(ls -rt *jpg)
do
# Generate new name with zero-padded sequence number
new=$(printf "%04d_%s" "$seq" "$f")
echo Rename $f as $new
# Remove "#" from start of following command if things look good so the renaming is actually done
# mv "$f" $new"
((seq++))
done
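Putting the pieces together, here is a minimal end-to-end sketch of the whole procedure. It assumes jhead is installed and all your files end in .jpg, and, as stressed above, that you run it on a COPY of your photos first:
#!/bin/bash
# 1. Rename every file to its EXIF "date taken"
jhead -n%Y_%m_%d_%H_%M_%S *.jpg
# 2. Set each file's modification time to the time it was shot,
#    so the directory listing can be ordered by shooting time
jhead -ft *.jpg
# 3. Prefix a zero-padded sequence number, oldest first
seq=1
for f in $(ls -rt *.jpg)
do
    new=$(printf "%04d_%s" "$seq" "$f")
    echo Rename "$f" as "$new"
    # Remove the "#" below once the dry-run output looks right
    # mv "$f" "$new"
    ((seq++))
done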

Related

Notifications on next ssh login

This is a hypothetical question because I'd like to know if it's even possible before I delve into scripting it, but is it theoretically possible to have the output of a script/process (in particular one run via cron, for instance) spat out into the terminal on the next ssh login?
Some pseudocode that I hope illustrates my point:
#!/bin/bash
# Download latest example of a database (updated automatically and periodically)
wget --mirror "http://somedatabase/database_latest"
# Run a command that generates an output for a set of files queried against the latest database)
for file in /some/dir/*;
do
command -output $file.txt -database database_latest
done
# Now for the bit I'm more interested in.
# If the database has been updated, the 'output.txt'
# for each file will be different.
# So, using diff...:
# where ${file}_old.txt is the output of the command the
# last time it ran for that file
if ! diff -q "$file.txt" "${file}_old.txt" > /dev/null
then
    mv "${file}_old.txt" ./archive/ # Keep the old file but stash it in a separate dir
else
    break
fi
# Make some report file from all of the outputs
cat *.txt > report.txt
So my question being, is it possible to have the script 'inform me' next time I log in to our server, if any differences were found for each file? There are a lot of files, and the 'report.txt' would become large quickly, so I only want to check it if differences are found.
How about this:
create three directories: new, cur, old
your weekly cronjob writes data to new. This job should delete everything from new before writing the new data; otherwise you won't be able to notice that a file has gone missing
cur contains the last version of the data that you looked at or considered
old contains the previous version of the data
Each time you log on, run:
#!/bin/bash
# clear the archive
rm old/*
# copy the files you saw last time into the archive
cp cur/* old
# copy the newly generated files into cur
cp new/* cur
# show which files differ between the new data and the previous run
diff -q cur old | tee report.txt
The diff-command will print which files are new, which are missing and which are changed. Output from diff will also be in report.txt. The cur-directory will contain all files from the last run and you can look closer at these in an editor, or you can compare them to the previous version in old. Note that if a file is missing in new, it won't be deleted from cur. The next time you log on, you will lose the contents of the old-directory. If you want to keep a history of all previous results, this should be managed by the weekly cronjob, not the login-script (you want to store a separate version each time you generate the data, not each time you log in)
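To have this run automatically when you ssh in, you can call the script from your shell's login startup file. A minimal sketch, assuming a bash login shell and that the script above is saved as ~/bin/check_changes (the name is just an example):
# Add to ~/.bash_profile (or ~/.profile), which bash reads for login
# shells, including interactive ssh sessions
if [ -x "$HOME/bin/check_changes" ]; then
    "$HOME/bin/check_changes"
fi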

How to get a list of files modified since date/revision in Accurev

I have created a workspace backed by some collaboration stream. The stream is updated regularly by team members. My goal is to take modified files in a given path and put them to another repository (do it regularly).
The question is how to create a list of files which were modified since a revision or date or ..? (I don't know which approach is the best.) The command line is preferable.
Once I get the file list I create an automating script to take the files from one place and put them to another.
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl
You can run accurev stat -m -fx and then parse the resulting XML. The <element> elements will have a modTime attribute, which is the UNIX timestamp of when the file was modified.
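As a rough sketch of that parsing step (untested against a real AccuRev server; the <element> name and modTime attribute come from the answer above, but the location attribute used for the file path is an assumption, so check your own -fx output and adjust):
# Files whose modTime is newer than a given cutoff, using xmllint (libxml2)
cutoff=$(date -d "2013-05-16 01:00:00" +%s)   # GNU date syntax
accurev stat -m -fx > stat.xml
# ASSUMPTION: the path lives in a "location" attribute on each <element>;
# paths containing spaces would need more careful handling than this
xmllint --xpath "//element[@modTime > $cutoff]/@location" stat.xml \
    | tr ' ' '\n' | sed -n 's/^location="\(.*\)"$/\1/p'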

Finding files in subdirectories created after a certain date

I'm in the process of writing a bash script (just learning it) which needs to find files in subdirectories created after a certain date. I have a folder /images/ with jpegs in various subfolders - I want to find all jpegs uploaded to that directory (or any subdirectories) after a certain date. I know about the -mtime flag, but my "last import" date is stored in %Y-%m-%d format and it'd be nice to use that if possible?
Also, each file/pathname will then be used to generate a MySQL SELECT query. I know find generally outputs the filenames found, line-by-line. But if find isn't actually the command that I should be using, it'd be nice to have a similar output format I could use to generate the SELECT query (WHERE image.file_name IN (...))
Try the script below:
DATE=<<date>>                          # your date, in YYYY-MM-DD format
SEARCH_PATH=/images/
DATE=$(echo "$DATE" | sed 's/-//g')    # strip the dashes
DATE="${DATE}0000"                     # append hhmm, giving the YYYYMMDDhhmm format touch -t expects
FILE=~/timecheck_${RANDOM}_$(date +"%Y%m%d%H%M")
touch -t "$DATE" "$FILE"               # reference file carrying that timestamp
# Everything newer than the reference file, printed as a quoted,
# comma-separated list ready for an SQL "IN (...)" clause
find "$SEARCH_PATH" -newer "$FILE" 2>/dev/null | awk 'BEGIN{f=0}{if(f==1)printf("\"%s\", ",l);l=$0;f=1}END{printf("\"%s\"",l)}'
rm -f "$FILE"
You can convert your date into the "last X days" format that find -mtime expects.
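For example, a small sketch of that conversion, assuming GNU date and find (the BSD/macOS equivalent is noted in a comment):
LAST_IMPORT="2015-02-03"                 # your stored "last import" date
now_ts=$(date +%s)
import_ts=$(date -d "$LAST_IMPORT" +%s)  # GNU date; on BSD/macOS: date -j -f "%Y-%m-%d" "$LAST_IMPORT" +%s
days=$(( (now_ts - import_ts) / 86400 ))
# jpegs modified within the last $days days, anywhere under /images/
find /images/ -type f -iname '*.jpg' -mtime -"$days"
If your find supports it, GNU find's -newermt option accepts the %Y-%m-%d string directly (find /images/ -newermt "$LAST_IMPORT"), which avoids the arithmetic altogether.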
find is the correct command for this task. Send its output somewhere, then parse the file into the query.
Beware of SQL injection attacks if the files were uploaded by users. Beware of special-character quoting even if they weren't.

Git - how do I view the change history of a method/function?

So I found the question about how to view the change history of a file, but the change history of this particular file is huge and I'm really only interested in the changes of a particular method. So would it be possible to see the change history for just that particular method?
I know this would require git to analyze the code and that the analysis would be different for different languages, but method/function declarations look very similar in most languages, so I thought maybe someone has implemented this feature.
The language I'm currently working with is Objective-C and the SCM I'm currently using is git, but I would be interested to know if this feature exists for any SCM/language.
Recent versions of git log learned a special form of the -L parameter:
-L :<funcname>:<file>
Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments. You can specify this option more than once.
...
If “:<funcname>” is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. “:<funcname>” searches from the end of the previous -L range, if any, otherwise from the start of file. “^:<funcname>” searches from the start of file.
In other words: if you ask Git to git log -L :myfunction:path/to/myfile.c, it will now happily print the change history of that function.
git gui blame is hard to use in scripts, and whilst git log -G and the pickaxe search (git log -S) can each show you when the method definition appeared or disappeared, I haven't found any way to make them list all changes made to the body of your method.
However, you can use gitattributes and the textconv property to piece together a solution that does just that. Although these features were originally intended to help you work with binary files, they work just as well here.
The key is to have Git remove from the file all lines except the ones you're interested in before doing any diff operations. Then git log, git diff, etc. will see only the area you're interested in.
Here's the outline of what I do in another language; you can tweak it for your own needs.
Write a short shell script (or other program) that takes one argument -- the name of a source file -- and outputs only the interesting part of that file (or nothing if none of it is interesting). For example, you might use sed as follows:
#!/bin/sh
sed -n -e '/^int my_func(/,/^}/ p' "$1"
Define a Git textconv filter for your new script. (See the gitattributes man page for more details.) The name of the filter and the location of the command can be anything you like.
$ git config diff.my_filter.textconv /path/to/my_script
Tell Git to use that filter before calculating diffs for the file in question.
$ echo "my_file diff=my_filter" >> .gitattributes
Now, if you use -G. (note the .) to list all the commits that produce visible changes when your filter is applied, you will have exactly those commits that you're interested in. Any other options that use Git's diff routines, such as --patch, will also get this restricted view.
$ git log -G. --patch my_file
Voilà!
One useful improvement you might want to make is to have your filter script take a method name as its first argument (and the file as its second). This lets you specify a new method of interest just by calling git config, rather than having to edit your script. For example, you might say:
$ git config diff.my_filter.textconv "/path/to/my_command other_func"
Of course, the filter script can do whatever you like, take more arguments, or whatever: there's a lot of flexibility beyond what I've shown here.
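For instance, a hypothetical parameterized filter might look like this (the script and regex are illustrative only; Git appends the file being diffed as the last argument, so it arrives as $2 here):
#!/bin/sh
# $1 = name of the function to keep, $2 = file Git is converting
sed -n -e "/$1(/,/^}/ p" "$2"
You would then wire it up exactly as in the git config example above, e.g. git config diff.my_filter.textconv "/path/to/my_script other_func".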
The closest thing you can do is to determine the position of your function in the file (e.g. say your function i_am_buggy is at lines 241-263 of foo/bar.c), then run something to the effect of:
git log -p -L 200,300:foo/bar.c
This will open less (or an equivalent pager). Now you can type in /i_am_buggy (or your pager equivalent) and start stepping through the changes.
This might even work, depending on your code style:
git log -p -L '/int i_am_buggy\(/,+30:foo/bar.c'
This limits the search from the first hit of that regex (ideally your function declaration) to thirty lines after it. The end argument can also be a regex, although detecting the end of a function with a regex is an iffier proposition.
git log has a -G option that can be used to search for differences.
-G Look for differences whose added or removed line matches the
given <regex>.
Just give it a proper regex of the function name you care about. For example,
$ git log --oneline -G'^int commit_tree'
40d52ff make commit_tree a library function
81b50f3 Move 'builtin-*' into a 'builtin/' subdirectory
7b9c0a6 git-commit-tree: make it usable from other builtins
The correct way is to use git log -L :function:path/to/file as explained in eckes's answer.
But in addition, if your function is very long, you may want to see only the changes each commit introduced, rather than the full set of function lines (including unmodified ones) for every commit that perhaps touched only one of them, the way a normal diff does.
Normally git log can show just the differences with -p, but that does not work with -L.
So you have to grep the git log -L output to keep only the lines involved, plus the commit/file headers that put them in context. The trick here is to match only terminal-coloured lines, by adding the --color switch and using a regex. Finally:
git log -L :function:path/to/file --color | grep --color=never -E -e "^(^[\[[0-9;]*[a-zA-Z])+" -3
Note that ^[ should be an actual, literal escape character. You can type it by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [.
The final -3 switch prints 3 lines of context before and after each matched line. You may want to adjust it to your needs.
Show function history with git log -L :<funcname>:<file> as shown in eckes's answer and the git docs
If it shows nothing, refer to Defining a custom hunk-header and add something like *.java diff=java to the .gitattributes file to support your language (a concrete example follows this list)
Show function history between commits with git log commit1..commit2 -L :functionName:filePath
Show overloaded function history (there may be many functions with the same name but different parameters) with git log -L :sum\(double:filepath
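As a concrete illustration of the .gitattributes tip above, and since the question mentions Objective-C (Git ships a built-in objc hunk-header pattern; check your version's gitattributes documentation if in doubt):
# .gitattributes at the top of the repository
*.m  diff=objc
*.h  diff=objc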
git blame shows you who last changed each line of the file; you can specify the lines to examine so as to avoid getting the history of lines outside your function.
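For example, using the hypothetical line range from the earlier answer:
git blame -L 241,263 foo/bar.c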

What does f+++++++++ mean in rsync logs?

I'm using rsync to make a backup of my server files, and I have two questions:
In the middle of the process I need to stop and start rsync again.
Will rsync start from the point where it stopped or it will restart from the beginning?
In the log files I see "f+++++++++". What does it mean?
e.g.:
2010/12/21 08:28:37 [4537] >f.st...... iddd/logs/website-production-access_log
2010/12/21 08:29:11 [4537] >f.st...... iddd/web/website/production/shared/log/production.log
2010/12/21 08:29:14 [4537] .d..t...... iddd/web/website/production/shared/sessions/
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.017a771cc19b18cd
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.01eade9d317ca79a
Let's take a look at how rsync works and better understand the cryptic result lines:
1 - A huge advantage of rsync is that after an interruption the next time it continues smoothly.
The next rsync invocation will not transfer again the files it has already transferred, provided they have not changed in the meantime. But it will check all the files again from the beginning to find that out, as it is not aware that it was interrupted.
2 - Each character is a code that can be translated if you read the section for -i, --itemize-changes in man rsync
Decoding your example log file from the question:
>f.st......
> - the item is received
f - it is a regular file
s - the file size is different
t - the time stamp is different
.d..t......
. - the item is not being updated (though it might have attributes
that are being modified)
d - it is a directory
t - the time stamp is different
>f+++++++++
> - the item is received
f - a regular file
+++++++++ - this is a newly created item
The relevant part of the rsync man page:
-i, --itemize-changes
Requests a simple itemized list of the changes that are being made to
each file, including attribute changes. This is exactly the same as
specifying --out-format='%i %n%L'. If you repeat the option, unchanged
files will also be output, but only if the receiving rsync is at least
version 2.6.7 (you can use -vv with older versions of rsync, but that
also turns on the output of other verbose messages).
The "%i" escape has a cryptic output that is 11 letters long. The
general format is like the string YXcstpoguax, where Y is replaced by
the type of update being done, X is replaced by the file-type, and the
other letters represent attributes that may be output if they are
being modified.
The update types that replace the Y are as follows:
A < means that a file is being transferred to the remote host (sent).
A > means that a file is being transferred to the local host (received).
A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink,
etc.).
A h means that the item is a hard link to another item (requires --hard-links).
A . means that the item is not being updated (though it might have attributes that are being modified).
A * means that the rest of the itemized-output area contains a message (e.g. "deleting").
The file-types that replace the X are: f for a file, a d for a
directory, an L for a symlink, a D for a device, and a S for a
special file (e.g. named sockets and fifos).
The other letters in the string above are the actual letters that will
be output if the associated attribute for the item is being updated or
a "." for no change. Three exceptions to this are: (1) a newly created
item replaces each letter with a "+", (2) an identical item replaces
the dots with spaces, and (3) an unknown attribute replaces each
letter with a "?" (this can happen when talking to an older rsync).
The attribute that is associated with each letter is as follows:
A c means either that a regular file has a different checksum (requires --checksum) or that a symlink, device, or special file has a
changed value. Note that if you are sending files to an rsync prior to
3.0.1, this change flag will be present only for checksum-differing regular files.
A s means the size of a regular file is different and will be updated by the file transfer.
A t means the modification time is different and is being updated to the sender’s value (requires --times). An alternate value of T
means that the modification time will be set to the transfer time,
which happens when a file/symlink/device is updated without --times
and when a symlink is changed and the receiver can’t set its time.
(Note: when using an rsync 3.0.0 client, you might see the s flag
combined with t instead of the proper T flag for this time-setting
failure.)
A p means the permissions are different and are being updated to the sender’s value (requires --perms).
An o means the owner is different and is being updated to the sender’s value (requires --owner and super-user privileges).
A g means the group is different and is being updated to the sender’s value (requires --group and the authority to set the group).
The u slot is reserved for future use.
The a means that the ACL information changed.
The x means that the extended attribute information changed.
One other output is possible: when deleting files, the "%i" will
output the string "*deleting" for each item that is being removed
(assuming that you are talking to a recent enough rsync that it logs
deletions instead of outputting them as a verbose message).
Some time back, I needed to understand the rsync output for a script that I was writing. During the process of writing that script I googled around and came across what #mit had written above. I used that information, as well as documentation from other sources, to create my own primer on the bit flags and how to get rsync to output bit flags for all actions (it does not do this by default).
I am posting that information here in hopes that it helps others who (like me) stumble upon this page via search and need a better explanation of rsync.
With the combination of the --itemize-changes flag and the -vvv flag, rsync gives us detailed output of all file system changes that were identified in the source directory when compared to the target directory. The bit flags produced by rsync can then be decoded to determine what changed. To decode each bit's meaning, use the following table.
Explanation of each bit position and value in rsync's output:
YXcstpoguax path/to/file
|||||||||||
||||||||||╰- x: The extended attribute information changed
|||||||||╰-- a: The ACL information changed
||||||||╰--- u: The u slot is reserved for future use
|||||||╰---- g: Group is different
||||||╰----- o: Owner is different
|||||╰------ p: Permissions are different
||||╰------- t: Modification time is different
|||╰-------- s: Size is different
||╰--------- c: Different checksum (for regular files), or
|| changed value (for symlinks, devices, and special files)
|╰---------- the file type:
| f: for a file,
| d: for a directory,
| L: for a symlink,
| D: for a device,
| S: for a special file (e.g. named sockets and fifos)
╰----------- the type of update being done:
<: file is being transferred to the remote host (sent)
>: file is being transferred to the local host (received)
c: local change/creation for the item, such as:
- the creation of a directory
- the changing of a symlink,
- etc.
h: the item is a hard link to another item (requires
--hard-links).
.: the item is not being updated (though it might have
attributes that are being modified)
*: means that the rest of the itemized-output area contains
a message (e.g. "deleting")
Some example output from rsync for various scenarios:
>f+++++++++ some/dir/new-file.txt
.f....og..x some/dir/existing-file-with-changed-owner-and-group.txt
.f........x some/dir/existing-file-with-changed-unnamed-attribute.txt
>f...p....x some/dir/existing-file-with-changed-permissions.txt
>f..t..g..x some/dir/existing-file-with-changed-time-and-group.txt
>f.s......x some/dir/existing-file-with-changed-size.txt
>f.st.....x some/dir/existing-file-with-changed-size-and-time-stamp.txt
cd+++++++++ some/dir/new-directory/
.d....og... some/dir/existing-directory-with-changed-owner-and-group/
.d..t...... some/dir/existing-directory-with-different-time-stamp/
Capturing rsync's output (focused on the bit flags):
In my experimentation, both the --itemize-changes flag and the -vvv flag are needed to get rsync to output an entry for all file system changes. Without the triple verbose (-vvv) flag, I was not seeing directory, link and device changes listed. It is worth experimenting with your version of rsync to make sure that it is observing and noting all that you expected.
One handy use of this technique is to add the --dry-run flag to the command and collect the change list, as determined by rsync, into a variable (without making any changes) so you can do some processing on the list yourself. Something like the following would capture the output in a variable:
file_system_changes=$(rsync --archive --acls --xattrs \
--checksum --dry-run \
--itemize-changes -vvv \
"/some/source-path/" \
"/some/destination-path/" \
| grep -E '^(\.|>|<|c|h|\*).......... .')
In the example above, the (stdout) output from rsync is redirected to grep (via stdin) so we can isolate only the lines that contain bit flags.
Processing the captured output:
The contents of the variable can then be logged for later use or immediately iterated over for items of interest. I use this exact tactic in the script I wrote during researching more about rsync. You can look at the script (https://github.com/jmmitchell/movestough) for examples of post-processing the captured output to isolate new files, duplicate files (same name, same contents), file collisions (same name, different contents), as well as the changes in subdirectory structures.
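As a small, hedged illustration of that post-processing (the flag layout is the one documented above, and file_system_changes is the variable from the capture example; this is bash):
# React to a few interesting cases in the captured itemized output
while read -r flags path; do
    case "$flags" in
        ">f+++++++++") echo "new file:        $path" ;;
        ">f"*)         echo "changed file:    $path" ;;
        "cd+++++++++") echo "new directory:   $path" ;;
        ".d"*)         echo "directory attrs: $path" ;;
        "*deleting")   echo "being deleted:   $path" ;;
    esac
done <<< "$file_system_changes"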
1.) It will "restart the sync", but it will not transfer files that are the same size and timestamp etc. It first builds up a list of files to transfer and during this stage it will see that it has already transferred some files and will skip them. You should tell rsync to preserve the timestamps etc. (e.g. using rsync -a ...)
While rsync is transferring a file, it will call it something like .filename.XYZABC instead of filename. Then when it has finished transferring that file it will rename it. So, if you kill rsync while it is transferring a large file, you will have to use the --partial option to continue the transfer instead of starting from scratch.
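For example (the paths are placeholders), a resumable invocation might look like:
rsync -a --partial --progress /source/dir/ user@server:/backup/dir/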
2.) I don't know what that is. Can you paste some examples?
EDIT: As per http://ubuntuforums.org/showthread.php?t=1342171 those codes are defined in the rsync man page in the section for the -i, --itemize-changes option.
Fixed part of my answer based on Joao's answer.