Recursive rsync over ssh, include only one file extension - ssh

I'm trying to rsync files over ssh from a server to my machine. The files are in various subdirectories, but I only want to keep the ones that match a certain pattern (e.g. blah.txt). I have done extensive googling and searching on Stack Overflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all the files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.

I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and I try `cat myfile.txt`, it no longer works! This makes no sense to me.
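(A likely explanation, though I can't verify it on this exact setup: if the saved command is executed via command substitution, e.g. with backticks around cat myfile.txt, the shell word-splits the result but never performs quote removal, so rsync receives the patterns with literal quote characters in them and the include/exclude rules no longer match. Running the file as a script behaves the same as typing the command directly:
sh myfile.txt)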

OSX follows BSD-style rsync:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
This is a useful shorthand for excluding a broad range of files
that you often don't want to transfer between systems. It uses a
similar algorithm to CVS to determine if a file should be
ignored.
The exclude list is initialized to exclude the following items
(these initial items are marked as perishable -- see the FILTER
RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
then, files listed in a $HOME/.cvsignore are added to the list
and any files listed in the CVSIGNORE environment variable (all
cvsignore names are delimited by whitespace).
Finally, any file is ignored if it is in the same directory as a
.cvsignore file and matches one of the patterns listed therein.
Unlike rsync's filter/exclude files, these patterns are split on
whitespace. See the cvs(1) manual for more information.
If you're combining -C with your own --filter rules, you should
note that these CVS excludes are appended at the end of your own
rules, regardless of where the -C was placed on the command-line.
This makes them a lower priority than any rules you specified
explicitly. If you want to control where these CVS excludes get
inserted into your filter rules, you should omit the -C as a
command-line option and use a combination of --filter=:C and
--filter=-C (either on your command-line or by putting the ":C"
and "-C" rules into a filter file with your other rules). The
first option turns on the per-directory scanning for the
.cvsignore file. The second option does a one-time import of the
CVS excludes mentioned above.
-f, --filter=RULE
This option allows you to add rules to selectively exclude certain
files from the list of files to be transferred. This is most useful
in combination with a recursive transfer.
You may use as many --filter options on the command line as you
like to build up the list of files to exclude. If the filter
contains whitespace, be sure to quote it so that the shell gives
the rule to rsync as a single argument. The text below also
mentions that you can use an underscore to replace the space
that separates a rule from its arg.
See the FILTER RULES section for detailed information on this
option.
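If the stock OSX rsync keeps misbehaving with --include/--exclude, the equivalent --filter spelling of the working command above may be worth a try (a sketch only; I have not tested it on rsync 2.6.9, and the paths are the same placeholders as before):
rsync -vre 'ssh' --prune-empty-dirs --filter='+ */' --filter='+ *blah*.txt' --filter='- *' user@server.com:/path/to/server/files /path/to/local/files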


Is there a way to move a file from one branch to another in ClearCase?

A user checked new files in on the wrong branch. I would like to move them in the most efficient way; there are a lot of them. My first thought is to remove the elements from the branch and have the user re-check-in the files on the proper branch. But I was hoping there was a way I could change the pointers?
/VOB/DIRECTORY/file##/main/1.00/1 to /VOB/DIRECTORY/file##/main/2.00/1
Whenever there are a lot of files to checkout and move, clearfsimport is a viable option.
Simply set a view to the destination branch, and import the files found in the source (and wrong) view.
See "How can I use ClearCase to “add to source control …” recursively?"
That will checkout, add, modify or remove files in the destination view in order to mirror the ones from the source (here the source is a ClearCase view, but it could actually be any folder, ClearCase view or not, where the files are).
That will be enough to "recheck in the files on the proper branch", but it won't remove the versions from the wrong branch, and I would advise against using cleartool rmver (even though I used it here).
Perhaps a subtractive merge is better.
If you know where they are, and where you want them, you could:
1) Merge the directory and files over.
2) Use cleartool ln in a view in the destination branch to link in the files, and then merge the files individually.
If you use clearfsimport, and don't purge the added-in-the-wrong-place files, you can set yourself up for down-the-road "fun" caused by "evil twins."
Personally, since you know the files and directories that got added, where, when and by whom, you could do something like this (the command lines are off the top of my head):
Get the list of files to copy/merge
cleartool find -type d -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > dirlist.txt
cleartool find -type fl -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > filelist.txt
Pull the directories over by merging the parent directories while CD'd/set in a view using the destination path. Not knowing the OS involved I can't say which way you would need to parse this. If you use perl, you can grab the offset of the last instance of the directory separator and use that in substr to get the parent directory path. In the windows command prompt, you can do something like this:
SET SRCDRIVE=D:
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px %SRCDRIVE%~px
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px\%~nx %SRCDRIVE%~px\%~nx
Yes, you can do all that in a single script, do better error checking, and avoid trying 40 times to check out the same directory.
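On a Unix-like host, a rough translation of those loops might look like this (a sketch only, untested; SRCVIEW is a hypothetical path to the source view, and dirname/basename stand in for the Perl substr parsing mentioned above):
SRCVIEW=/view/wrong_branch_view
# merge the parent directories first so the new elements appear in the destination view
while read -r p; do
  d=$(dirname "$p")
  cleartool co -nc "$d" && cleartool merge -to "$d" "$SRCVIEW$d"
done < dirlist.txt
# then merge the individual files
while read -r p; do
  d=$(dirname "$p"); n=$(basename "$p")
  cleartool co -nc "$d" && cleartool merge -to "$d/$n" "$SRCVIEW$d/$n"
done < filelist.txt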
You might also be able to merge them to the 2.0 branch (using a view selecting the 2.0 branch). To identify the elements involved, you can run a 'cleartool find' command something like this:
% cd /vobs/myvob
% cleartool find -all -version 'brtype(1.0) && created_by(user_x)' -print
The 'created_since(date-time)' query might also be useful in the compound query.
Once you're convinced you have the right set of versions, you can use '-exec' in place of the '-print' to actually perform the merge. It might look something like this:
% cleartool find -all -version 'brtype(1.0) && created_by(user_x) && created_since(29-Jun)' -exec 'cleartool merge -to $CLEARCASE_PN -version $CLEARCASE_ID_STR'
If you're happy with the results, check everything in. Then you just have to decide if you need to remove the versions on the 1.0 branch (which you can do with another 'cleartool find ... -exec ...' command).

Accurev: How to keep/promote with a multi-line comment from the command line?

How to keep/promote with a multi-line comment from the accurev command line?
For example if I try:
accurev stat -n -fl | xargs accurev keep -c "$(git log 1234..4311)"
I simply get the error:
You can not use non-printable characters on the command line: # On
branch master\x0a... AccuRev was unable to understand your command.
I can of course strip out the new lines but then the comment is not really useful.
For AccuRev commands that take a -c option, the comment must currently be enclosed in quotes and contain no line breaks.
As for the output from git log 1234..4311, that could be captured in a manifest file and kept with the other files.
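(A sketch of that idea, with hypothetical file names and assuming add accepts the usual -c comment flag; I have not tested it:
git log 1234..4311 > manifest.txt
accurev add -c "commit manifest" manifest.txt)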
Dave
I'm not sure about doing it directly from the command-line without any extra step, and I'm hesitant to try anything on my client's AccuRev setup. That said, according to the entry on accurev keep from the CLI manual:
-c <comment>
Specify a comment for the transaction. The next command-line argument should be
a quoted string. Alternatively, the next argument can be in the form
#<comment-file>, which uses the contents of text-file <comment-file> as the
comment.
Default: enter a comment interactively, using the text editor named in
environment variable EDITOR (or a system-dependent default editor).
Reading this, I see two ways you can do what you want from the command line (meaning, not using the GUI).
1.) Pipe or cat your comment text into a file, then use the #file syntax to get it into your commit (a sketch follows below).
2.) Get your comment text into your clipboard, then don't give an argument to the keep command, let your editor open up, paste, save, and close.
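A sketch of option 1 (the file name is arbitrary; the #<comment-file> argument is quoted so the shell doesn't treat # as a comment):
git log 1234..4311 > comment.txt
accurev stat -n -fl | xargs accurev keep -c "#comment.txt"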
There may be a way to get this all done via CLI without these middle-steps (perhaps you need to format the \x0a into \r\n or something?), but as I said, I'm unwilling to try it on my AccuRev setup as AccuRev gives me (and everyone else) enough trouble as it is.
HTH

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory for every run or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i. e. remove recursively) since it breaks if the directory doesn't exist. For a moment I thought I may use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existing directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. Anyway, I'm using Pig 0.8.1 out of necessity and there is no such parameter.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files), and I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration; placing it after SETs, REGISTERs and %defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete a file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during the
-- script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out
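If the underlying Hadoop is new enough to support the -f flag, adding it should also keep the command from failing when the path does not exist (untested on the Hadoop that ships with Pig 0.8.1):
fs -rm -r -f out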

Git - how do I view the change history of a method/function?

So I found the question about how to view the change history of a file, but the change history of this particular file is huge and I'm really only interested in the changes of a particular method. So would it be possible to see the change history for just that particular method?
I know this would require git to analyze the code and that the analysis would be different for different languages, but method/function declarations look very similar in most languages, so I thought maybe someone has implemented this feature.
The language I'm currently working with is Objective-C and the SCM I'm currently using is git, but I would be interested to know if this feature exists for any SCM/language.
Recent versions of git log learned a special form of the -L parameter:
-L :<funcname>:<file>
Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments. You can specify this option more than once.
...
If “:<funcname>” is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. “:<funcname>” searches from the end of the previous -L range, if any, otherwise from the start of file. “^:<funcname>” searches from the start of file.
In other words: if you ask Git to git log -L :myfunction:path/to/myfile.c, it will now happily print the change history of that function.
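Since the question mentions Objective-C: if -L :<funcname> cannot locate your method boundaries, you may first need to map the file type to a hunk-header pattern via gitattributes (the method and file names below are made up):
echo '*.m diff=objc' >> .gitattributes
git log -L :updateView:Classes/MyViewController.m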
Using git gui blame is hard in scripts, and while git log -G and git log -S (the "pickaxe" search) can each show you when the method definition appeared or disappeared, I haven't found any way to make them list all changes made to the body of your method.
However, you can use gitattributes and the textconv property to piece together a solution that does just that. Although these features were originally intended to help you work with binary files, they work just as well here.
The key is to have Git remove from the file all lines except the ones you're interested in before doing any diff operations. Then git log, git diff, etc. will see only the area you're interested in.
Here's the outline of what I do in another language; you can tweak it for your own needs.
Write a short shell script (or other program) that takes one argument -- the name of a source file -- and outputs only the interesting part of that file (or nothing if none of it is interesting). For example, you might use sed as follows:
#!/bin/sh
sed -n -e '/^int my_func(/,/^}/ p' "$1"
Define a Git textconv filter for your new script. (See the gitattributes man page for more details.) The name of the filter and the location of the command can be anything you like.
$ git config diff.my_filter.textconv /path/to/my_script
Tell Git to use that filter before calculating diffs for the file in question.
$ echo "my_file diff=my_filter" >> .gitattributes
Now, if you use -G. (note the .) to list all the commits that produce visible changes when your filter is applied, you will have exactly those commits that you're interested in. Any other options that use Git's diff routines, such as --patch, will also get this restricted view.
$ git log -G. --patch my_file
Voilà!
One useful improvement you might want to make is to have your filter script take a method name as its first argument (and the file as its second). This lets you specify a new method of interest just by calling git config, rather than having to edit your script. For example, you might say:
$ git config diff.my_filter.textconv "/path/to/my_command other_func"
Of course, the filter script can do whatever you like, take more arguments, or whatever: there's a lot of flexibility beyond what I've shown here.
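For instance, the sed script above could be parameterised along these lines (a sketch; Git appends the path of the file to convert after the arguments you configured, so the method name arrives as $1 and the file as $2):
#!/bin/sh
# $1 = function name passed via git config, $2 = file Git wants converted
sed -n -e "/^int $1(/,/^}/ p" "$2"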
The closest thing you can do is to determine the position of your function in the file (e.g. say your function i_am_buggy is at lines 241-263 of foo/bar.c), then run something to the effect of:
git log -p -L 200,300:foo/bar.c
This will open less (or an equivalent pager). Now you can type in /i_am_buggy (or your pager equivalent) and start stepping through the changes.
This might even work, depending on your code style:
git log -p -L /int i_am_buggy\(/,+30:foo/bar.c
This limits the search from the first hit of that regex (ideally your function declaration) to thirty lines after that. The end argument can also be a regexp, although detecting that with regexps is an iffier proposition.
git log has a '-G' option that can be used to find all differences.
-G Look for differences whose added or removed line matches the
given <regex>.
Just give it a proper regex of the function name you care about. For example,
$ git log --oneline -G'^int commit_tree'
40d52ff make commit_tree a library function
81b50f3 Move 'builtin-*' into a 'builtin/' subdirectory
7b9c0a6 git-commit-tree: make it usable from other builtins
The correct way is to use git log -L :function:path/to/file as explained in eckes's answer.
But in addition, if your function is very long, you may want to see only the changes each commit introduced, not the whole body of the function (including unmodified lines) repeated for every commit that perhaps touched only one of those lines, the way a normal diff does.
Normally git log can show differences with -p, but this does not work with -L.
So you have to grep the output of git log -L to show only the lines involved, plus the commit/file headers that give them context. The trick is to match only terminal-coloured lines (adding the --color switch) with a regex. Finally:
git log -L :function:path/to/file --color | grep --color=never -E -e "^(^[\[[0-9;]*[a-zA-Z])+" -3
Note that ^[ should be actual, literal ^[. You can type them by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [. Reference here.
The final -3 switch prints 3 lines of output context before and after each matched line. You may want to adjust it to your needs.
Show function history with git log -L :<funcname>:<file> as shown in eckes's answer and the git docs.
If it shows nothing, refer to Defining a custom hunk-header to add something like *.java diff=java to the .gitattributes file to support your language.
Show function history between commits with git log commit1..commit2 -L :functionName:filePath
Show overloaded function history (there may be many functions with the same name but different parameters) with git log -L :sum\(double:filepath
git blame shows you who last changed each line of the file; you can specify the lines to examine so as to avoid getting the history of lines outside your function.
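For example, reusing the hypothetical line range from above:
git blame -L 241,263 foo/bar.c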

What does f+++++++++ mean in rsync logs?

I'm using rsync to make a backup of my server files, and I have two questions:
In the middle of the process I need to stop and start rsync again.
Will rsync start from the point where it stopped or it will restart from the beginning?
In the log files I see "f+++++++++". What does it mean?
e.g.:
2010/12/21 08:28:37 [4537] >f.st...... iddd/logs/website-production-access_log
2010/12/21 08:29:11 [4537] >f.st...... iddd/web/website/production/shared/log/production.log
2010/12/21 08:29:14 [4537] .d..t...... iddd/web/website/production/shared/sessions/
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.017a771cc19b18cd
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.01eade9d317ca79a
Let's take a look at how rsync works and better understand the cryptic result lines:
1 - A huge advantage of rsync is that after an interruption it continues smoothly the next time.
The next rsync invocation will not transfer again the files it had already transferred, provided they were not changed in the meantime. But it will check all the files again from the beginning to find that out, as it is not aware that it was interrupted.
2 - Each character is a code that can be translated if you read the section for -i, --itemize-changes in man rsync
Decoding your example log file from the question:
>f.st......
> - the item is received
f - it is a regular file
s - the file size is different
t - the time stamp is different
.d..t......
. - the item is not being updated (though it might have attributes
that are being modified)
d - it is a directory
t - the time stamp is different
>f+++++++++
> - the item is received
f - a regular file
+++++++++ - this is a newly created item
The relevant part of the rsync man page:
-i, --itemize-changes
Requests a simple itemized list of the changes that are being made to
each file, including attribute changes. This is exactly the same as
specifying --out-format='%i %n%L'. If you repeat the option, unchanged
files will also be output, but only if the receiving rsync is at least
version 2.6.7 (you can use -vv with older versions of rsync, but that
also turns on the output of other verbose messages).
The "%i" escape has a cryptic output that is 11 letters long. The
general format is like the string YXcstpoguax, where Y is replaced by
the type of update being done, X is replaced by the file-type, and the
other letters represent attributes that may be output if they are
being modified.
The update types that replace the Y are as follows:
A < means that a file is being transferred to the remote host (sent).
A > means that a file is being transferred to the local host (received).
A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink,
etc.).
A h means that the item is a hard link to another item (requires --hard-links).
A . means that the item is not being updated (though it might have attributes that are being modified).
A * means that the rest of the itemized-output area contains a message (e.g. "deleting").
The file-types that replace the X are: f for a file, a d for a
directory, an L for a symlink, a D for a device, and a S for a
special file (e.g. named sockets and fifos).
The other letters in the string above are the actual letters that will
be output if the associated attribute for the item is being updated or
a "." for no change. Three exceptions to this are: (1) a newly created
item replaces each letter with a "+", (2) an identical item replaces
the dots with spaces, and (3) an unknown attribute replaces each
letter with a "?" (this can happen when talking to an older rsync).
The attribute that is associated with each letter is as follows:
A c means either that a regular file has a different checksum (requires --checksum) or that a symlink, device, or special file has a
changed value. Note that if you are sending files to an rsync prior to
3.0.1, this change flag will be present only for checksum-differing regular files.
A s means the size of a regular file is different and will be updated by the file transfer.
A t means the modification time is different and is being updated to the sender’s value (requires --times). An alternate value of T
means that the modification time will be set to the transfer time,
which happens when a file/symlink/device is updated without --times
and when a symlink is changed and the receiver can’t set its time.
(Note: when using an rsync 3.0.0 client, you might see the s flag
combined with t instead of the proper T flag for this time-setting
failure.)
A p means the permissions are different and are being updated to the sender’s value (requires --perms).
An o means the owner is different and is being updated to the sender’s value (requires --owner and super-user privileges).
A g means the group is different and is being updated to the sender’s value (requires --group and the authority to set the group).
The u slot is reserved for future use.
The a means that the ACL information changed.
The x means that the extended attribute information changed.
One other output is possible: when deleting files, the "%i" will
output the string "*deleting" for each item that is being removed
(assuming that you are talking to a recent enough rsync that it logs
deletions instead of outputting them as a verbose message).
Some time back, I needed to understand the rsync output for a script that I was writing. During the process of writing that script I googled around and came to what @mit had written above. I used that information, as well as documentation from other sources, to create my own primer on the bit flags and how to get rsync to output bit flags for all actions (it does not do this by default).
I am posting that information here in hopes that it helps others who (like me) stumble up on this page via search and need a better explanation of rsync.
With the combination of the --itemize-changes flag and the -vvv flag, rsync gives us detailed output of all file system changes that were identified in the source directory when compared to the target directory. The bit flags produced by rsync can then be decoded to determine what changed. To decode each bit's meaning, use the following table.
Explanation of each bit position and value in rsync's output:
YXcstpoguax path/to/file
|||||||||||
||||||||||╰- x: The extended attribute information changed
|||||||||╰-- a: The ACL information changed
||||||||╰--- u: The u slot is reserved for future use
|||||||╰---- g: Group is different
||||||╰----- o: Owner is different
|||||╰------ p: Permissions are different
||||╰------- t: Modification time is different
|||╰-------- s: Size is different
||╰--------- c: Different checksum (for regular files), or
|| changed value (for symlinks, devices, and special files)
|╰---------- the file type:
| f: for a file,
| d: for a directory,
| L: for a symlink,
| D: for a device,
| S: for a special file (e.g. named sockets and fifos)
╰----------- the type of update being done:
<: file is being transferred to the remote host (sent)
>: file is being transferred to the local host (received)
c: local change/creation for the item, such as:
- the creation of a directory
- the changing of a symlink,
- etc.
h: the item is a hard link to another item (requires
--hard-links).
.: the item is not being updated (though it might have
attributes that are being modified)
*: means that the rest of the itemized-output area contains
a message (e.g. "deleting")
Some example output from rsync for various scenarios:
>f+++++++++ some/dir/new-file.txt
.f....og..x some/dir/existing-file-with-changed-owner-and-group.txt
.f........x some/dir/existing-file-with-changed-unnamed-attribute.txt
>f...p....x some/dir/existing-file-with-changed-permissions.txt
>f..t..g..x some/dir/existing-file-with-changed-time-and-group.txt
>f.s......x some/dir/existing-file-with-changed-size.txt
>f.st.....x some/dir/existing-file-with-changed-size-and-time-stamp.txt
cd+++++++++ some/dir/new-directory/
.d....og... some/dir/existing-directory-with-changed-owner-and-group/
.d..t...... some/dir/existing-directory-with-different-time-stamp/
Capturing rsync's output (focused on the bit flags):
In my experimentation, both the --itemize-changes flag and the -vvv flag are needed to get rsync to output an entry for all file system changes. Without the triple verbose (-vvv) flag, I was not seeing directory, link and device changes listed. It is worth experimenting with your version of rsync to make sure that it is observing and noting all that you expected.
One handy use of this technique is to add the --dry-run flag to the command and collect the change list, as determined by rsync, into a variable (without making any changes) so you can do some processing on the list yourself. Something like the following would capture the output in a variable:
file_system_changes=$(rsync --archive --acls --xattrs \
--checksum --dry-run \
--itemize-changes -vvv \
"/some/source-path/" \
"/some/destination-path/" \
| grep -E '^(\.|>|<|c|h|\*).......... .')
In the example above, the (stdout) output from rsync is redirected to grep (via stdin) so we can isolate only the lines that contain bit flags.
Processing the captured output:
The contents of the variable can then be logged for later use or immediately iterated over for items of interest. I use this exact tactic in the script I wrote during researching more about rsync. You can look at the script (https://github.com/jmmitchell/movestough) for examples of post-processing the captured output to isolate new files, duplicate files (same name, same contents), file collisions (same name, different contents), as well as the changes in subdirectory structures.
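As a rough illustration (this loop is mine, not taken from that script), each captured line can be split into its itemize flags and path like so:
while IFS= read -r line; do
  flags=${line%% *}    # the 11-character itemize string
  path=${line#* }      # the path that follows it
  [ "$flags" = ">f+++++++++" ] && echo "new file: $path"
done <<< "$file_system_changes"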
1.) It will "restart the sync", but it will not transfer files that are the same size and timestamp etc. It first builds up a list of files to transfer and during this stage it will see that it has already transferred some files and will skip them. You should tell rsync to preserve the timestamps etc. (e.g. using rsync -a ...)
While rsync is transferring a file, it will call it something like .filename.XYZABC instead of filename. Then when it has finished transferring that file it will rename it. So, if you kill rsync while it is transferring a large file, you will have to use the --partial option to continue the transfer instead of starting from scratch.
2.) I don't know what that is. Can you paste some examples?
EDIT: As per http://ubuntuforums.org/showthread.php?t=1342171 those codes are defined in the rsync man page, in the section for the -i, --itemize-changes option.
Fixed part of my answer based on Joao's.