Is there a way to move a file from one branch to another in ClearCase?

A user checked in new files on the wrong branch. I would like to move them in the most efficient way possible, as there are a lot of them. My first thought is to remove the elements from the branch and have the user recheck in the files on the proper branch. But I was hoping there was a way I could change the pointers, for example:
/VOB/DIRECTORY/file@@/main/1.00/1 to /VOB/DIRECTORY/file@@/main/2.00/1

Whenever there are a lot of files to check out and move, clearfsimport is a viable option.
Simply set a view to the destination branch, and import the files found in the source (and wrong) view.
See "How can I use ClearCase to “add to source control …” recursively?"
That will check out, add, modify or remove files in the destination view in order to mirror the ones from the source (here the source is a ClearCase view, but it could actually be any folder, ClearCase view or not, where the files are).
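For example, a minimal sketch, assuming /view/wrong_view selects the wrong branch and /view/dest_view selects the destination branch (all paths here are illustrative):
clearfsimport -preview -recurse /view/wrong_view/VOB/DIRECTORY /view/dest_view/VOB
clearfsimport -recurse -nsetevent /view/wrong_view/VOB/DIRECTORY /view/dest_view/VOB
-preview shows what would be done without importing anything; -nsetevent records the import time/user instead of the source file's, which is typically required when you are not the VOB owner.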
That will be enough to "recheck in the files on the proper branch", but it won't remove the versions from the wrong branch, and I would advise against using cleartool rmver (even though I used it here).
Perhaps a subtractive merge is better.
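A minimal sketch of such a subtractive merge, assuming the unwanted version of a file is /main/1.00/1 (the path is illustrative):
cleartool co -nc /VOB/DIRECTORY/file
cleartool merge -to /VOB/DIRECTORY/file -delete -version /main/1.00/1
cleartool ci -nc /VOB/DIRECTORY/file
That creates a new version on the wrong branch that cancels the bad check-in, without destroying history the way rmver does.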

If you know where they are, and where you want them, you could:
1) Merge the directory and files over.
2) Use cleartool ln in a view in the destination branch to link in the files, and then merge the files individually.
If you use clearfsimport, and don't purge the added-in-the-wrong-place files, you can set yourself up for down-the-road "fun" caused by "evil twins."
Personally, since you know the files and directories that got added, where, when, and by whom, you could do something like this (command lines are off the top of my head):
Get the list of files to copy/merge
cleartool find -type d -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > dirlist.txt
cleartool find -type fl -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > filelist.txt
Pull the directories over by merging the parent directories while cd'd/set into a view using the destination path. Not knowing the OS involved, I can't say exactly how you would need to parse this. If you use Perl, you can grab the offset of the last instance of the directory separator and use that in substr to get the parent directory path. In the Windows command prompt, you can do something like this:
SET SRCDRIVE=D:
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px %SRCDRIVE%~px
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px\%~nx %SRCDRIVE%~px\%~nx
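The Perl parsing mentioned above might look like this (a sketch for Unix-style paths; it prints the parent directory of each path in filelist.txt):
perl -ne 'chomp; my $i = rindex($_, "/"); print substr($_, 0, $i), "\n";' filelist.txt > parentdirs.txt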
Yes, you can do all of that in a single script, with better error checking and without trying 40 times to check out the same directory.

You might also be able to merge them to the 2.0 branch (using a view selecting the 2.0 branch). To identify the elements involved, you can run a 'cleartool find' command something like this:
% cd /vobs/myvob
% cleartool find -all -version 'brtype(1.0) && created_by(user_x)' -print
The 'created_since(date-time)' query might also be useful in the compound query.
Once you're convinced you have the right set of versions, you can use '-exec' in place of the '-print' to actually perform the merge. It might look something like this:
% cleartool find -all -version 'brtype(1.0) && created_by(user_x) && created_since(29-Jun)' -exec 'cleartool merge -to $CLEARCASE_PN -version $CLEARCASE_ID_STR'
If you're happy with the results, check everything in. Then you just have to decide if you need to remove the versions on the 1.0 branch (which you can do with another 'cleartool find ... -exec ...' command).
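For example, a sketch of that removal (same query as above; note that rmver permanently destroys the versions, so consider a subtractive merge instead):
% cleartool find -all -version 'brtype(1.0) && created_by(user_x) && created_since(29-Jun)' -exec 'cleartool rmver -force $CLEARCASE_XPN'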

Related

ClearCase: How do I find which version I branched off from?

For a given file, say I branched off from /main/2 and do my development in that branch, newBranch. cleartool diff -pred would compare my loaded version (/main/newBranch/LATEST) to /main/2. Which ClearCase command could I pass either the file name or /main/newBranch to, and have it return /main/2?
I'm just trying to find which version -pred selects, but I can't find out how anywhere!
FOLLOW UP:
Say I checked the file in and now I'm in version /main/newBranch/3. How can I still compare it to where it was branched off from (/main/2)?
Which ClearCase command could I pass either the file name or /main/newBranch to, and have it return /main/2?
You can try using cleartool lsvtree, which will list all versions of a file.
cleartool lsvtree myFile | grep main | head -1
As noted, cleartool describe is easier.
How can I still compare it to where it was branched off from (/main/2)?
You can use the /main/newBranch/0, 0 being the placeholder version created for each new branch, here identical to /main/2, using cleartool diff:
cleartool diff yourFile yourFile@@/main/newBranch/0
For my purposes, cleartool describe -short -pred <version-0-of-child-branch> did the trick. For this example, that would be cleartool describe -short -pred yourFile@@/main/newBranch/0.

Recursive rsync over ssh, include only one file extension

I'm trying to rsync files over ssh from a server to my machine. The files are in various subdirectories, but I only want to keep the ones that match a certain pattern (e.g. blah.txt). I have done extensive googling and searching on Stack Overflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.
I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user#server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and try `cat myfile.txt`, it no longer works! This makes no sense to me.
OS X ships a BSD-style rsync:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
This is a useful shorthand for excluding a broad range of files
that you often don't want to transfer between systems. It uses a
similar algorithm to CVS to determine if a file should be
ignored.
The exclude list is initialized to exclude the following items
(these initial items are marked as perishable -- see the FILTER
RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
then, files listed in a $HOME/.cvsignore are added to the list
and any files listed in the CVSIGNORE environment variable (all
cvsignore names are delimited by whitespace).
Finally, any file is ignored if it is in the same directory as a
.cvsignore file and matches one of the patterns listed therein.
Unlike rsync's filter/exclude files, these patterns are split on
whitespace. See the cvs(1) manual for more information.
If you're combining -C with your own --filter rules, you should
note that these CVS excludes are appended at the end of your own
rules, regardless of where the -C was placed on the command-line.
This makes them a lower priority than any rules you specified
explicitly. If you want to control where these CVS excludes get
inserted into your filter rules, you should omit the -C as a
command-line option and use a combination of --filter=:C and
--filter=-C (either on your command-line or by putting the ":C"
and "-C" rules into a filter file with your other rules). The
first option turns on the per-directory scanning for the
.cvsignore file. The second option does a one-time import of the
CVS excludes mentioned above.
-f, --filter=RULE
This option allows you to add rules to selectively exclude
certain files from the list of files to be transferred. This is
most useful in combination with a recursive transfer.
You may use as many --filter options on the command line as you
like to build up the list of files to exclude. If the filter
contains whitespace, be sure to quote it so that the shell gives
the rule to rsync as a single argument. The text below also
mentions that you can use an underscore to replace the space
that separates a rule from its arg.
See the FILTER RULES section for detailed information on this
option.
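Putting those two options to work: the include/exclude chain from the working command above can also be expressed as filter rules (a sketch reusing the same hypothetical paths):
rsync -vre 'ssh' --prune-empty-dirs --filter='+ */' --filter='+ *blah*.txt' --filter='- *' user@server.com:/path/to/server/files /path/to/local/files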

Git - how do I view the change history of a method/function?

So I found the question about how to view the change history of a file, but the change history of this particular file is huge and I'm really only interested in the changes of a particular method. So would it be possible to see the change history for just that particular method?
I know this would require git to analyze the code and that the analysis would be different for different languages, but method/function declarations look very similar in most languages, so I thought maybe someone has implemented this feature.
The language I'm currently working with is Objective-C and the SCM I'm currently using is git, but I would be interested to know if this feature exists for any SCM/language.
Recent versions of git log learned a special form of the -L parameter:
-L :<funcname>:<file>
Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments. You can specify this option more than once.
...
If “:<funcname>” is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. “:<funcname>” searches from the end of the previous -L range, if any, otherwise from the start of file. “^:<funcname>” searches from the start of file.
In other words: if you ask Git to git log -L :myfunction:path/to/myfile.c, it will now happily print the change history of that function.
Using git gui blame is hard to make use of in scripts, and whilst git log -G and the "pickaxe" search (git log -S) can each show you when the method definition appeared or disappeared, I haven't found any way to make them list all changes made to the body of your method.
However, you can use gitattributes and the textconv property to piece together a solution that does just that. Although these features were originally intended to help you work with binary files, they work just as well here.
The key is to have Git remove from the file all lines except the ones you're interested in before doing any diff operations. Then git log, git diff, etc. will see only the area you're interested in.
Here's the outline of what I do in another language; you can tweak it for your own needs.
Write a short shell script (or other program) that takes one argument -- the name of a source file -- and outputs only the interesting part of that file (or nothing if none of it is interesting). For example, you might use sed as follows:
#!/bin/sh
sed -n -e '/^int my_func(/,/^}/ p' "$1"
Define a Git textconv filter for your new script. (See the gitattributes man page for more details.) The name of the filter and the location of the command can be anything you like.
$ git config diff.my_filter.textconv /path/to/my_script
Tell Git to use that filter before calculating diffs for the file in question.
$ echo "my_file diff=my_filter" >> .gitattributes
Now, if you use -G. (note the .) to list all the commits that produce visible changes when your filter is applied, you will have exactly those commits that you're interested in. Any other options that use Git's diff routines, such as --patch, will also get this restricted view.
$ git log -G. --patch my_file
Voilà!
One useful improvement you might want to make is to have your filter script take a method name as its first argument (and the file as its second). This lets you specify a new method of interest just by calling git config, rather than having to edit your script. For example, you might say:
$ git config diff.my_filter.textconv "/path/to/my_command other_func"
Of course, the filter script can do whatever you like, take more arguments, or whatever: there's a lot of flexibility beyond what I've shown here.
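A sketch of that parameterized script (the C-style declaration pattern mirrors the earlier sed example and is an assumption about your code; Git appends the file name after the arguments you configured, so the method name arrives as $1 and the file as $2):
#!/bin/sh
# $1: method name passed via the textconv config (hypothetical)
# $2: file name appended by Git
sed -n -e "/^int $1(/,/^}/ p" "$2"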
The closest thing you can do is to determine the position of your function in the file (e.g. say your function i_am_buggy is at lines 241-263 of foo/bar.c), then run something to the effect of:
git log -p -L 200,300:foo/bar.c
This will open less (or an equivalent pager). Now you can type in /i_am_buggy (or your pager equivalent) and start stepping through the changes.
This might even work, depending on your code style:
git log -p -L /int i_am_buggy\(/,+30:foo/bar.c
This limits the search to the range from the first hit of that regex (ideally your function declaration) to thirty lines after it. The end argument can also be a regexp, although choosing a reliable end regexp is an iffier proposition.
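For example, a sketch that ends the range at the function's closing brace instead of a fixed offset, assuming the brace sits in the first column:
git log -p -L '/int i_am_buggy(/,/^}/:foo/bar.c'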
git log has a '-G' option that can be used to find all differences.
-G Look for differences whose added or removed line matches the
given <regex>.
Just give it a proper regex of the function name you care about. For example,
$ git log --oneline -G'^int commit_tree'
40d52ff make commit_tree a library function
81b50f3 Move 'builtin-*' into a 'builtin/' subdirectory
7b9c0a6 git-commit-tree: make it usable from other builtins
The correct way is to use git log -L :function:path/to/file, as explained in eckes's answer.
But in addition, if your function is very long, you may want to see only the changes each commit introduced, rather than the whole function body (unmodified lines included) for every commit that may have touched only one of those lines. Like a normal diff does.
Normally git log can show differences with -p, but this does not work with -L.
So you have to grep the output of git log -L to show only the lines involved, plus the commit/file headers to contextualize them. The trick here is to match only the terminal-colored lines, by adding the --color switch, with a regex. Finally:
git log -L :function:path/to/file --color | grep --color=never -E -e "^(^[\[[0-9;]*[a-zA-Z])+" -3
Note that ^[ should be actual, literal ^[. You can type them by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [. Reference here.
The final -3 switch prints 3 lines of context before and after each matched line. You may want to adjust it to your needs.
Show function history with git log -L :<funcname>:<file>, as shown in eckes's answer and the git doc.
If it shows nothing, refer to Defining a custom hunk-header to add something like *.java diff=java to the .gitattributes file to support your language.
Show function history between commits with git log commit1..commit2 -L :functionName:filePath
Show overloaded function history (there may be many functions with the same name but different parameters) with git log -L :sum\(double:filepath
git blame shows you who last changed each line of the file; you can specify the lines to examine so as to avoid getting the history of lines outside your function.
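For example, a sketch reusing the hypothetical line range from above (recent versions of git blame also accept the same -L :<funcname> form as git log):
git blame -L 241,263 foo/bar.c
git blame -L :i_am_buggy foo/bar.c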

Remove unknown files in Bazaar

I have a bunch of unknown files in my Bazaar working tree that I no longer want. I can get a list of them using bzr stat, but I'd like an easy way to get rid of them. (I'd expect an option for bzr revert to do this, but I'm not finding one.)
I can always write a tiny script to parse the output of bzr stat and rm or mv the unknowns, but I thought something might already exist.
I have Bazaar (bzr) 1.13.1.
bzr clean-tree will get rid of all unknown files in a working tree. It also has switches to remove ignored files, merge backups, and other types of unwanted files. See bzr clean-tree --usage for full details.
Edit to add: this is true for Bazaar 2.0.0; I'm not sure about 1.13.
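For example (a sketch; --dry-run previews the deletions without removing anything):
bzr clean-tree --dry-run
bzr clean-tree
Add --ignored or --detritus to also remove ignored files or detritus such as merge backups.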
Made a script:
#!/usr/bin/env ruby
# Move unknown files in a Bazaar repository to the trash.
#
# Author: Benjamin Oakes
require 'fileutils'
TRASH_DIRECTORY = File.expand_path('~/.Trash/')
stdout = %x(bzr stat)
within = false
stdout.each_line do |line|
  if line.match(/^unknown:$/)
    within = true
    next
  elsif line.match(/^[a-z]+:$/i)
    within = false
    next
  end

  if within
    FileUtils.move(line.match(/^\s+(.*?)$/)[1], TRASH_DIRECTORY)
  end
end
I've only tested it a little, but it seems to work just fine. Please let me know if you find an issue via the comments.
On a separate topic, should I learn sed & awk? I tend to write these things using ruby -e "some ruby code".

Using xcopy to copy files from several directories to one directory

Is it possible to use xcopy to copy files from several directories into one directory using only one xcopy command?
Assuming that I have the directory tree
root\Source\Sub1\Sub2
I want to copy all .xml files from the directory root\Source, including subfolders, to root\Destination. I don't want to copy the folder structure, just the files.
As DandDI said, you don't need xcopy; a for statement does the job. However, you don't even need to process the output of a dir command; this command works better:
for /R c:\source %f in (*.xml) do copy "%f" x:\destination\
By the way, when you use it from a batch file, you need to double the % in front of the variable %f, so the command becomes:
for /R c:\source %%f in (*.xml) do copy "%%f" x:\destination\
Note: surround %f with double quotes, otherwise the copy will fail on file names containing spaces.
You don't need xcopy for that.
You can get a listing of all the files you want and perform the copy that way.
For example, in the Windows XP command prompt:
for /f "delims==" %k in ('dir c:\source\*.xml /s /b') do copy "%k" x:\destination\
The /s goes into all subdirectories and the /b lists only the file name and path. Each file in turn is assigned to the %k variable, then the copy command copies the file to the destination. The only trick is making sure the destination is not part of the source.
The answer to this problem, which I think is really "How do I gather all my files out of all the little subdirectories into one single directory?", is to download a piece of software called XXCOPY. It is available via XXCOPY.COM, and fortunately there is a free non-commercial version. One of the Frequently Asked Questions in the help on XXCOPY.COM is effectively "How do I gather all my files into one directory?", and it tells you which switch to use. XXCOPY is a surefire way of doing this. It comes as a .zip archive; if you need an unzipping program, ZipGenius is available through the ZipGenius.it website.
This might not be the exact answer, but it may help anyone who would like to do this without coding.
You can search for the item name inside a specific folder, then copy the results and paste them into your desired folder. Duplicate names will be renamed; I believe the folder becomes the prefix before the repeated name.