How to remove a revision from the main branch in Bazaar?

I have a main branch located on the server (launchpad.net). I want to remove the last revision, which is bad, but I'm not sure how to achieve that. Thanks!

There are two ways to accomplish this goal, either with or without modifying history. Not modifying history is safer (and generally recommended if you're sharing the repository with other people), but creates an ugly "inverse" revision. To accomplish this, do:
bzr merge -r last:1..last:2 .
bzr commit
This will create a new revision that inverts the effects of the previous one via reverse cherrypicking.
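The same reverse-merge idea also works for a revision that is not the most recent one; as a sketch, with placeholder revision numbers:
bzr merge -r 10..9 .
bzr commit -m "Revert revision 10"
Merging from the higher revision back to the lower one applies the inverse of revision 10 only, without touching later revisions (though you may of course get conflicts if later revisions changed the same lines).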
If you're willing to modify history -- which may look cleaner, but may potentially destroy history that your collaborators rely on -- you can overwrite the existing branch. This requires that the branch is NOT configured to only allow appends (i.e. what happens when you create it with bzr init --append-revisions-only).
You can then use bzr push --overwrite to replace an existing branch with your local copy.
To drop a revision, you can thus use bzr uncommit locally to get rid of the bad revision, then push the branch without the bad revision.
However, I'd advise you to be careful if this repository is shared with somebody else. Modifying history is generally dangerous, and worse, you may accidentally remove too much history and lose work. It is a good idea to back up the main branch before you overwrite it.
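A minimal sketch of that history-rewriting route, assuming the branch lives at a placeholder Launchpad URL and that you have a local branch of it in ./trunk:
bzr branch lp:~you/project/trunk trunk-backup    # safety copy of the current main branch
cd trunk
bzr uncommit                                     # drop the bad revision locally; the working tree is kept
bzr push --overwrite lp:~you/project/trunk       # replace the remote branch with the corrected history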
You can use:
bzr config -d <branch> append_revisions_only=True
to protect your branch against accidental overwriting. Similarly, you can use:
bzr config -d <branch> append_revisions_only=False
to allow for overwriting.
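To see what a branch is currently set to (same placeholder), you can query the option without giving it a value -- assuming a Bazaar version whose config command supports this:
bzr config -d <branch> append_revisions_only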
In general, bzr push --overwrite is a dangerous feature that should be used with caution.
Note: I am not that familiar with Launchpad; it may have this feature disabled by default for safety reasons and/or may provide other tools to enable/disable it.

Related

In QTP, can we use only the shared object repository and disable the local repository?

I just started using QTP to test a Java monitor we have. I think it will be cleaner if I put all the objects in the shared repository and do not use the local repository at all. Is there any downside to doing that?
No. Well, it depends.
There is no reason to disable the local repo. There is not even a way to do that! But yes, you can simply avoid using it.
I usually have a rule active in the team that reads "whatever is in the local repo should be gone upon check-in". I even have a check in the library initialization code that looks if there is stuff in the local repository and yields a warning if so.
But whether this is useful depends on the AUT, and on your workflow:
The repo that a test is actually using is the combination of the associated (shared, central) repository, "overlaid" by what is in the local repo. So you can not only add, but also modify the shared repo entries by using the local one. This is a quite powerful feature which allows you to define the "norm" case in the shared, and the exceptional cases in the local repository.
If your AUT has well-defined GUI objects which are easily re-identifiable in various contexts, fine. But if not, this feature comes in handy.
I agree that this "overlaying" mechanism easily leads to "bugs" (or let's say: playback problems) that are hard to track down. Usually, as Murphy suggests, the local repo is the last idea that comes to mind when you are diagnosing strange playback symptoms.
It is, however, quite some clickwork to open the central repo through the object repo manager, check it out, make it writeable, etc. for every small change. So the local repo is a good "buffer" for updates.
Consequently, especially if you are using QC as a central storage point, and possibly have enabled version control there, you will love the ability to add new stuff into the local repo first and move them over to the central repo in one load just before checking in the change. (By the way, other team members would kill you if you'd checkout the central repo file for more than 5 minutes, effectively putting a write-lock upon it.)

Can I optimize a Mercurial clone?

My Mercurial clone has become incredibly slow, presumably due to on-disk fragmentation. Is there a way to optimize it?
The obvious way is to make a new clone, then copy my MQ, saved bundles, hgrc, etc., to the new clone and delete the old one. But it seems like someone might have run into this problem before and made an extension to do it?
If the manifest gets particularly large then it can result in slow performance. Mercurial has an alternative repository format - generaldelta - that can often result in much smaller manifests.
You can check the size of your manifest using:
ls -lh .hg/store/*manifest*
To get maximum value from generaldelta:
Install Mercurial 2.7.2 or later (2.7.2 includes a fix to a bug in generaldelta that could result in larger manifest sizes - but there's a good chance you won't hit the bug with an earlier version).
Execute hg --config format.generaldelta=1 clone --pull orig orig.gd.
This may give some improvement in the manifest size, but not the full benefit.
Execute hg --config format.generaldelta=1 clone --pull orig.gd orig.gd.gd.
The clone of the clone may give a much greater improvement in the manifest size. This is because when pulling from a generaldelta repo things will be reordered to optimise the manifest size.
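If you would rather not pass --config on every clone, you can (still assuming Mercurial 2.7.2 or later, as above) enable the format for repositories you create from now on by adding this to your ~/.hgrc:
[format]
generaldelta = True
You can check whether a given repository actually uses the new format by looking for "generaldelta" in its requires file, e.g. grep generaldelta .hg/requires.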
As an example of the potential benefits of generaldelta, I recently converted a repo that was ~55000 SVN commits (pulled using hgsubversion) plus ~1000 Mercurial commits/merges/grafts, etc. The manifest in the original repo was ~1.4GB. The manifest in the first clone was ~600MB. The manifest in the clone of the clone was ~30MB.
There isn't a lot of information about generaldelta online - there's still work to be done before it can become the default format, but it works well for many projects. The first few Google search results have some information from when it was first introduced, and there was some recent discussion on the mercurial-dev mailing list.
I deleted the repo and recloned, and that improved performance.
Turn off real-time anti-virus monitoring of the folder the repo is cloned to and defrag. There's not much else you can do.

Design principles as to how linux repository managers update themselves?

I know there are other applications as well, but consider yum/apt-get/aptitude/pacman, which are your core package managers for Linux distributions.
Today I saw this on my Fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than Programmers.SE for such a question, since it is more technical in nature. If there is a more appropriate place for this question, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern unix systems will generally tolerate overwriting a running executable without a hiccup, so in theory you could just do it.
You could do it in a chroot jail and then move the result into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package manager needs to hold the package access database in memory as well, to guard against a race condition there. Again, the chroot jail and copy option is available as a lower-risk alternative.
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
Like a lot of things, you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference counting inodes so "you" can delete a file you are still using, and it's fine. However this implies a few things you have to do; for instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version at the end).
There are also some things that you need to do to make sure that anything you are updating works, like: Put new files down before removing old files. And don't truncate old files, just unlink. But those also help you :).
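A rough shell sketch of that "new files first, then unlink/rename" rule (the paths and names here are purely illustrative):
cp tool.new /usr/bin/tool.staged.$$           # stage the complete new version next to the old one
chmod 755 /usr/bin/tool.staged.$$
mv -f /usr/bin/tool.staged.$$ /usr/bin/tool   # rename() swaps it in atomically; processes running the old binary keep the old inode
The old file is never truncated in place, so anything that already has it open keeps seeing a consistent copy until it exits.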
Using external programs, which you communicate with, can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can somewhat easily be made to happen before any updates.
There are also things which aren't a concern in the command-line clients like yum/apt; for instance, if you have a program which is going to run 2+ "updates" then you can have problems if the first update was to the package manager. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with other gotchas ... you tend to want to follow this anyway, for other reasons.

Darcs conflicts

I installed Darcs a few days ago and have a question.
I am the only programmer and I usually work on two or three instances of the application, adding new features. The problem comes because these instances modify the same source code file, so when I finish them and send them to the main repository they create a conflict.
Is there any way to deal with this? Can I write to the same file in multiple instances without creating conflicts when pushing to the main repository?
thanks
First of all, when changes occur at different places in the file there are generally no conflicts when merging. When two patches can be merged without conflicts, one says that they commute. In your case it happens that you've modified the same part of the file in two different branches. In this case darcs doesn't allow you to "push" the second patch, the one that creates the conflict.
There are two ways to resolve such a conflict, but you have to start by locally merging both patches to get the conflict into your working repo. To do this, just pull the patches from the main repository. Then you have to edit the offending file and resolve the conflict.
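As a minimal sketch of that first step (the repository paths below are placeholders):
cd ~/feature-repo
darcs pull ~/main-repo      # merges the conflicting patch and marks the conflict in the file
After this, open the marked-up file, fix it by hand, and use one of the two resolutions described below.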
The first way is simple and is the preferred solution: you "amend-record" the patch that is not yet on the main repository (look at the usage of the "darcs amend-record" command).
The other solution is to record a resolution patch, by calling "darcs record" and then pushing both the conflicting patch and the resolution patch. This solution tends to complicate the history and can make some later operations longer. However, when the branch has been widely distributed, this solution becomes necessary.
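As a sketch, with the same placeholder paths (only one of the two options would actually be used):
darcs amend-record                      # option 1: fold the manual fix into your not-yet-pushed patch
darcs push ~/main-repo
darcs record -m "resolve conflict"      # option 2: record a separate resolution patch instead
darcs push ~/main-repo                  # pushes both the conflicting patch and the resolution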

Mercurial practices: use with IDEs and scalability

I am not an experienced user of SCM tools, even though I am convinced of their usefulness, of course.
I used some obscure commercial tool in a former job, Perforce in the current one, and played a bit with TortoiseSVN for my little personal projects, but I disliked having lots of .svn folders all over the place, making searches, backups and such more difficult.
Then I discovered the interest of distributed SCM and I chose to go the apparently simpler (than git) Mercurial way, still for my personal, individual needs. I am in the process of learning to use it properly, having read part of the wiki and being in the middle of the excellent PDF book.
I see often repeated, for example in Mercurial working practices, "don't hesitate to use multiple trees locally. Mercurial makes this fast and light-weight." and "for each feature you work on, create a new tree.".
This is interesting and sensible advice, but it clashes a bit with my habits from centralized SCM, where we have a "holy" central repository where branches are carefully planned (and handled by administrators), changelists must be checked by (senior) peers and must not break the builds, etc. :-) Starting to work on a new branch takes quite some time...
So I have two questions in the light of above:
How practical is it to do lots of clones, in the context of IDEs and such? What if the project has configuration/settings files, makefiles or Ant scripts or shell scripts or whatever, needing path updates? (yes, probably a bad idea...) For example, in Eclipse, if I want to compile and run a clone, I have to create yet another project, tweaking the Java build path, the Run/Debug targets, and so on. Unless an Eclipse plugin eases that task. Am I missing some facility here?
How does that scale? I have read Hg is OK for large code bases, but I am perplexed. At my job, we have a Java application (well, several around a big common kernel) of some 2 million lines, weighing some 110 MB for the code alone. Doing a clean compile on my old (2004) Windows workstation takes some 15 minutes to generate the 50 MB of class files! I don't see myself cloning the whole project to change 3 files. So what are the practices here?
I haven't yet seen these questions addressed in my readings, so I hope this will make a useful thread.
You raise some good points!
How practical is it to do lot of clones, in the context of IDEs and such?
You're right that it can be difficult to manage many clones when IDEs and other tools depend on absolute paths. Part of it can be solved by always using relative paths in your configuration files -- making sure that a source checkout can compile from any location is a good goal in itself, no matter what revision control system you use :-)
But when you cannot or don't want to bother with several clones, then please note that a single clone can cope with multiple branches. The "hgbook" emphasizes many clones since this is a conceptually simple and very safe way of working. When you get more experience you'll see that you can use multiple heads in a single repository (perhaps by naming them with bookmarks) to do the same.
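For example, assuming a Mercurial version with the bookmarks extension enabled (the bookmark name below is made up), naming heads looks roughly like this:
hg bookmark feature-x      # label the current head
hg commit -m "work on feature x"
hg update default          # switch to another line of development
hg update feature-x        # and back to the named head later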
How does that scale?
Cloning a 110 MB repository should be quite fast: it depends on how long it takes to write 110 MB to your disk. In a recent message to the Mercurial mailinglist it was reported that cloning 6.3 GB took 4 minutes -- scaling that down to 110 MB gives about 4 seconds. That should be fast enough that your tea is still warm :-) Part of the trick is that the history data are simply hard-linked (yes, also on Windows) and so it is only a matter of writing out the files in the working copy.
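For instance, a purely local clone like this (the directory names are placeholders) mostly just writes out the working copy, because the history files under .hg are hard-linked rather than copied:
hg clone big-project big-project-feature-x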
PhiLo: I'm new at this, but Mercurial also has "internal branches" that you can use within a single repository instead of cloning it.
Instead of
hg clone toto toto-bug-434
you can do
cd toto
hg branch bug-434
hg update bug-434
...
hg commit
hg update default
to create a branch and switch back and forth. Your built files not under rev control won't go away when you switch branches, some of them will just go out of date as the underlying sources are modified. Your IDE will rebuild what's needed and no more. It works much like CVS or subversion.
You should still have clean 'incoming' and 'outgoing' repositories in addition to your 'work' repository. Just that your 'work' can serve multiple purposes.
That said, you should clone your work repo before attempting anything intricate. If anything goes wrong you can throw the clone away and start over.
Question 1:
PIDA IDE has pretty good Mercurial integration. We also use Mercurial for development itself. Personally I have about 15 concurrent clones going of some projects, and the IDE copes fine. We don't have the trouble of tweaking build scripts etc, we can "clone and go".
It is so easy that in many cases I will clone to the bug number like:
hg clone http://pida.co.uk/hg pida-345
For bug #345, and I am ready to fix it.
If you have to tweak build scripts depending on the actual checkout directory of your application, I would suggest that your build scripts use some kind of project-relative path rather than hard-coded paths.
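As a rough sketch of what project-relative can mean for a shell-driven build (the layout and commands are illustrative, not taken from the projects above):
PROJECT_ROOT="$(cd "$(dirname "$0")" && pwd)"                 # locate the checkout from the script itself
mkdir -p "$PROJECT_ROOT/build"
javac -d "$PROJECT_ROOT/build" $(find "$PROJECT_ROOT/src" -name '*.java')
The same script then works unchanged whether the clone is called pida-345, pida-346 or anything else.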