How do I get Bzr to run an arbitrary command on commit

I need the ability to run an arbitrary command when I try to commit to a Bazaar branch.
This command should return 0 on success, or any other code on failure, and if the command fails, bzr should refuse to commit.
I mainly want this for running test suites; however, there are other things I'd like to be able to do as well (for example, checking whether there is a freeze on the branch being committed to).

You need to write a pre_commit hook; see, for example:
http://schettino72.wordpress.com/category/testing/
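For illustration, a minimal sketch of such a hook as a plugin, assuming bzr >= 1.8 (for install_named_hook) and the pre_commit signature from the bzrlib documentation; ./run-tests.sh is a placeholder for whatever command should gate the commit:

# Save as ~/.bazaar/plugins/precommit_check/__init__.py
import subprocess

from bzrlib import branch, errors

def check_commit(local, master, old_revno, old_revid,
                 future_revno, future_revid, tree_delta, future_tree):
    # Run the gating command; a non-zero exit status refuses the commit.
    if subprocess.call(['./run-tests.sh']) != 0:
        # Raising an exception from a pre_commit hook aborts the commit.
        raise errors.BzrError('pre-commit check failed, refusing to commit')

branch.Branch.hooks.install_named_hook(
    'pre_commit', check_commit, 'run checks before commit')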

Related

Git: merge two working copies without committing

I have a clone for server development and another for client development. Both sets of changes will eventually make it into the same branch, but I want to synchronize them now, and I want it to behave as though I had committed, pushed, and pulled, without actually doing that.
I'm able to make a patch with this script I wrote:
git diff --cached
git diff
on the server, but applying that to the client is much harder.
I've tried the Unix patch command, but for some reason it keeps asking me which files to patch, as if it can't find them. (Yes, they're there.) I've tried
git apply -3 patch.patch
but that gives a lot of errors like "with conflicts" (without making any changes) and "does not match index". It doesn't even seem to try patching the other half of the files.
Stashing, then applying the patch, and then popping from the stash doesn't work, because unstashing refuses to do merges.
It looks like doing it without pulling isn't going to work; I haven't found a way to do it conveniently and safely. However, my problem with committing is that I didn't want to spam the git log with garbage like:
Sync'ing to client
Sync'ing back to server
Oops! Sync'ing something I forgot to the server again!
etc.
But I can avoid all this by committing, then pulling from the remote repos. In the end, I wouldn't have to push those commits, since I would use reset to remove them all from the local repo and then, with all my changes in the working directory, do a proper commit and push it.
Gotchas
There are many.
It's commonly known that you shouldn't reset your local repo if something has already pulled from it. This is because of the obvious confusion that results when one repo deletes commits that another repo believes were there. For that reason it's important that the same reset is performed on both repos before they start sharing code again.
If, after making the commits you later want to reset, you then pull or merge, you can make things very difficult for yourself. There should be a way to manage it, but I haven't figured it out yet. One idea is to reset, stash, pull, merge, and commit again. Another involves revert with the -n option.
Instructions
The following example assumes you have 2 clones, one called "client" and the other "server"; a condensed sketch of the whole cycle follows these steps.
1. Following https://help.github.com/articles/adding-a-remote, set up your client's and server's repos on each other's systems so they can pull from each other.
2. When you want to sync, commit on the donor system; then, instead of pulling from origin, pull from the remote. Say the client wants a commit from the server. On the client: git pull myserver-repo mybranch.
3. Merge and resolve conflicts as necessary.
4. Loop back to step 2 as many times as necessary.
5. After several iterations of steps 2-4, you arrive at the point where you are ready to push your changes to the server. Go to whichever local repo has all the changes you want pushed, run git log, and find the commit just before the first commit you made in step 2. Copy its hash.
6. Then reset to it: git reset <hash you copied in step 5>.
7. You should then see all the commits you don't want disappear from the log, with all their changes left in your working directory. Commit and push.
8. It's important that you clean up the repo on which you didn't perform steps 5-7. So if you pushed from your server repo, you need to perform the same reset operation on your client, then dispense with the changes as you see fit. My preferred method is git stash save "delete_me".
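A condensed sketch of the cycle, using the same placeholder names (myserver-repo, mybranch) as the steps above:

# On the client: pull the server's sync commits (steps 2-4; repeat as needed)
$ git pull myserver-repo mybranch

# When ready to publish (steps 5-7), in the repo holding all the changes:
$ git log                    # find <base>, the commit just before your first sync commit
$ git reset <base>           # drop the sync commits, keep their changes in the working tree
$ git add -A
$ git commit -m "One clean, consolidated commit"
$ git push origin mybranch

# In the other clone (step 8): mirror the reset, then discard the leftovers
$ git reset <base>
$ git stash save "delete_me"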

Fully delete abandoned commit from Gerrit DB AND 'query'

I am trying to fully purge a change from Gerrit and running into some problems.
Previously I tried to follow this guide to achieve my goal:
https://www.onyxpoint.com/deleting-abandoned-commits-from-gerrit-code-review/
I messed this up however, and somehow managed to do the following:
Purge the offending change-id from all the tables in the Gerrit gsql database
The change still appears in the web-interface, but if I click on it, it fires an error: "The page you requested was not found, or you do not have permission to view this page."
If I run 'gerrit query' for the change, it still shows up, replete with all information.
Where is the change information coming from if it is not in the DB? I also tried flushing all caches. Is it somewhere in the Lucene search index or something?
This is not super important, but it is really driving me nuts!
In my case, I didn't have to perform a re-index, but I did need an additional step (step 5 below):
1. Open the GSQL interface:
$ gerrit-cli gsql
2. Mark the change set as a draft change set in the Gerrit database:
gerrit> update changes set status='d' where change_id='64581';
3. Update the associated patch sets, then quit:
gerrit> update patch_sets set draft='Y' where change_id='64581';
gerrit> \q
4. Before making further changes, make sure the Gerrit caches have been flushed. If you don't do this, you may end up with strange results in the Web UI in relation to this change set:
$ gerrit-cli flush-caches --cache changes
5. Ensure that the administrator has "View Drafts" and "Delete Drafts" permissions on the repo for refs/*.
6. Finally, delete the patch set(s) that you had previously abandoned. In this case, we assume you have two patch sets to delete:
$ gerrit-cli review 64581,1 --delete
$ gerrit-cli review 64581,2 --delete
Deleting each patch set one by one can be a PITA; you can also nuke 'em all in one go using the GUI.
Queries use Gerrit's secondary index (Lucene-based by default), so if you modify the database outside of Gerrit you have to reindex the data with the reindex command:
$ java -jar path/to/gerrit.war reindex -d path/to/gerrit-site-dir
This command should only be executed when Gerrit isn't running.

Find commit that broke a test without running every test on every commit

I am trying to find a tool that can solve the following problem:
Our entire test suite takes hours to run, which often makes it difficult, or at least very time consuming, to find out which commit broke a specific test, since there may be 50 to 200 commits between test runs. At any given time there are only very few broken tests, so rerunning only the broken tests is very fast compared to running the entire test suite.
Is there a tool, e.g. a continuous integration server, that can rerun failed tests at a couple of revisions between the last revision where the test was OK and the first revision where it was broken, and thereby automatically find the specific commit at which a test switched from successful to broken?
For example:
Test A and B are ok in revision 100. Test A and B are broken in revision 200.
The tool should now run both tests with revision 150.
Then if e.g. test A was broken and test B was OK in revision 150, it could continue to check test A with revision 125 and test B with revision 175 and so on until every broken test can be explained by some specific commit.
For a single test, I could probably hack something together with git bisect. But for multiple failed tests this is probably not sufficient, since we need to search in both directions across many revisions.
Are you using git or mercurial (see: What is Mercurial bisect good for?)?
Suppose version 2.6.18 of your project worked, but the version at "master" crashes. Sometimes the best way to find the cause of such a regression is to perform a brute-force search through the project's history to find the particular commit that caused the problem. The git bisect command can help you do this:[...]
From: Finding Issues - Git Bisect.
Essentially, you mark two revisions as the endpoints and start the bisect session:
$ git bisect start
$ git bisect bad            # the current, broken revision
$ git bisect good v2.6.18   # the last known-good revision
Then run the test at each revision git checks out and, depending on whether it passed or failed, call
$ git bisect good
or
$ git bisect bad
respectively. git will do a binary search, always cutting the remaining revisions in half. I guess you can script it easily. If you are using svn, git can easily import the whole repository.
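The scripting can be done with git bisect run, which drives the whole search from a command's exit status. A rough sketch for several independently failing tests, assuming rev100 is known-good, rev200 is known-bad, and run-single-test.sh (a placeholder) runs one named test:

$ for t in testA testB; do
    git bisect start rev200 rev100             # known-bad revision first, then known-good
    git bisect run ./run-single-test.sh "$t"   # exit 0 = good, non-zero = bad, 125 = skip
    git bisect reset
  done

Each iteration bisects one test's breakage independently, so each failing test gets its own culprit commit without rerunning the whole suite anywhere.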
Building a single revision at a time
This is not an answer but advice: just test every single commit! Today you can cluster continuous integration servers easily; with a farm of servers it is not that hard to test all commits.

Maven execution to perform several plugins in sequence

I'm working on something that uses Hibernate for database access. I've got everything set up and working so that I can use mvn hibernate3:hbm2ddl to build the database schema, and I'm using mvn liquibase:update to populate initial data into the database (DBUnit was my first try, but I couldn't get it to work with Oracle, and Liquibase just worked first time).
My problem is that if I execute hbm2ddl to drop and re-create the schema, the Liquibase DATABASECHANGELOG tables are left intact, meaning that Liquibase won't re-create the data the next time it's run. To get around this I've configured mvn sql:execute to perform a drop on the two tables in question, but this means that, to be safe, whenever I want to build the database from scratch I now need to execute "mvn hibernate3:hbm2ddl sql:execute liquibase:update".
What I'd really like is to configure something that will execute the sql:execute command whenever the hibernate3:hbm2ddl command is run, so that I know the one command will leave me in a nice clean database state. Failing that, a configuration that runs a number of commands in sequence automatically, so I could set up, for example, "mvn execute:db-rebuild" to run the three commands above.
I've seen mention of mojo-executor, but no examples of how to actually use it. I'm not even sure it's the right tool for what I want...
Why don't you bind these different things to a particular lifecycle phase, like integration-test? The order of the plugin declarations will define the order of execution. Then you get rid of calling mvn by hand ...
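For illustration, a sketch of what that binding could look like in the pom.xml, assuming a Maven version that honors declaration order within a phase (3.0.3+); versions and the goals' own configuration are omitted:

<build>
  <plugins>
    <!-- All three bound to the same phase; within a phase, plugins
         run in the order they are declared in the POM. -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>integration-test</phase>
          <goals><goal>hbm2ddl</goal></goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>sql-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>integration-test</phase>
          <goals><goal>execute</goal></goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.liquibase</groupId>
      <artifactId>liquibase-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>integration-test</phase>
          <goals><goal>update</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

With this in place, a single mvn integration-test runs all three goals in the declared order.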

How to emulate 'git stash' in fossil, bzr?

Is it possible to emulate the behavior of 'git stash' when using fossil/bzr?
Basically I'm interested in handling the following workflow:
- at some point the source code tree has state X, and it is committed
- I proceed to writing new code; I write it for a while and then see the opportunity for a refactoring
- I can't commit at this point, because the change I've started to make is not completed, it is not atomic yet
- at this point I would do 'git stash', which would save the current work and get back to state X
- I would do the refactoring and commit; the source code now has state Y
- I would merge the source code in state Y with the code in the stash, complete the change to make it atomic, then commit once again, pushing the source code to state Z
I think that generally it is possible to emulate this scenario in another SCM by branching the code in state X instead of doing 'git stash', doing the refactoring in that branch, then merging the branch back into the main one. But I'm aware that branching is not always a cheap operation. So are there any better approaches that rely on specific features of fossil/bzr?
Use the bzr shelve and bzr unshelve commands.
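For example, the workflow from the question maps onto shelve/unshelve roughly like this (a sketch, assuming a bzr with the built-in shelve command; --all shelves everything without prompting):

$ bzr shelve --all               # set the unfinished change aside; tree is back at state X
# ... do the refactoring ...
$ bzr commit -m "refactoring"    # state Y
$ bzr unshelve                   # reapply the shelved change on top of Y
# ... complete the change ...
$ bzr commit -m "atomic change"  # state Z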
You can use the patch command of your system.
First you make a "stash" by storing a generated diff in a .patch file:
$scmtool diff > working.patch
then reset your working directory.
Later, apply the patch with:
patch -p1 --dry-run < working.patch
and if this works, remove the --dry-run to apply the patch for real.
The stash command was implemented in fossil recently. If you check out the latest fossil executable, you will see stash in the list of available commands.
Here is the link to the web help on its syntax.
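The fossil version of the same workflow would then look roughly like this (a sketch, assuming a fossil build recent enough to include stash):

$ fossil stash save                 # set the unfinished change aside; back to state X
# ... do the refactoring ...
$ fossil commit -m "refactoring"    # state Y
$ fossil stash pop                  # reapply the stashed change on top of Y
# ... complete the change ...
$ fossil commit -m "atomic change"  # state Z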