I'm currently experimenting with Bazaar, trying to find a configuration that works well for the project I am working on with a team of developers. Our model is currently centralised, but I am open to persuasion about changing this if I can see significant benefits. Much as I might advocate it, though, I will not be able to change the version control system itself... some people don't know what's good for them.
At present, the project is divided into tiers. It's a web project and consists of a data access layer, a business/domain logic layer and a web layer (and a bunch of other application-level projects, such as services that sit on the domain).
At present we have a Bazaar repository containing a number of working trees, one for each of the tiers mentioned above.
Question part 1
Is there a better alternative to using working trees inside a repository for each tier?
What I have considered:
Putting everything into one fat branch/working tree (I am told this was purposely avoided, because of the necessity to check out everything). Developers are creating their own local setups by checking out subsets of the working trees. For example, if I am working on one of the services, I check out the service, the business layer and the data access layer to a local directory. In the current setup, though, I can check out a top-tier application which doesn't affect anything else, make a change to it and commit it back, without checking out the entire repository, which is nearly 1GB in size.
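For illustration, here is a minimal sketch of how such a layout is typically built in Bazaar, assuming a shared repository on a server and made-up tier names; each tier is its own branch, so a developer only checks out the tiers they actually need:

    # shared repository on the server, with no working trees of its own
    bzr init-repo --no-trees bzr+ssh://server/project
    # one branch per tier (names are hypothetical)
    bzr init bzr+ssh://server/project/data-access
    bzr init bzr+ssh://server/project/business
    bzr init bzr+ssh://server/project/web
    # a developer working on a service checks out only the tiers it depends on
    bzr checkout bzr+ssh://server/project/business business
    bzr checkout bzr+ssh://server/project/data-access data-access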
What I would like to remedy:
The problem is really that my web tier is reliant on a version of the
business layer, which in turn is reliant on the data access layer.
With a project organised like this, I have no idea which versions of
the business and data access layers were current for a given commit
on the web layer. This feels very bad to me. I want to be able to
checkout versions of our application relative to some commit to one
of the working trees. At the moment we are attempting to keep track of
this across all of the working trees with tagging, but it feels
pretty wrong to me.
Question part 2
If I branch one of these tiers for a release branch, and find that a change in the root of the branch needs to be in that release, how do I push only those required changes into the branch?
So assuming that the structure of one working tree (or possibly a real branch) exists for each of these projects, we would like to have a branch for each of them, which contains the state for a particular release. So the website tree has a website_rls1 branch, which encapsulates the state of development for that particular release. General development for a later release is going to happen concurrently. So if one file out of many happens to be relevant to this release, I would like to merge that single file into my release branch. What is the preferred method of doing this? As I understand it, Bazaar does not support merging/pulling individual files.
What I have considered:
Just putting the files I want into my local checkout of the release branch and committing
What I would like to remedy:
Doing this is going to kill off the version information for that file. I want to keep the version information intact, but only merge in a subset of changes. I essentially want to push individual files up to the child branch, or something to that effect.
Any thoughts appreciated.
#1 There are two plugins for managing multiple dependent bzr repositories:
https://launchpad.net/bzr-externals
https://launchpad.net/bzr-scmproj
They have different approaches and are suited to different situations.
You may also want to consider solving this problem at the build level. Use a continuous integration system like Jenkins and a dependency resolution system like Ivy or Maven. QA should be done using automated builds from this system so that bugs that are filed can refer to a particular build number which should include in its logs the versions of the various dependencies used to produce that build.
#2 There isn't really a way to do that merge. Bazaar doesn't track cherry-pick merges (yet). A better approach would be to make the original fix in the release branch and then merge it back into its parent.
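In command form, that suggestion looks roughly like this (branch names are hypothetical, following the website_rls1 example from the question):

    # branch the release line locally and make the fix there first
    bzr branch bzr+ssh://server/project/website_rls1 website_rls1
    cd website_rls1
    # ... edit the file that needs the fix ...
    bzr commit -m "Fix issue X for release 1"
    # then merge the release branch back into its parent, keeping full history
    cd ../website
    bzr merge ../website_rls1
    bzr commit -m "Merge release 1 fix back from website_rls1"

This way the change exists exactly once in history and flows upwards, rather than being cherry-picked downwards.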
Background
1 x Dev SQL Server
1 x UAT SQL Server
1 x Prod SQL Server
Developers use SSMS to view SQL Server objects and code and make changes directly to these objects in SQL Server itself.
Challenge
We have multiple developers potentially making changes to the same database object (let's say a stored procedure or a view). The challenge arises when different bits of work happen on the same object but the delivery timescales for each bit of work are different. This means we end up with someone having completed their changes on the dev object, but releasing the changes into the next environment along may fail, because the view (for example) may contain another developer's changes too, and those changes may themselves require other objects. The business may not be expecting that other developer's work to be released anyway, as there may be days or weeks of effort still to put into it before release. But that doesn't help the developer who's ready to go into the next environment.
How do we get round that?
How should each developer have started off, before they started making changes, to avoid dependency issues when releasing?
How can a developer "jump the queue" and release their bits of work, without scuppering anyone else who has started their own change?
This is not a perfect answer, nor is it the only potential answer - but it's a good start. It's based on my experience within a relatively small shop, where tasks are re-prioritised frequently and changes required after testing etc.
Firstly - it's about process. You need to make sure you have a decent process and people follow it. Software etc can help, but it won't stop people making process errors. There are a lot of products out there to help with this, but I find making small steps is often a good start.
In our shop, we use Git source control to manage code and releases. The repository holds scripts for the entire database structure, views, etc., and those scripts are used to manage any changes to the objects.
In general, we have a 'release' branch, then 'feature' branches for updates we're working on, and 'hotfix' branches for changes we make to live on the fly (e.g., urgent fixes).
When working on a specific branch, you check out that branch and work on it. Any change to the database has to go into an appropriate branch.
When ready to go live, you merge the feature/hotfix branches into that release branch when they're released. This way the 'release' branch always exactly matches what is on the production database.
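As a rough sketch of that flow (branch and file names are just examples):

    # start a piece of work on its own branch, taken from the release branch
    git checkout -b feature/customer-report release
    # ... edit the scripted view / stored procedure files ...
    git add views/CustomerReport.sql
    git commit -m "Update customer report view"
    # when this work, and only this work, is approved to go live
    git checkout release
    git merge --no-ff feature/customer-report
    # the release branch now matches exactly what is deployed to production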
For software, we use Redgate Source Control integrated with SSMS, but there are definitely others available (e.g., ApexSQL Source Control). You can also do it manually, but I wouldn't suggest it.
You don't have to, but you can also use a git GUI (e.g., SourceTree) to manage your branching and merging etc.
There are additional software products that can help to manage releases/etc (including scripting etc) but the source control aspect should be the biggest help with the main issue (being able to work on different things and helping ensure no clashes).
Regarding Git and how to use it (or SVN etc.): if you haven't used them before, they're a bit weird and take some getting used to. We had a few restarts with a few different processes before we came up with an approach we liked. It will also take some time to run into the different issues that can arise, so you cannot expect it to just fix everything out of the box.
1 source control
Any source control system (Git, TFS, etc.) to manage your code and control changes.
2 branching/release strategy
Git Flow! For example: a main branch with the current working source code (main, develop, whatever you call it); each developer works on their own feature branch; once the work is done, it is tested by deploying to the DEV environment and running the tests. After that it can be merged into a release branch that will go live on PROD.
Also you need to consider merge vs rebase strategy (some link).
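As a quick sketch of the difference (assuming a feature branch taken from develop):

    # merge: keeps the feature branch's own history and records an explicit merge commit
    git checkout feature/my-task
    git merge develop
    # rebase: replays the feature commits on top of develop, giving a linear history
    git checkout feature/my-task
    git rebase develop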
3 and some SCRUM
The most basic version: two-week sprints; at the end of a sprint you create a new release branch and deploy it to UAT for testing. During the next sprint the release is tested on UAT while developers work on tasks from the new sprint. Then the tested release is deployed to PROD, developers move on to their third sprint, and UAT is ready for the next release to be deployed. And so on.
4 more than one DEV environment
Depending on the number of developers, you may need more than one DEV environment.
I am working with PowerDesigner 16.1, and the repository is installed on a Linux server.
I have an issue with the repository sometimes dropping my models. In most cases I can check models in to the repository and check them out correctly, but sometimes, after I have correctly checked in a model, it seems to get deleted from the repository.
I am not sure, but I think the problem may occur when two (or more) people try to check in the same model at the same time (or with overlapping check-ins), and that causes the model to disappear from the repository.
Any idea or workaround to fix this?
Could reading PowerDesigner's logs help to clarify the reason?
Maybe the repository has its own server logs?
First make sure you upgrade to the latest version. They fix tons of bugs in each patch, and a lot of them have to do with the repository. It is very likely that this will fix it.
But we still found it best to have ONE designated person do check-in on models, to avoid strange issues.
If you use any replication between models, the repository will act even stranger than it already does, so if you can, try removing replicating properties from your models.
Apart from that, we make daily database backups of the repository and we sometimes have to use them too.
Another thing that may help: in the PowerDesigner Repository Administration menu you can select to rebuild indexes, if you have the logon/password for the account that installed the repository. Rebuilding indexes has solved several nasty problems for us.
And now a bit off-topic: in my experience (working with the repository for 3+ years now), it has always been very buggy. Even with the highest patch level installed we have lots and lots of weird issues. This only gets worse when you have two repositories in order to test a new release of PowerDesigner. PowerDesigner's habit of hardcoding IDs into models makes this very problematic.
I know someone who is often hired by SAP as PowerDesigner Instructor. His recommendation is to stay away from the repository and use Git. It has always been a badly integrated add-on with questionable support. The datamodel for the repository is horrible (which is quite ironic) and it seems that foreign keys and relational integrity are foreign concepts to it.
So we are now in the process of moving off the repository and onto TFS. All PowerDesigner models are XML, so TFS or Subversion or Git etc. will do nicely (but we like the locking of TFS/Subversion for this). Yes, we will have to forget about the nice display of changes, but frankly, having robust version management instead of the buggy and model-destroying repository is worth it.
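If you go the Subversion route, the locking mentioned above is just a property and a couple of commands; a minimal sketch, with a made-up model file name:

    # mark the model file so it is read-only until someone takes the lock
    svn propset svn:needs-lock yes LogicalModel.ldm
    svn commit -m "Require a lock before editing the logical model"
    # before editing in PowerDesigner, take the lock so nobody can commit over you
    svn lock -m "Editing the logical model" LogicalModel.ldm
    # ... edit and save the model ...
    svn commit -m "Update logical model"    # the commit releases the lock by default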
Here is the scenario.
We are developing a product where we have a base product and regional variations of the product. We have all the common code checked into the main trunk, while we have created two branches (branch_us, branch_uk) for the variations off of the main trunk. There is common code constantly being checked into the main trunk, and the code being checked into branch_uk and branch_us depends on the code checked into the main trunk. This is being done because we expect more regions to be added in future releases, and as a result we want maximum reuse as well as a thin regional-variations layer.
Based on the current strategy, a developer will have to develop locally and then manually check the common files in to main_trunk and the regional variations in to branch_uk and branch_us. Then, every time code is checked into main_trunk, we will have to perform a merge from main_trunk->branch_uk and main_trunk->branch_us before we can perform a build for branch_uk and branch_us (two separate deployments), because the new code in branch_uk/branch_us depends on the new common code in main_trunk. This model seems extremely painful to think about and unproductive.
I'm by no means an expert on TFS. Here is what I am seeking opinion on:
Is there a way TFS can dynamically pull changes into branch_uk/branch_us from the main_trunk without doing a manual merge after every check-in (in the main_trunk)?
Do you guys have any other recommendations on the code management process that might be more effective/productive than the current one?
Any thoughts and feedback will be much appreciated!
This seems like a weird architecture to me, but of course I'm coming at it from a position of almost total ignorance, so there might be a compelling reason to approach it that way.
That being said: It sounds to me like you don't have a single application with two regional variations, you have two separate applications that share a common ancestor. The short answer to your question is "No". A slightly longer answer is "No, but you could write code to automate it."
A more thoughtful question-answer is "Are you sure centralized version control is the right tool for the job?" It might be more intuitive to use Git for this. What you have are, in effect, a base repository and two forks of that repository. Developers can work against whatever fork makes sense, and if something represents a change that should apply to all localizations, open a pull request to have the change merged into the base repository. This would require more discipline on the part of the developers, since they would have to ensure that their commits are isolated such that they can open a pull request that contains just commits that apply to the core platform. Git has powerful but difficult history-rewriting tools that can assist. Or, of course, they could just switch back and forth between working on the core platform, then pulling changes from the core platform back up to the separate repositories. This puts you back to where you started, but Git merges are very fast and shouldn't be a big issue.
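To make the fork model concrete, here is a sketch with hypothetical remote and branch names:

    # inside the branch_uk fork, track the shared base repository as a second remote
    git remote add base git@server:product/base.git
    git fetch base
    # pull the latest common code into the UK fork
    git merge base/main
    # a change that really belongs in the core: isolate it on its own branch
    git checkout -b core-fix base/main
    git cherry-pick <commit-with-the-core-change>
    git push origin core-fix    # then open a pull request against the base repository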
Either way, thinking of the localizations as a single application is your mistake.
A non-source control answer might involve changing the application's architecture so that all localizations run off of the same codebase, but with locale-specific functionality expressed in a combination of configuration flags and runtime-discoverable MEF plugins, or making a "core" application platform that runs as an isolated service, and separately developed locale-specific services that express only deviations from the core application platform.
I need some concrete ideas for this.
We're looking at changing our version control system, and I'd like for us to use Mercurial. It would ease a lot of pain related to some internal processes, as well as pose some challenges.
One of those challenges is that the version control system we're currently using is not a distributed one, and thus has the concept of revision numbers for each changeset, which we've used internally.
Basically, when the programmer checks in the final change that fixes a case in our case management system, Visual Studio responds with the changeset number this became, and the programmer then affixes that number to the case, which basically says "If you're running a version of our product whose changeset number is this value or higher, then you have all the changes for that case."
However, with Mercurial that doesn't work, as revision numbers can and will change as commits come in from different branches.
So I'm wondering how I can get something similar.
Note, this is not about release management. Releases are much more controlled, but on-going tests are more fluid, so we'd like to avoid having a tester continuously trying to figure out if the test-cases on his list are really available in the version he's testing on.
Basically, I need the ability for someone testing to see if the version they're testing on has the changes related to a test-case, or not.
I was considering the following:
The programmer commits, and grabs the hash of the changeset
The programmer affixes this to the case in our case tracker
The build process will have to tag (not in the Mercurial way) the version so that it knows which changeset it was built from
I have to make it easy to take the hash of the changeset our product was built from, look it up in the changeset log of the repository used by our build machine, and then figure out whether the changeset for each case on the test list is the same as, or an ancestor of, the changeset the product was built from.
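The ancestry check itself is a single revset query in Mercurial; a sketch, where BUILD_HASH and CASE_HASH stand in for the two hashes:

    # prints the case changeset only if the build changeset is the same as it or descends
    # from it, i.e. the build contains all the changes for that case
    hg log -r "ancestors(BUILD_HASH) and CASE_HASH" --template "{node|short} {desc|firstline}\n"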
So I have two questions:
Is this a feasible approach? I'm not averse to creating a web application that makes this easy to handle.
Does anyone know of an alternate process that would help me? I've looked at tagging, but it seems that tagging ends up adding merge pressure; is that something I want? (i.e. adding/moving a tag ends up as a commit, which needs to be merged with the rest of the system)
Is there anything out there that would help me out of the box; that is, has someone made, or does anyone know of, something like this already?
Any other ideas?
Is it right to say that you're looking for a lightweight tagging process linked to your build process?
I'm not keen on the idea of the programmer grabbing the last hash and sticking it somewhere else - sounds like the sort of manual process that you couldn't rely on happening. Would you be able to build a process around programmers adding the case number to their commit message so something could later link the commit to the original case? When the case was marked as "closed" you could pick up all commits against the case.
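That convention can stay entirely inside Mercurial; a sketch, with a made-up case number format:

    # the programmer references the case in the commit message
    hg commit -m "case 1234: validate postcode before saving the address"
    # later, a script, web app or tester can list every commit against that case
    hg log -k "case 1234"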
Lots of case control systems have this - Fogbugz, for example.
Both Bitbucket and Google Code keep a branching timeline that shows visually what has been merged, by whom, and when. I suspect that this might be what you want to do: it's a very simple way to resolve issue 4.
How they do that, I don't know, but the tools are out there. BitBucket offers commercial code hosting.
Are there any requirements management plugins out there for Trac? I checked the list on Trac-Hacks and didn't see anything.
I'm picturing some functionality like IBM's DOORS system. Basic features include revisioning and change control of requirements, requirements baselining etc.
Update: I suppose I could just use the wiki portion of Trac to document requirements, but this doesn't allow any kind of change sets for the requirements - for example, where a single requirement has changed but developers are working against a baseline where the requirement hasn't been updated yet. It also won't allow linking requirements such that when a parent requirement changes, all dependent requirements must be reviewed before the change is accepted (in order to keep dependent requirements in sync).
#retracile: thanks for this "how to". I just discovered rmtoo, which does text-based requirements management... and by the way, looking back at it, it also has VCS integration features and, like Trac, is written in Python.
When you start talking about revisioning of your requirements, I think you really need to step back and look at your requirements as part of your source tree. Find a file format for the requirements that an SCM can handle (text-based, etc), and just check them in. Treat them as the early part of your code... when it's time to figure out what the requirements will be for v2.0, create your v2.0 branch and develop the requirements on that branch, and follow it with your code development on that same branch.
If you use branch-based development, create the requirements on the branch, create the code on the same branch, and then merge the branch. That keeps requirements and implementation in sync.
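A minimal sketch of that idea (paths, branch names and the choice of Git are purely illustrative; any SCM that handles text files works the same way):

    # requirements live next to the code they drive, on the same branch
    git checkout -b v2.0
    git add requirements/checkout-flow.txt    # a plain-text requirement, diffable like code
    git commit -m "v2.0: draft checkout-flow requirements"
    # implementation commits follow on the same branch, sharing one history
    git commit -am "v2.0: implement checkout flow per requirements/checkout-flow.txt"
    # merging the branch delivers requirements and implementation together
    git checkout main
    git merge v2.0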