Migrate ClearCase to Perforce

I have a large quantity of ClearCase data which needs to be migrated into Perforce. The revisions span the better part of a decade, and I need to preserve as much branch and tag information as possible. Additionally, we make extensive use of symbolic links, supported in ClearCase but not in Perforce. What advice or tools can you suggest that might make this easier?

The first step is to decide if you need to migrate everything, or just certain key versions. If you only migrate the important versions (releases and major milestones) you'll end up with a much simpler history in Perforce, without losing anything important. ClearCase can then be kept as a historical archive in case it is ever needed. (Unless IBM has changed things, ClearCase licenses do not expire when maintenance runs out; you just lose the right to new upgrades and patches, and access to support.)
Keep in mind that Perforce does not version-control directories and does not keep a full per-element version tree, so a 1:1 migration with exact results is going to be impossible. Recreating the important snapshots is a much more achievable goal; keeping everything may be impossible, as Perforce lacks features ClearCase relies upon.
To see what Perforce says about the migration, check out
http://perforce.com/perforce/ccaseconv.html
This explains the key differences and covers a few approaches you can take.

Start by doing a Google search on "clearcase to perforce conversion".
Then read the ClearCase to Perforce Conversion Guide.
Once you're done crying, you're going to have to decide (1) how much effort you can afford, and (2) what you really need to capture as part of the conversion. You're not going to get it all, so you might as well just focus on getting the important branches.
Another consideration would be to just capture the current state of each supported branch as a snapshot, import that into Perforce, and then turn off the old ClearCase server, saving it in a known good state for that day when you need to access something from the deep, dark, pre-Perforce days...

The other answers are outdated. You can now import from ClearCase to Perforce with many options, including preserving history.
http://www.perforce.com/sites/default/files/pdf/migration-planning-guide-clearcase-to-perforce.pdf

You also have to keep in mind that your importer script may commit in a slightly different sequence than the original ClearCase check-ins (perhaps because you are traversing directories, or file histories, etc.).
So unless you gather all the version information into a (large) database and sort it afterwards, you will end up with commits that are not very useful to look at (except, of course, the history of single files). Since you will (hopefully) change your commit policy to make atomic commits into Perforce, it will be obvious when real development started: the commits from before just do not make any sense at the project scope.
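If you do go the route of collecting everything into a database first, a rough sketch of the kind of sorting and grouping involved might look like this (the table and column names are invented; a real importer would also need branch and path information):
-- cc_versions is a hypothetical staging table: one row per ClearCase element version.
-- Versions by the same user with the same comment on the same day are grouped into
-- one candidate changelist (a real tool would use a much tighter time window).
SELECT
    user_name,
    comment,
    MIN(checkin_time) AS changelist_time,
    COUNT(*)          AS files_in_change
FROM cc_versions
GROUP BY user_name, comment, CAST(checkin_time AS date)
ORDER BY changelist_time;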
So you really should consider leaving the ClearCase history behind. Recreating tags and branches is also a separate problem, as you need your old config specs for your old branches.
In the end you will get wrong filenames in old tags (as Perforce does not version directories), so you will still need ClearCase for this (and it is very tricky to get the correct filename for each version of a file!).
The last problem you will encounter is importer run time:
if you have large VOBs (e.g. 10 years of history, 50 GB in size), you will wait days for the importer to gather all the information and convert it to a nice shiny Perforce repo. For all of those days your dev team will be unable to work.

Just a quick note on the one import I saw from ClearCase to Perforce.
As noted in the ClearCase to Perforce Conversion Guide:
Perforce supports atomic change transactions; ClearCase doesn't.
Note that labels are often used to simply denote a snapshot in time for a particular easily-specified set of files; this is inherently easy to do in Perforce without using a label, due to Perforce's use of atomic change transactions and file naming syntax.
For example, the state of all the files in //depot/projecta as of change 42 can be obtained with
p4 sync //depot/projecta/...#42
That means the ClearCase project that got imported was a UCM one, since the concept of a baseline closely follows that of a global revision.
Only files with a baseline on them were imported; the other versions were discarded.


Forking (postgre)SQL database structure

I have been developing a network security application for several years now, as the lead developer at my company. It is a split-architecture design, where one component resides on the customer's network, and the other component in our own cloud. We have developed our own custom versioning system that keeps both sides synchronized at each patch (per customer), but until now it has only allowed incremental changes to be made, and rollbacks are not possible.
We'd like to move to a forkable git-like solution for our code, so that we can develop and test multiple features simultaneously, but the thing that's holding us back from that is our database. We use PostgreSQL (currently 9.3.12), and I've written a custom script to calculate the deltas between the "old" and "new" database structure, each time we "make a patch". It spits out a list of SQL commands necessary to update the "old" database structure to look like the "new", including tables, functions, sequences, triggers, you name it. It's very elegant and pretty much never fails anymore, even with complicated deltas.
However, I realize that in order to have a git-like solution for this (check-out, check-in, merge changes into test and production code, etc.) while also keeping database changes in sync with application code, we'll need to have something a lot more advanced than just "old" vs "new". Note that we don't need to modify database data for the most part, only table structure, which is altered in place on existing customer databases.
So my question is this: Any ideas for a git-like SQL version control system, which allows forking and merging, and can be easily kept in sync with application code changes? Our custom tool is already a bit more advanced than some open-source tools we've looked into (such as sqlt-diff), and tools like Red Gate are a bit out of our price range as a startup (not to mention that I haven't heard anybody mention forking in context with Red Gate). We're open to writing a custom tool, if that's what we need to do, but we're scratching our heads about where to start with something like that. We know how to calculate deltas, but we don't know how to manage all those things across different forks.
Free or open-source tools, frameworks we can adapt, or general guiding principles for building such tools are all appreciated!
One way of solving this problem is with migrations. Here are a couple of lightweight tools, though there are many others:
http://sequel.jeremyevans.net/rdoc/files/doc/migration_rdoc.html
https://flywaydb.org/
Rather than calculating deltas between versions after the fact, migrations can be used to evolve the schema in a controlled way. You can create feature-specific migrations that can be tracked (and forked/merged) along with the rest of your code.
Depending on how fancy you want to get, you may need to extend the default naming/numbering schemes.
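As a minimal sketch of what those versioned migrations look like with Flyway-style naming (the file names and the devices table below are made up):
-- V1__create_devices.sql  (example migration; table name is hypothetical)
CREATE TABLE devices (
    id       serial PRIMARY KEY,
    hostname text NOT NULL
);

-- V2__add_last_seen_to_devices.sql
ALTER TABLE devices ADD COLUMN last_seen timestamptz;

-- V3__index_devices_last_seen.sql
CREATE INDEX idx_devices_last_seen ON devices (last_seen);
Each feature branch adds its own migration files, merging the branch merges the files, and the tool records which versions have already been applied to a given database so that only the missing ones are run.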

How to properly manage database deployment with SSDT and Visual Studio 2012 Database Projects?

I'm in the research phase trying to adopt 2012 Database Projects on an existing small project. I'm a C# developer, not a DBA, so I'm not particularly fluent with best practices. I've been searching google and stackoverflow for a few hours now but I still don't know how to handle some key deployment scenarios properly.
1) Over the course of several development cycles, how do I manage multiple versions of my database? If I have a client on v3 of my database and I want to upgrade them to v8, how do I manage this? We currently manage hand-crafted schema and data migration scripts for every version of our product. Do we still need to do this separately or is there something in the new paradigm that supports or replaces this?
2) If the schema changes in such a way that requires data to be moved around, what is the best way to handle this? I assume some work goes in the Pre-Deployment script to preserve the data and then the Post-Deploy script puts it back in the right place. Is that the way of it or is there something better?
3) Any other advice or guidance on how best to work with these new technologies is also greatly appreciated!
UPDATE: My understanding of the problem has grown a little since I originally asked this question and while I came up with a workable solution, it wasn't quite the solution I was hoping for. Here's a rewording of my problem:
The problem I'm having is purely data-related. If I have a client on version 1 of my application and I want to upgrade them to version 5 of my application, I would have no problems doing so if their database had no data. I'd simply let SSDT intelligently compare schemas and migrate the database in one shot. Unfortunately clients have data, so it's not that simple. Schema changes from version 1 of my application to version 2 to version 3 (etc.) all impact data. My current strategy for managing data requires that I maintain a script for each version upgrade (1 to 2, 2 to 3, etc.). This prevents me from going straight from version 1 of my application to version 5 because I have no data migration script to go straight there. The prospect of creating custom upgrade scripts for every client, or of managing upgrade scripts to go from every version to every later version, is exponentially unmanageable. What I was hoping was that there was some sort of strategy SSDT enables that makes managing the data side of things easier, maybe even as easy as the schema side of things. My recent experience with SSDT has not given me any hope of such a strategy existing, but I would love to find out differently.
I've been working on this myself, and I can tell you it's not easy.
First, to address the reply by JT - you cannot dismiss "versions", even with the declarative updating mechanics that SSDT has. SSDT does a "pretty decent" job (provided you know all the switches and gotchas) of moving any source schema to any target schema, and it's true that this doesn't require versioning per se, but it has no idea how to manage "data motion" (at least not that I can see!). So, just like DBProj, you're left to your own devices in Pre/Post scripts. Because the data motion scripts depend on a known start and end schema state, you cannot avoid versioning the DB. The "data motion" scripts, therefore, must be applied to a versioned snapshot of the schema, which means you cannot arbitrarily update a DB from v1 to v8 and expect the data motion scripts for v2 to v8 to work (presumably, you wouldn't need a v1 data motion script).
Sadly, I can't see any mechanism in SSDT publishing that allows me to handle this scenario in an integrated way. That means you'll have to add your own scaffolding.
The first trick is to track versions within the database (and SSDT project). I started using a trick in DBProj, and brought it over to SSDT, and after doing some research, it turns out that others are using this too. You can apply a DB Extended Property to the database itself (call it "BuildVersion" or "AppVersion" or something like that), and apply the version value to it. You can then capture this extended property in the SSDT project itself, and SSDT will add it as a script (you can then check the publish option that includes extended properties). I then use SQLCMD variables to identify the source and target versions being applied in the current pass. Once you identify the delta of versions between the source (project snapshot) and target (target db about to be updated), you can find all the snapshots that need to be applied. Sadly, this is tricky to do from inside the SSDT deployment, and you'll probably have to move it to the build or deployment pipeline (we use TFS automated deployments and have custom actions to do this).
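As a rough T-SQL sketch of that extended-property trick, assuming a SQLCMD variable named $(TargetBuildVersion) supplied by the pipeline (the property and variable names here are only examples):
-- Stamp the build version on the database itself (run from a deployment script).
-- $(TargetBuildVersion) is an assumed SQLCMD variable, not an SSDT built-in.
IF NOT EXISTS (SELECT 1 FROM sys.extended_properties WHERE class = 0 AND name = N'BuildVersion')
    EXEC sp_addextendedproperty @name = N'BuildVersion', @value = N'$(TargetBuildVersion)';
ELSE
    EXEC sp_updateextendedproperty @name = N'BuildVersion', @value = N'$(TargetBuildVersion)';

-- Read the version currently on a target database (e.g. from the build/deployment pipeline).
SELECT CAST(value AS nvarchar(32)) AS BuildVersion
FROM sys.extended_properties
WHERE class = 0 AND name = N'BuildVersion';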
The next hurdle is to keep snapshots of the schema with their associated data motion scripts. In this case, it helps to make the scripts as idempotent as possible (meaning, you can rerun the scripts without any ill side-effects). It helps to split scripts that can safely be rerun from scripts that must be executed one time only. We're doing the same thing with static reference data (dictionary or lookup tables) - in other words, we have a library of MERGE scripts (one per table) that keep the reference data in sync, and these scripts are included in the post-deployment scripts (via the SQLCMD :r command). The important thing to note here is that you must execute them in the correct order in case any of these reference tables have FK references to each other. We include them in the main post-deploy script in order, and it helps that we created a tool that generates these scripts for us - it also resolves dependency order. We run this generation tool at the close of a "version" to capture the current state of the static reference data. All your other data motion scripts are basically going to be special-case and most likely will be single-use only. In that case, you can do one of two things: you can use an IF statement against the db build/app version, or you can wipe out the 1 time scripts after creating each snapshot package.
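As a cut-down sketch, one of those idempotent reference-data MERGE scripts might look like the following for a hypothetical lookup table (it would then be pulled into the post-deployment script with the SQLCMD :r include, in dependency order):
-- Reference data for a made-up dbo.OrderStatus lookup table; safe to rerun on every deployment.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS source (StatusId, StatusName)
ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;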
It helps to remember that SSDT will disable FK check constraints and only re-enable them after the post-deployment scripts run. This gives you a chance to populate new non-null fields, for example (by the way, you have to enable the option to generate temporary "smart" defaults for non-null columns to make this work). However, FK check constraints are only disabled for tables that SSDT is recreating because of a schema change. For other cases, you are responsible for ensuring that data motion scripts run in the proper order to avoid check constraint complaints (or you manually have to disable/re-enable them in your scripts).
DACPAC can help you because DACPAC is essentially a snapshot. It will contain several XML files describing the schema (similar to the build output of the project), but frozen in time at the moment you create it. You can then use SQLPACKAGE.EXE or the deploy provider to publish that package snapshot. I haven't quite figured out how to use the DACPAC versioning, because it's more tied to "registered" data apps, so we're stuck with our own versioning scheme, but we do put our own version info into the DACPAC filename.
I wish I had a more conclusive and exhaustive example to provide, but we're still working out the issues here too.
One thing that really sucks about SSDT is that unlike DBProj, it's currently not extensible. Although it does a much better job than DBProj at a lot of different things, you can't override its default behavior unless you can find some method inside of pre/post scripts of getting around a problem. One of the issues we're trying to resolve right now is that the default method of recreating a table for updates (CCDR) really stinks when you have tens of millions of records.
-UPDATE: I haven't seen this post in some time, but apparently it's been active lately, so I thought I'd add a couple of important notes: if you are using VS2012, the June 2013 release of SSDT now has a Data Comparison tool built-in, and also provides extensibility points - that is to say, you can now include Build Contributors and Deployment Plan Modifiers for the project.
I haven't really found any more useful information on the subject but I've spent some time getting to know the tools, tinkering and playing, and I think I've come up with some acceptable answers to my question. These aren't necessarily the best answers. I still don't know if there are other mechanisms or best practices to better support these scenarios, but here's what I've come up with:
The Pre- and Post-Deploy scripts for a given version of the database are only used to migrate data from the previous version. At the start of every development cycle, the scripts are cleaned out, and as development proceeds they get fleshed out with whatever SQL is needed to safely migrate data from the previous version to the new one. The one exception here is static data in the database. This data is known at design time and maintains a permanent presence in the Post-Deploy scripts in the form of T-SQL MERGE statements. This helps make it possible to deploy any version of the database to a new environment with just the latest publish script. At the end of every development cycle, a publish script is generated from the previous version to the new one. This script will include generated SQL to migrate the schema and the hand-crafted deploy scripts. Yes, I know the Publish tool can be used directly against a database, but that's not a good option for our clients. I am also aware of dacpac files but I'm not really sure how to use them. The generated publish script seems to be the best option I know for production upgrades.
So to answer my scenarios:
1) To upgrade a database from v3 to v8, I would have to execute the generated publish script for v4, then for v5, then for v6, etc. This is very similar to how we do it now. It's well understood and Database Projects seem to make creating/maintaining these scripts much easier.
2) When the schema changes from underneath the data, the Pre- and Post-Deploy scripts are used to migrate the data to where it needs to go for the new version. Affected data is essentially backed up in the Pre-Deploy script and put back into place in the Post-Deploy script (there's a rough sketch of this after the list).
3) I'm still looking for advice on how best to work with these tools in these scenarios and others. If I got anything wrong here, or if there are any other gotchas I should be aware of, please let me know! Thanks!
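For what it's worth, here's a rough sketch of the back-up-and-restore pattern from (2), with made-up table and column names (the pre-deployment part stashes the data before SSDT changes the schema, the post-deployment part puts it back and cleans up):
/* Pre-Deployment script: stash data from the (hypothetical) dbo.Customer.Phone column
   before the schema change drops it. */
IF OBJECT_ID('dbo.Customer_PhoneBackup') IS NULL
    SELECT CustomerId, Phone
    INTO dbo.Customer_PhoneBackup
    FROM dbo.Customer;

/* Post-Deployment script: move the stashed data into the new column, then clean up. */
IF OBJECT_ID('dbo.Customer_PhoneBackup') IS NOT NULL
BEGIN
    UPDATE c
    SET c.PhoneNumber = b.Phone
    FROM dbo.Customer AS c
    JOIN dbo.Customer_PhoneBackup AS b ON b.CustomerId = c.CustomerId;

    DROP TABLE dbo.Customer_PhoneBackup;
END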
In my experience of using SSDT the notion of version numbers (i.e. v1, v2...vX etc...) for databases kinda goes away. This is because SSDT offers a development paradigm known as declarative database development which loosely means that you tell SSDT what state you want your schema to be in and then let SSDT take responsibility for getting it into that state by comparing against what you already have. In this paradigm the notion of deploying v4 then v5 etc.... goes away.
Your pre and post deployment scripts, as you correctly state, exist for the purposes of managing data.
Hope that helps.
JT
I just wanted to say that this thread so far has been excellent.
I have been wrestling with the exact same concerns and am attempting to tackle this problem in our organization, on a fairly large legacy application. We've begun the process of moving toward SSDT (on a TFS branch) but are at the point where we really need to understand the deployment process, and managing custom migrations, and reference/lookup data, along the way.
To complicate things further, our application is one code-base but can be customized per 'customer', so we have about 190 databases we are dealing with, for this one project, not just 3 or so as is probably normal. We do deployments all the time and even set up new customers fairly often. We rely heavily on PowerShell now with old-school incremental release scripts (and associated scripts to create a new customer at that version). I plan to contribute once we figure this all out, but please share whatever else you've learned. I do believe we will end up maintaining custom release scripts per version, but we'll see. The idea about maintaining each script within the project, and including a From and To SqlCmd variable, is very interesting. If we did that, we would probably prune along the way, physically deleting the really old upgrade scripts once everybody was past that version.
BTW - Side note - On the topic of minimizing waste, we also just spent a bunch of time figuring out how to automate the enforcement of proper naming/data type conventions for columns, as well as automatic generation for all primary and foreign keys, based on naming conventions, as well as index and check constraints etc. The hardest part was dealing with the 'deviants' that didn't follow the rules. Maybe I'll share that too one day if anyone is interested, but for now, I need to pursue this deployment, migration, and reference data story heavily. Thanks again. It's like you guys were speaking exactly what was in my head and looking for this morning.

Why isn't database version control considered as important as application version control?

I've recently started using Kiln Source Control for all my projects' VB.NET code, and I don't know how I managed without it!
I've been looking for a database source control, for all my stored procedures, UDFs etc. However, I've found that there is not as much available for database version control as there is for my web files.
Why is database version control not considered as important as my web files? Surely all the programming in my database is just as important as the code in my code-behind and .aspx files?
Version controlling database objects IS important!
However, maybe it isn't considered as important because some people see a database merely as a tool that assists them? And external tools (normally) aren't version controlled.
One thing I've found hard to manage is the release process. Right now we're using Red Gate's SQL Source Control connected to SVN. When it's time for a release we do the same as with the rest of the code: merge from one branch to the other. Then to deploy it we use SQL Compare to create a diff script between the merged revision and the actual database. Aside from some minor quirks and beginner's mistakes, I think this works well in an environment where there is no downtime (purposefully ;)) and which has a high-speed development process (lots of releases).
You can maintain your database artifacts in your version control system as well.
A version control system is for versioning artifacts, and an artifact can be program code or a database. We used the same VCS for code and database.
VCSs are designed to store versions of text. They can store binaries, but it is less efficient. And the DB state itself is not text and can't even be directly stored as a binary. You can store the SQL code, though.
One solution is to store a full DB dump (SQL or binary); another is to store sequences of SQL scripts that change one DB state to the next. The second approach can be automated in some environments, where it is called migrations. If you want a separate, specific VCS for a DB, you can think of migration tools as such a VCS.
There are also tools that can compare two DB states and produce a diff that is able to change the first state to the second.
I suppose it depends on whether you manage the database changes (like schema changes, or migrations as mentioned by wRAR) as part of the source code repository, in the form of SQL scripts or other formats through the use of other tools, or whether you consider this database administration and handle it using traditional methods of backup/restore.
In my experience so far, although I wouldn't consider database management as any less important, it does happen on a very low frequency as compared to actual code changes. Your case is clearly different, but a combination of script files and database tools should take care of that.
Here's the reality.
Database version control -- that is to say DDL, DML, and even data for necessary reference data required for an application to have basic functionality - is as important as all other application assets under version control. Databases should never be under any special exception where it is considered acceptable for their assets (objects and necessary reference data) to not be under version control. Ever.
So why have they been? Simple. The toolchains that make managing those assets simple haven't always been up to snuff (in the case of SQL Server, prior to Visual Studio 2008 it wasn't shipped with first-party tools from Microsoft), and the toolchains differ on a vendor-by-vendor basis. When those toolchains are deficient, unless the organization steps up to cover that deficiency, the deficiency remains. It's technical debt, and some organizations do not prioritize it, due either to time or (sadly) to skill, when the tools to make it easy don't exist or require integrating third-party tools into the development workflow.
The worst is trying to bring older projects under version control, since you have to bring everyone kicking and screaming with you all at once, in addition to selling that value to the business. I won't disagree that there may be more pressing immediate needs of the business, but getting database assets under version control needs to be somewhere on that list, even if it's a lower priority.
There's no excuse. I've fought more than enough project managers, data architects, and even CIOs/CTOs on this -- I've even made it a point to have detractors kicked off of every project I'm working on. It needs to be done, and if it's not there needs to be a timeline that the business will agree to in which it will be done. Those who argue against it need to be shot in the face, and survivors need to be shot again.

Get my database under Version Control using a DVCS [Mercurial]

What would be the best approach for versioning my whole database?
Creating a file for each database object (table, view, procedure, ...), or rather having one file for all DDL scripts, with any new change put in a separate file?
What about handling changes made in a database manager tool?
I'd like to have a generic solution for any kind of RDBMS.
Are there any other options ?
I'm a huge VCS fan in general and a big Mercurial booster, but I really think you're going down the wrong path.
VCSs aren't just about iterative changes, the "what"; they're also about answering the "who", "when", and "why". For a database, those answers are a lot less interesting or harder to provide to the VCS. If you're doing nightly exports and commits, the "who" will always be "cron" and the "why" will always be "midnight".
The other thing modern VCSs do really well is helping you merge changes from multiple branches. That's less applicable in the database world. Very seldom do you say "I want this table structure, but this data", and if you do the text/diff merge isn't going to help you much.
The thing that does do "what" and "when" very well is an incremental backup system, and that's probably the better fit.
At work we use Tivoli and at home I use rdiff-backup and duplicity, but there are plenty of great options.
I guess my general rule of thumb is "if it was typed by hand by a human then it goes into source control, and if it was generated/exported then it goes into the incremental backups"
Certainly you can make this work, but I don't think it will buy you much over the more traditional backup solutions.
Have a look at this post
If you need a generic solution, put everything in scripts (simple text files) and put them under a version control system (any VCS can be used).
How you group similar database objects into scripts will depend on your requirements.
So you may, for example:
Store tables/indexes in one or several scripts
Store each procedure in an individual script, or combine small procedures into one script.
However, you need to remember one important thing with this approach: don't forget to change the scripts if you changed a table/view/procedure directly in the database, and don't forget to recreate/recompile your db objects in the database after changing the scripts.
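For example, each per-object script can be written so it is safe to rerun after every change (the syntax below is the SQL Server flavour and the object names are made up; other RDBMSs have their own equivalents such as CREATE OR REPLACE):
-- dbo.GetActiveUsers.sql : one file per object, kept under version control (names are hypothetical).
IF OBJECT_ID('dbo.GetActiveUsers', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetActiveUsers;
GO
CREATE PROCEDURE dbo.GetActiveUsers
AS
BEGIN
    SELECT UserId, UserName
    FROM dbo.Users
    WHERE IsActive = 1;
END
GO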
SQL Source Control currently supports SVN and TFS, but Mercurial requests are increasing rapidly and we're hoping to have a story for this very soon.
We use UserVoice to measure demand, so please vote accordingly if you're interested in this: http://redgate.uservoice.com/forums/39019-sql-source-control

Why use database migrations instead of a version controlled schema

Migrations are undoubtedly better than just firing up phpMyAdmin and changing the schema willy-nilly (as I did during my PHP days), but after using them for a while, I think they're fatally flawed.
Version control is a solved problem. The main function of migrations is to keep a history of changes to your database. But storing a different file for each change is a clumsy way to track them. You don't create a new version of post.rb (or a file representing the delta) when you want to add a new virtual attribute -- why should you create a new migration when you want to add a new non-virtual attribute?
Put another way, just as you check post.rb into version control, why not check schema.rb into version control and make the changes to the file directly?
This is functionally the same as keeping a file for each delta, but it's much easier to work with. My mental model is "I want table X to have such and such columns (or really, I want model X to have such and such properties)" -- why should you have to infer from this how to get there from the existing schema; just open up schema.rb and give table X the right columns!
But even the idea that classes wrap tables is an implementation detail! Why can't I just open up post.rb and say:
class Post
t.string :title
t.text :body
end
If you went with a model like this, you'd have to make a decision about what to do with existing data. But even then, migrations are overkill -- when you migrate data, you're going to lose fidelity when you use a migration's down method.
Anyway, my question is, even if you can't think of a better way, aren't migrations kind of gross?
why not check schema.rb into version control and make the changes to the file directly?
Because the database itself is not in sync with version control.
For instance, you could be using the head of the source tree. But you're connecting to a database that was defined as some past version, not the version you have checked out. The migrations allow you to upgrade or downgrade the database schema from any version and to any version, incrementally.
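In plain SQL terms, each migration is just a small pair of scripts (the file names and table below are made up):
-- 0042_add_published_at_to_posts.up.sql  (example; "posts" is a hypothetical table)
ALTER TABLE posts ADD COLUMN published_at timestamp;

-- 0042_add_published_at_to_posts.down.sql
ALTER TABLE posts DROP COLUMN published_at;
To move a database between two versions, you apply the missing "up" scripts in order, or the "down" scripts in reverse order, which is exactly the incremental upgrade/downgrade described above.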
But to answer your last question, yes, migrations are kind of gross. They implement a redundant revision control system on top of another revision control system. However, neither of these revision control systems is really in sync with the database.
Just to paraphrase what others have said: migrations allow you to protect the data as your schema evolves. The notion of maintaining a single schema.rb file is attractive only until your app goes into production. Thereafter, you'll need a way to migrate your existing users' data as your schema changes.
There are also data-related issues that are important to consider, which migrations solve.
Say an old version of my schema has a feet and inches column. For efficiency purposes, I want to combine that into just an inches column to make sorting and searching easier.
My migration can combine all of the feet and inches data into the inches column (feet * 12 + inches) while it's updating the database (i.e. just before it removes the feet column)
Obviously this being in a migration makes it automatically work when you later apply the changes to your production database.
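In SQL terms, the data step of that migration boils down to something like this (the table name is made up):
-- Fold feet into inches before the feet column is dropped ("measurements" is a hypothetical table).
UPDATE measurements
SET inches = feet * 12 + inches;

ALTER TABLE measurements DROP COLUMN feet;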
As it stands, they're annoying and inadequate but quite possibly the best option we have available to us at present. Quite a few smart people have spent quite a lot of time working on the problem and this, so far, is about the best they've been able to come up with. After about 20 years of mostly hand-coding database version updates, I came very rapidly to appreciate migrations as a major improvement when I found ActiveRecord.
As you say, version control is a solved problem. Up to a point I'd agree: it's very solved for text files in particular, less so for other file types and not really very much at all for resources such as databases.
How do migrations look if you view them as version control deltas for databases? They're the sum of the deltas you have to apply to get a schema from one version to another. I'm not aware that even git, for all its super-powerfulness, can take two schema files and generate the necessary DDL to do that.
As far as declaring table content in the model, I believe that's what DataMapper does (no personal experience). I think there may be some DDL inference capabilities there as well.
"even if you can't think of a better way, aren't migrations kind of gross?"
Yes. But they're less gross than anything else we have. Do please let us know when you've completed the non-gross alternative.
I suppose given "even if you can't think of a better way", then yes, in the grand scheme of things, migrations are kind of gross. So are Ruby, Rails, ORMs, SQL, web apps, ...
Migrations have the (not insignificant) advantage that they exist. Gross-but-exists tends to win out over Pleasant-but-nonexistent. I'm sure there probably are pleasant and nonexistent ways to migrate your data, but I'm not sure what that means. :-)
OK, I'm going to take a wild guess here and say that you're probably working all by yourself. In a group development project the power of each individual to take responsibility for just his/her changes to the database required for the code that developer is writing is much much more important.
The alternative is that larger groups of programmers (e.g. 10-15 Java developers where I work) end up relying on a couple of dedicated full time database administrators to do that along with their other maintenance, optimization, etc. duties.