We have an existing Trac installation for an old codebase, and I'm creating a new Trac installation to support a new codebase. Most of the info we've built up over time in the old Trac installation's wiki is equally relevant to the new Trac wiki.
Is there a quick way to migrate the wiki data from the old Trac to the new Trac?
Trac version = 0.10.4
Use trac-admin <trac-env> wiki dump <some-directory> to dump the wiki pages to a directory, then use trac-admin <new-trac-env> wiki load <some-directory> to load the wiki pages into the new environment.
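For example (a minimal sketch; the environment and directory paths are placeholders for your actual setup):

    # Dump all wiki pages from the old environment to a directory
    trac-admin /path/to/old-trac-env wiki dump /tmp/wiki-pages

    # Load them into the new environment
    trac-admin /path/to/new-trac-env wiki load /tmp/wiki-pages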
Note that I don't think this will preserve wiki page history. If you want that, you can copy the database to the new instance and do a resync to the new repository.
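A rough sketch of that approach, assuming the default SQLite backend (paths are placeholders, and Trac should not be running while you copy):

    # Copy the database file from the old environment to the new one
    cp /path/to/old-trac-env/db/trac.db /path/to/new-trac-env/db/trac.db

    # Re-synchronize the new environment with its repository
    trac-admin /path/to/new-trac-env resync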
I would also recommend upgrading to 0.11 if you can. 0.10 is no longer supported, and 0.12 is due out "soonish".
Disclosure: I'm one of the Trac devs
You should be able to use trac-admin to back up the wikis and then restore them on the other instance.
It's all stored in the database, as far as I can remember. You should just be able to export the data from the old database and import it into the new one.
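If you only want the wiki data and are on the default SQLite backend, something like this might work (a sketch, untested; the table name comes from Trac's schema, and it assumes the new wiki has no conflicting pages):

    # Emit the old wiki table as INSERT statements and replay them
    # against the new environment's database
    sqlite3 /path/to/old-trac-env/db/trac.db \
      ".mode insert wiki" "SELECT * FROM wiki;" \
      | sqlite3 /path/to/new-trac-env/db/trac.db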
Related
I want to use Pentaho for my work. After a bit of research I found that to store the ktr/kjb files I can use either a database as a repository or the file system as a repository. However, I can't find any benefit of using a database repository over the file system.
The basic purpose of the repository here is to create a common location where I can keep all the developed ktr/kjb files in the production environment. With a database repository, it holds all the developed ktr/kjb files in production, and every time I need to run a job/transformation I connect to the database to get the respective ktr/kjb file (similar to how Informatica stores transformations). A file-based repository, on the other hand, is just a folder holding all the developed files.
Can somebody explain the pros and cons of both types of repository?
Please let me know if you need any other information.
Thanks in advance.
When several people develop the same jobs/transformations, the database repository holds the changes and ensures everyone works from the latest versions.
The pros of a filesystem are, of course, ease of backup, no database connection to trouble you, and the possibility of using other, more modern and mature version control systems for the files than the database repositories use.
If you are using the free community edition, I would definitely go with the file repository, along with external file-based version control and migration systems. If you are using the enterprise edition, then you might want to consider the database repository, since you can then use Pentaho's built-in version control and migration systems.
We have a productive GitLab 6.8.1 running. I've set up a parallel VM with GitLab 7.10.4. Now I want to move all data from the old installation to the new one. I've already found a way to move the bare repositories, but I have no clue how to import the user account information, issues, etc.
EDIT: The thing is further complicated by the fact that the original installation was built from source, ran on Debian, used MySQL as its database, and the whole installation was pretty much messed up. That's why I didn't manage to migrate the old server and decided to set up a new one. The new server is an Ubuntu machine with GitLab installed from an apt-get package (I think that's Omnibus, but I'm not sure what that means). The new installation seems to use PostgreSQL.
FYI, you haven't specified whether the old or new server is running a source installation or Omnibus, or whether you're running a MySQL or Postgres database. Instructions differ depending on these factors, so please clarify and I will update my answer.
The first thing is that you will need your old and new servers to be on the same version of GitLab. You cannot migrate anything other than repos without having synchronized versions.
Depending on your reply to the above, you will either follow instructions similar to the backup and restore tasks or simply run the backup and restore tasks. Both options generally require you to manually copy configuration files or migrate settings from multiple files to a single new file (in the case of going from a source install to Omnibus). The Omnibus upgrade guide above lists the configuration files that need to be migrated depending on your environment.
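For reference, the backup and restore rake tasks look roughly like this (an illustrative sketch; exact invocations vary by version and install type, and <timestamp> is a placeholder for the backup's timestamp, so check the docs for your versions):

    # Omnibus installs
    sudo gitlab-rake gitlab:backup:create
    sudo gitlab-rake gitlab:backup:restore BACKUP=<timestamp>

    # Source installs
    cd /home/git/gitlab
    sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production
    sudo -u git -H bundle exec rake gitlab:backup:restore RAILS_ENV=production BACKUP=<timestamp>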
Update based on edited question: There's a guide specifically for that scenario in this section of the Omnibus upgrade guide, using Option 2. You still need to have the same version on both old and new servers, though, I believe.
Can anyone give a detailed procedure on how to migrate projects from GForge version 4.5 to TeamForge version 5.2.0?
The migration includes the source repository, bug tracking, wiki and discussions. Is it possible to shift all of them?
What's the best way to handle a situation like this?
Thank you
TeamForge uses Subversion, so assuming you have command-line access to the server, you can use the svnadmin dump and svnadmin load commands.
You will need to run these commands for each repository.
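A sketch for a single repository (the paths are placeholders):

    # On the old server: serialize the repository
    svnadmin dump /svn/old-repo > old-repo.dump

    # On the new server: create an empty repository and load the dump
    svnadmin create /svn/new-repo
    svnadmin load /svn/new-repo < old-repo.dump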
Some (I don't know which) of the pages and wiki content are also stored in Subversion repositories, so you may be able to migrate those using the same method.
So we have this project which uses Mantis as its bug tracker, and the company's corporate bug tracking tool is Bugzilla. This means we will have to use Bugzilla soon.
I searched for tools that can be used to migrate from Mantis to Bugzilla, and I only found this m2bz tool, which seems to work for Mantis 0.17.5 and Bugzilla 2.16.3 but also seems to have been dead since 2003...
Have you guys already tried such a migration? The Mantis version used is 1.1.8 and the Bugzilla one is 3.0.1.
Thanks in advance!
You can load data with importxml.cgi, which implies that you only need to dump your existing database into the proper XML for the migration.
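The import step itself is then roughly the following (a sketch only; in Bugzilla 3.x the importer ships as importxml.pl in the installation root, so verify the script name and options for your version, and the XML filename is a placeholder):

    # Run from the Bugzilla installation directory
    perl importxml.pl exported-bugs.xml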
We usually migrate other Bugzilla installations into our big Bugzilla database with a script that copies the data from one DB to another, mapping bug IDs, users, etc. I tried to do much the same thing when I had to migrate JIRA stuff, and it turned out to be a major PITA!
I would have been much, much better off working on how to dump JIRA in the correct XML.
The data model changed a lot between 2.16 and 3.X, so whatever m2bz tool you found probably won't do what you want.
Do you know the database structure for Mantis? You can import SQL scripts into the Bugzilla database to do the migration yourself.
The Bugzilla database schema is documented here:
http://www.faqs.org/docs/bugzilla/dbschema.html
I couldn't find it for Mantis though :-(
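If it helps, you can pull the raw data out of Mantis for inspection and hand-translation (a sketch assuming MySQL and Mantis's default table naming; the database and table names are from memory and should be checked against your install):

    # Dump just the data (no CREATE TABLE) for the main bug tables,
    # then rewrite those INSERTs against Bugzilla's schema
    mysqldump --no-create-info mantis \
      mantis_bug_table mantis_bug_text_table mantis_bugnote_table \
      > mantis-bugs.sql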
I have a Trac repository available on a local network and need to take a dump of the Trac data to be able to access it outside that network.
Can anyone suggest a way to do it?
I don't think there is a unified command to dump both the Trac project and a possibly attached SVN repo, but for separate dumping, Trac has a hotcopy command and svn a dump command.
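Something along these lines (a sketch with placeholder paths):

    # Take a consistent snapshot of the Trac environment
    trac-admin /path/to/trac-env hotcopy /path/to/backup/trac-env

    # Separately dump the Subversion repository, if there is one
    svnadmin dump /path/to/svn-repo > svn-repo.dump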
Trac Backup - The Trac project
svn dump manual
Good set of answers on serverfault for this topic - https://serverfault.com/questions/6147/how-do-i-backup-my-trac-instalations