Does anyone have experience using the new p4 replicate command in their Perforce backup/restore script? - backup

We recently upgraded our whole Perforce system to 2009.2.
During this exercise, we noticed that the backup/restore process that was set up here by a Perforce consultant a year ago was not completely working. Basically, the verify command has never worked (scary!).
Since we are obliged to revisit our backup/restore scripts anyway, I was toying with the idea of using the new p4 replicate command. The idea is to use it alongside an rsync of the versioned files, so that in case of a crash we would lose at worst an hour of work (if we run both every hour).
Does anyone have experience with, or an example of, backup/restore scripts using the p4 replicate command from the 2009.2 release?
Thanks,
Thomas

Your question is almost one year old, but if you are still curious, I implemented exactly what you are talking about. I did it around the time you asked the question, but I didn't know about this site.
rsync'ing alongside metadata replication works great - we still use it in production.
The replica's age varies, but most of the time it lags behind by only a few minutes.
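Roughly, the shape of it is the following (the paths, port and p4d replay flags below are placeholders rather than our production script - check p4 help replicate and the 2009.2 release notes for the exact invocation on your server):
# On the standby machine: poll the master's journal and replay the records
# into the standby's metadata (db.*) files. Flags are illustrative only.
p4 -p master:1666 replicate -s /p4/standby/statefile -i 3600 p4d -r /p4/standby -jr -
# On the same hourly schedule, mirror the versioned (depot) files.
rsync -a --delete master:/p4/master/depot/ /p4/standby/depot/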
--Mike

FWIW, in the latest version, 2010.2, the p4 pull command copies the versioned files as well as the metadata, so there's no need for a separate rsync.
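On a 2010.2 replica that looks roughly like this (illustrative only; see p4 help pull - in practice these usually run as startup commands on the replica rather than by hand):
p4 pull -i 1      # replay metadata journal records from the master
p4 pull -u -i 1   # also transfer the versioned file content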

Related

Multi-Version Code Support in Git

I have been working on a big SQL-based project that is taking an increasing amount of time and effort to maintain across its versions. Let's keep it simple. I have three folders, one for each version of the code, called Ver1, Ver2, and Ver3. All three version folders contain exactly the same filenames, but their content differs from version to version. If I make a change to a particular file in Ver3 that also exists in Ver2 and Ver1, how can I use Git not necessarily to make the same changes in those other versions (not always practical due to partial rewrites for performance or logic changes), but to let me know that the other two versions of the file need to be updated in order to catch up to Ver3? If Git isn't suited for this task, or if you have any experience with a similar issue, I would much appreciate any suggestions.
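To make it concrete, the kind of check I am after (assuming the folder layout above is one Git repository; report.sql is just an example file name) would be something along these lines:
# Compare Ver3's copy of a file against the other two versions.
git diff --no-index Ver2/report.sql Ver3/report.sql
git diff --no-index Ver1/report.sql Ver3/report.sql
# Or see which copy was touched last, to spot the ones lagging behind.
git log -1 --format='%h %ad %s' -- Ver3/report.sql
git log -1 --format='%h %ad %s' -- Ver2/report.sql
git log -1 --format='%h %ad %s' -- Ver1/report.sql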

Tiny CLion projects slow to build a second time

I'm a Community College instructor grading student C++ coding assignments. Been doing the same task all semester. Suddenly, this morning, CLion is building extremely slowly, perhaps even hanging, the second time I build/run a project. WTF? The projects are very small. One source file, one header, no libraries.
What changed? And why would a second build be the problem? It's usually first builds that are slow.
What changed? My hard-drive backup software. I told my auto-backup to take two hours off and have had no problem since.
I was recently forced to switch from the Code42 CrashPlan software I've been using for years to Carbonite. Code42 is getting out of the end-user backup business.
Note that I believe this problem is at least 95% user error, in how I configured my backup, and max 5% anything to do with Carbonite's implementation. Maybe their file locking strategy is different from CrashPlan's; I don't know.
I did think twice before I configured my CLion Projects folder to be backed up. I knew that backing up object files would be a waste of backup cycles and space. But I was in a hurry and wanted my solution source code backed up by Carbonite until I could check it into a repository of some sort at the end of the semester. I'm pretty sure I can go in and refine my backup strategy to NOT include the object/executable folders.
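Presumably the folders to exclude are CLion's default CMake output directories, cmake-build-debug and cmake-build-release, inside each project (assuming the default build profile names). A quick way to see how much space they account for before excluding them:
# Sizes of the generated build folders (adjust the projects path to taste).
du -sh ~/CLionProjects/*/cmake-build-*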

Xcode mixing Branches

This may sound strange, but I am having issues with Xcode mixing branches, or at least messing everything up.
I created a new branch (v1.4), then created a new data model and renamed an entity. I had to switch back to the previous branch (v1.3) to check something, and now I get errors at run time on v1.3: it's looking for the new entity name from branch v1.4 - what the %$^#% is happening? I searched the files; the new entity name is nowhere in v1.3.
I switched branches again, to v1.2, and it ran fine. So I switched to v1.3 again - no go. Then I switched back to v1.2 and now it has the same issue: a runtime error because it can't find the new entity name.
What is happening? Anyone else have this issue?
Any thoughts/suggestions would be greatly appreciated.
OS X 10.11.6
Xcode 7.1.1
==[EDIT]==
I am not very familiar with Git, just starting to learn. I ran the couple of commands mentioned and get nothing from either git diff command.
Running git status --ignored, I do get multiple files listed as untracked - still working on understanding why (separate issue) - a couple of object files and 2 data models (it was 3, but I manually added one to the commit).
Also I get 3 ignored files:
.DS_Store
projectName.xcodeproj/project.xcworkspace/xcuserdata/
projectName/.DS_Store
That's as far as I've gotten. I'm not familiar enough with git to know if these ignored files are the ones I should delete.
Second option - would restoring from Time Machine fix this? It may be a little extra work for me to recreate v1.4, but probably less time than I've already spent trying to figure out how to fix it.
I do appreciate both comments so far - thank you.
==[SECOND EDIT]==
Thanks again for your comments.
However, due to time and schedule I performed a Time Machine restore before ElpieKay posted the last comment, so I will not be able to test it.
Reverting did "fix" it, as you'd expect, though I lost several hours of work - but life happens. I will keep this around for if/when it happens again and try git clean -df to see if it fixes it.
On a side note - while I was switching back and forth between v1.3 and v1.4 trying to figure this out, 2 of the model versions disappeared from v1.4 - i.e. the name turned red in Xcode, and when I viewed the contents of the file they were missing. I do not know if this is related or not, but I thought I would mention it. This happened one other time and I thought maybe I had done something - I did a Time Machine restore to fix it last time. I wonder if git clean -df would have fixed it.
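For reference, git clean can be dry-run first, so next time I can see exactly what it would remove before deleting anything:
git status --ignored   # list untracked and ignored files
git clean -nd          # dry run: show what "git clean -df" would remove
git clean -df          # then actually remove untracked files and directories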

Design principles as to how Linux repository managers update themselves?

I know there are other applications as well, but let's consider yum/apt-get/aptitude/pacman as the core package managers for Linux distributions.
Today I saw this on my Fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than programmers.SE for such a question, since it is more technical in nature. If there is a more appropriate place for it, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern Unix systems will generally tolerate replacing a running executable without a hiccup, so in theory you could just do it.
You could do the update in a chroot jail and then move the result into place, or something similar, to reduce the window during which the system is vulnerable. Add a journaling filesystem and this is a little safer still.
It occurs to me that the package manager also needs to hold the package database in memory to guard against a race condition there. Again, the chroot-jail-and-copy option is available as a lower-risk alternative.
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference-counting inodes, so "you" can delete a file that is still in use, and it's fine. However, this implies a few things you have to do; for instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version by the end).
There are also some things you need to do to make sure that anything you are updating keeps working, like: put new files down before removing old files, and don't truncate old files, just unlink them. But those also help you :).
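A quick way to see the unlink-versus-truncate point in action (using a copy of sleep as a stand-in for any running binary):
cp /bin/sleep ./demo
./demo 60 &           # the running process keeps a reference to the old inode
rm ./demo             # unlink: the name disappears, the inode lives on until the process exits
cp /bin/sleep ./demo  # the replacement gets a fresh inode; the running copy is untouched
wait                  # the old process still finishes normally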
Using external programs that you communicate with can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can somewhat easily be made to happen before any updates.
There are also things which aren't a concern for command-line clients like yum/apt; for instance, if you have a program which is going to run 2+ "updates", then you can have problems if the first update was to the package manager itself. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas ... you tend to want to follow this rule anyway, for other reasons.

Mercurial practices: use with IDEs and scalability

I am not an experienced user of SCM tools, even though I am convinced of their usefulness, of course.
I used some obscure commercial tool in a former job, Perforce in the current one, and played a bit with TortoiseSVN for my little personal projects, but I disliked having lots of .svn folders all over the place, making searches, backups and such more difficult.
Then I discovered the interest of distributed SCM and I chose to go the apparently simpler (than git) Mercurial way, still for my personal, individual needs. I am in the process of learning to use it properly, having read part of the wiki and being in the middle of the excellent PDF book.
I often see repeated, for example in Mercurial working practices, "don't hesitate to use multiple trees locally. Mercurial makes this fast and light-weight." and "for each feature you work on, create a new tree."
This is interesting and sensible advice, but it clashes a bit with my habits from centralized SCM, where we have a "holy" central repository whose branches are carefully planned (and handled by administrators), where changelists must be reviewed by (senior) peers and must not break the builds, etc. :-) Starting to work on a new branch takes quite some time...
So I have two questions in the light of above:
How practical is it to do lots of clones, in the context of IDEs and such? What if the project has configuration/settings files, makefiles, Ant scripts, shell scripts or whatever that need path updates? (Yes, probably a bad idea...) For example, in Eclipse, if I want to compile and run a clone, I have to create yet another project, tweaking the Java build path, the Run/Debug targets, and so on. Unless an Eclipse plugin eases that task. Am I missing some facility here?
How does that scale? I have read that Hg is fine for large code bases, but I am perplexed. At my job, we have a Java application (well, several built around a big common kernel) of some 2 million lines, weighing some 110 MB for the code alone. Doing a clean compile on my old (2004) Windows workstation takes some 15 minutes to generate the 50 MB of class files! I don't see myself cloning the whole project just to change 3 files. So what are the practices here?
I haven't yet seen these questions addressed in my readings, so I hope this will make a useful thread.
You raise some good points!
How practical is it to do lots of clones, in the context of IDEs and such?
You're right that it can be difficult to manage many clones when IDEs and other tools depend on absolute paths. Part of it can be solved by always using relative paths in your configuration files -- making sure that a source checkout can compile from any location is a good goal in itself, no matter what revision control system you use :-)
But when you cannot or don't want to bother with several clones, then please note that a single clone can cope with multiple branches. The "hgbook" emphasizes many clones since this is a conceptually simple and very safe way of working. When you get more experience you'll see that you can use multiple heads in a single repository (perhaps naming them with bookmarks) to do the same.
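A minimal example of that style (bookmark names are just placeholders; on older Mercurial versions the bookmarks extension has to be enabled first, and the tracking behaviour differs slightly):
hg bookmark main          # label the current mainline head
hg bookmark bug-434       # second bookmark at the same point, for the fix
... edit, hg commit ...   # commits advance the active bookmark (bug-434)
hg update main            # back to the untouched mainline
hg update bug-434         # and over to the fix again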
How does that scale?
Cloning a 110 MB repository should be quite fast: it depends on how long it takes to write 110 MB to your disk. In a recent message to the Mercurial mailing list, it was reported that cloning 6.3 GB took 4 minutes -- scaling that down to 110 MB gives about 4 seconds. That should be fast enough that your tea is still warm :-) Part of the trick is that the history data are simply hard-linked (yes, also on Windows), so it is only a matter of writing out the files in the working copy.
PhiLo: I'm new at this, but Mercurial also has "internal branches" that you can use within a single repository instead of cloning it.
Instead of
hg clone toto toto-bug-434
you can do
cd toto
hg branch bug-434
hg update bug-434
...
hg commit
hg update default
to create a branch and switch back and forth. Your built files that are not under revision control won't go away when you switch branches; some of them will just go out of date as the underlying sources are modified. Your IDE will rebuild what's needed and no more. It works much like CVS or Subversion.
You should still have clean 'incoming' and 'outgoing' repositories in addition to your 'work' repository. Just that your 'work' can serve multiple purposes.
That said, you should clone your work repo before attempting anything intricate. If anything goes wrong you can throw the clone away and start over.
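For example (the URL is only a placeholder):
hg clone http://central.example.com/repo incoming   # pristine mirror of upstream
hg clone incoming work                               # where the day-to-day editing happens
hg clone work work-risky                             # cheap throwaway copy before something intricate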
Question 1:
PIDA IDE has pretty good Mercurial integration. We also use Mercurial for development itself. Personally I have about 15 concurrent clones going of some projects, and the IDE copes fine. We don't have the trouble of tweaking build scripts etc, we can "clone and go".
It is so easy that in many cases I will clone to the bug number like:
hg clone http://pida.co.uk/hg pida-345
for bug #345, and I am ready to fix it.
If you are having to tweak build scripts depending on the actual checkout directory of your application, I might suggest that your build scripts should be using some kind of project-relative path rather than hard-coded paths.