Consolidate Perforce `add` and `edit` file operations - automation

To automate Perforce staging, I face the problem that add and edit are two different operations that apply to files in different SCM states, i.e., already under SCM or not.
This is different from Git, where staging is uniformly done with add.
I'd like to have something like pseudo-code:
filepath = '/path/to/myfile.ext'
if p4.is_under_scm(filepath):
    p4.edit(filepath)
else:
    p4.add(filepath)
or better yet, simply hide the detail with:
p4.staging(filepath)
How can I achieve this by calling the p4 command-line program? I'm not using any programming-language bindings right now.

You might want to use the p4 reconcile command, which automatically opens workspace files for an action that matches their current state relative to the depot.
Keep in mind if you go this route that reconcile operates only on unopened files that differ from the depot version, so it's meant to be used after making local modifications. (This is different from the standard workflow, where you open a file with p4 edit prior to editing it; the idea is that you use reconcile to fix things up after the fact if you've had to work disconnected or something like that.) In addition, if you change your mind about what you're doing with the file (e.g. you delete the local copy after it's been opened for edit but before you submit), you may need to revert -k it and re-reconcile to ensure that it's open for the correct action.
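For example, with a hypothetical depot path, you might preview what reconcile would do and then run it for real:
p4 reconcile -n //depot/project/...    (preview only: shows what would be opened for add/edit/delete)
p4 reconcile //depot/project/...       (actually open the files for the matching action)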
For something that matches the pseudocode in your question, you probably want the p4 have command, which tells you if a local file corresponds to a depot revision (and if so which one). p4 edit only works on a file that you have, whereas p4 add would be for a file in your workspace that does not correspond to an existing depot file. (A very subtle point here -- it's possible for the file to map to a depot file despite not having been synced from the depot! If that's the case you'll hit a conflict when you go to submit your add.)
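If you want something shaped like your pseudocode while still shelling out to the p4 executable, a minimal sketch could look like the following. Python is used here only as scripting glue; the helper names are made up, and the check relies on p4 have exiting non-zero and printing "file(s) not on client." for unsynced files, which you may need to adjust for your client/server setup (and note the caveat above about files that map to the depot but haven't been synced):

import subprocess

def is_under_scm(filepath):
    # 'p4 have' reports the depot revision you have synced for this file;
    # it errors with "file(s) not on client." if the file isn't in your have list.
    result = subprocess.run(['p4', 'have', filepath],
                            capture_output=True, text=True)
    return result.returncode == 0 and 'not on client' not in result.stderr

def staging(filepath):
    # Open the file for whichever action matches its state, like 'git add'.
    if is_under_scm(filepath):
        subprocess.run(['p4', 'edit', filepath], check=True)
    else:
        subprocess.run(['p4', 'add', filepath], check=True)

staging('/path/to/myfile.ext')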

Related

Managing checkouts of same binary file in different branches in Perforce

How do we prevent checking out / changing one binary file in different branches of the same content? A situation like this: designers have edited some game level (a *.umap binary file) in their branch. Programmers changed the same file in their branch (for example, added some blueprint to this game level). So now we have three different versions of this file: one in the master branch before all changes, one in the designers' branch without the programmers' changes, and one in the programmers' branch without the designers' changes. Now we must merge the designers' changes and the programmers' changes into the master branch, but we can't.
So the question is: how do we handle such situations correctly? Maybe we can set up Perforce to check out a binary file in multiple branches at the same time, or something like this? Thanks...
There are a couple of different ways to think about this.
If you don't want work to continue/begin in one branch until changes from another branch have been merged into it, you can use Helix (Perforce) protections to give users read-only access to the branch.
This means they will be able to open files for edit, but won't be able to submit their changes.
More info about protections is here:
https://www.perforce.com/perforce/doc.current/manuals/p4sag/chapter.security.html
The protections would need to be changed, when you are ready for work on the other branches to start.
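For example, the relevant lines of the protections table (edited with p4 protect) might look roughly like this - the group name and depot paths are hypothetical, and you would pick the access level (e.g. read or open) that matches how much you want to allow before the merge is done:

write group level-team * //depot/...
open group level-team * //depot/main/...

Later lines override earlier ones for matching paths, so here the group keeps full access everywhere except //depot/main/..., where they can open files but not submit.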
If you want a file to be automatically checked out on all branches each time someone checks it out on any branch where it exists, you would currently have to script this.
You could do it using the broker and a workspace for every branch, each with a view that includes just the files you want to be checked out everywhere.
The files would then need to be checked out in these workspaces and locked, so that other users can't submit to these branches until the locks are removed.
This is not trivial and may have a performance impact.
You might also be able to do it using pre-command triggers, if your server version is new enough.
If you want to go in to more detail about any of the above, I recommend you contact Perforce Technical Support.
Hope this helps,
Jen.

Possible to branch in Perforce without creating a new folder?

Is it possible to create branches in Perforce in a similar style to Git? I.e. without creating a new folder.
I would prefer for my client to manage the branches transparently whilst I work against a single copy of the directory tree on disk.
It seems awfully wasteful for the client to create an exact copy of the entire tree if you're only modifying say a couple of files. I much prefer Git's workflow in this regard.
If it's not possible using straight Perforce I'm happy to move to GitSwarm.
For info I'm running Perforce version 2015.1/1233444.
Possible, yes, but with the centralized version of the system it involves a bit of 'magic'. Basically, the branch part doesn't need to involve the client at all any more. Take a peek at p4 populate. That'll create another folder on the server, but won't do anything locally. Then you can edit your client workspace to map the branched files instead of the trunk files, and it'll just re-sync over the top of the files on your disk.
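For example, with hypothetical depot paths, the flow might look like this:
p4 populate //depot/main/... //depot/dev/my-branch/...   (branch entirely on the server; nothing happens locally)
p4 client                                                (edit the view so //depot/dev/my-branch/... maps to the same local folder main used)
p4 sync                                                  (re-sync the branched files over the top of what's on disk)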
Now, having said that, if you wanted to take a look at our DVCS way of working, then you can just do "p4 switch -c (new branch name)" and it'll create a new branch locally, switch your workspace over to it (shelving any open current work in the process), and away you go.
My original answer was deleted because I thought a link was a better idea than repeating content. My mistake.
At any rate, I believe the DVCS features in Perforce Helix supply exactly the sort of thing you're after. In a blog I wrote on the subject (link here for reference) I explained how to create a new in-place branch with a single command:
p4 switch -c newBranchName
That will create a new branch with the name "newBranchName" and save any existing work in progress by default. To discover on which branch you're working you can use the switch command with the list argument as follows:
p4 switch -l
That would show you output like this, the asterisk showing that you're now working on the newBranchName branch.
newBranchName *
main
You can switch back and forth as you like, changing contexts as needed as often as you like. Your work in progress will continue to be saved on each branch in progress. When you're ready to merge your work back to main and push it back to the server, you can use the following sequence of commands:
p4 switch main
p4 merge --from newBranchName
p4 resolve -as
The first command switches back to the main branch, the second merges your work from the newly created branch into main, and the third resolves any potential conflicts automatically. If there are any conflicts that can't automatically be merged, then you can use the usual commands to walk through the resolution process.
Alternatively, if you prefer to stick with Git, you can use it directly with our Helix Versioning Engine through our Git Fusion technology, or use Git directly with our new GitSwarm technology. That is a pretty amazing option (in my opinion) as it makes it possible to mirror content automatically and bidirectionally between GitSwarm and the back-end server. That way you get all the features of Git with GitSwarm (which itself is based on GitLab) and all the goodies from the rest of Helix.
Hope that helps!
If you use streams (Perforce's "managed" version of a branch, as opposed to doing completely ad hoc inter-file branching with arbitrary paths), it's pretty simple. As P4Gabe said, "switch -c" is a one-shot option on a local server.
On a shared server it's only a little more complicated because you have to do the "populate" explicitly (this is to keep naive users from accidentally branching lots of files lots of times on a shared server), but it's still only a few steps and it's something that you as an advanced user could script easily:
p4 stream -P (current stream) -t development (new stream name)
p4 populate -r -S (new stream name)
p4 switch (new stream name)
The equivalent is possible using ad hoc ("classic") branches as well if you have a good understanding of how client views work -- use populate to create the new branch, modify your client view to map the new branch into the namespace currently occupied by the old branch, and sync.
This blog post on what exactly "p4 switch" does might help if you're trying to engineer your own solution that's similar-to-but-not-quite the "switch" command: https://www.perforce.com/blog/150428/p4-switch-switching-it

What is the correct way to create a branch in RCS, and do you need to set a lock first?

I am looking for best practices using branches in RCS.
I had read the man page for rcs and ci and also browsed at the following links:
http://www.gnu.org/software/rcs/manual/html_node/Concepts.html
http://www.gnu.org/software/rcs/manual/html_node/Quick-tour.html
Suppose I have revision 1.3 at the tip of the trunk.
I now want to change revision 1.2 (as 1.3 has several other changes I cannot use yet).
I understand I can create branch on revision 1.2 using ci -r1.2.1
My questions are as follows:
1. Do I need to set a lock on the file? If so, on which revision?
2. If no lock is set, I cannot use the -u flag to keep the file in my local directory. If I want to keep it anyway, is that still possible without checking the file out (co) again?
Side note: I feel RCS does not suit my company's needs; however, migrating to another system is not my decision to make, so for now I need to keep working with it.
I'm looking for much the same thing, but seeing you've had no answers, I'll offer my current practice:
I use branches for development, not for keeping different variants going in parallel. The trunk is reserved for my best, presumably working, code, and I try not to check in anything there that might break it. I branch the code when I want to start a line of development that will take some time, will break things for a while, is an experiment I might have to abandon, etc.
To start a new line of development I change the default branch to a new branch off the trunk rev that's to be the base of my code, and force a checkin onto that branch, with:
rcs -b1.2.1 foo.cpp
ci -f1.2.1 -l foo.cpp
Now I can dive in to developing the branch, and my next check-ins will go onto the new branch instead of onto the trunk. Whether you lock a revision or not is only relevant to whether you intend to modify the working file.
You're correct that you can't keep both revisions, trunk-tip and branch-tip, in the same folder; they have the same file name. But you can check out one of them with the -p switch, which forces the output to stdout (instead of to a local file), which you can then redirect into a sub-folder, or to a local file with a unique name.
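For example, with hypothetical revision numbers and target names, you could put both tips side by side like this:
co -p1.3 foo.cpp > foo-trunk.cpp       (trunk tip to a uniquely named local file)
co -p1.2.1 foo.cpp > branch/foo.cpp    (latest revision on branch 1.2.1, redirected into a sub-folder)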

Move file in one AccuRev workspace that has been edited in another workspace

We have a need to refactor a code base. The thing is that this will be done by one person and it would be desirable to avoid having the rest of the development team sitting idle while this job takes place.
We therefore tried the following scenario to see if it is possible to work in parallel.
1. Created file test.txt in directory first in developer A's workspace.
2. Promoted this file.
3. Updated developer B's workspace, thereby getting file test.txt.
4. In A's workspace, moved file test.txt to directory second.
5. Promoted this move.
6. In B's workspace, edited file test.txt while it still resides in directory first (no update is made, thereby emulating that work is done while the refactoring is taking place).
7. Tried to promote and got a message saying that file test.txt had been modified (correct, the file has been moved).
8. Tried to merge but got an error message saying that AccuRev can't merge since the file is missing in directory second (where it has been moved).
9. Tried to update B's workspace, but that is not allowed since there is a modified file that needs to be merged first.
We are now stuck in a catch 22 situation.
We did try to place a fake file in directory second but that is not being recognized since this file does not belong to the workspace.
Has anyone out there tried something like this and gotten it to work?
It is of course possible to copy files but if there is a better way we would be grateful to hear about this. Or if this is a known bug or limitation in the tool.
We will also contact AccuRev support, but I thought that I might be able to get some useful tips from the community.
Currently we are using AccuRev client 5.5.0.
Thanks for any suggestions on how to make the tool support this operation.
Referring to your steps 6 & 7: in AccuRev 5.5, after a file is edited and has (modified) status, you first have to keep it before you can promote.
At step 8 you could try doing the merge from the Browse Versions view of the file. That way you can select any node to merge with, including the one that has been moved.
Step 9. An AccuRev update will not run successfully if one of the files to be updated is (modified). This is by design. You can keep the file so it has (kept)(member) status then run the update.
David Howland
After contact with AccuRev support the answer is that the only option available is to copy the file to some temp directory, revert the changes, update the workspace and copy the file into the new location in the workspace.
AccuRev will at least tell you which files you have to copy since they will be marked as modified.
I could experimentally verify David's remark to step 9 using AccuRev 5.5.
Let's assume that in the workspace of user A the file was moved and the move was promoted, while in the workspace of user B the file was modified and user B is about to promote his/her change.
Before the file is kept, it is not possible for user B either to merge or to update. But after keeping the modified file, the update is possible. The file is first marked as overlap, then the merge succeeds in the new location. Basically, this avoids creating a copy of the file, reverting it and restoring it in the new location after an update, which can be quite cumbersome, as AccuRev does not easily reveal where the file has moved to.
If user B promotes the modification before user A promotes the move, all goes smoothly, i.e. on update the moved file appears as overlap, but easily merges into the moved file in the new location.
Similar results are obtained when the two users have workspaces connected to different streams and the overlap occurs on a common parent stream. An error can occur only if the file is unkept (i.e. only if the move is promoted before the change is kept). Then a simple keep allows you to proceed as usual (update, merge, then promote).
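On the command line, the sequence for user B would look roughly like this (file paths hypothetical, and assuming the 5.5 CLI accepts these forms; the GUI keep/update/merge/promote actions are equivalent):
accurev keep -c "my local edits" first/test.txt
accurev update                           (now succeeds; the file shows up as overlap in its new location)
accurev merge second/test.txt            (resolve the overlap where the file now lives)
accurev promote -c "merged after move" second/test.txt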

Prevent file being overwritten

Imagine there are 3 or more independent locations where a file can be modified. These locations communicate with each other through email or by mail (direct flash-drive hand-off). Though there is plenty of room for failure - simultaneous edits to the file that screw things up - this client won't change his process much. He would rather call everyone to say he is working on the latest update, or tell the other guys that he is waiting for the third guy's latest update. Anyway, at some point after several exchanges, due to one participant's unintentional error, THE LAST VERSION of the file eventually gets mixed up. From this point everyone searches for the last version BY LOOKING AT THE CONTENT of the file.
This client wants to have a central location (he actually has one: a folder on his PC) and let everybody (including himself) copy any new or suspected-new file to this location, while preventing the file's latest version from being overwritten. From this location he has to be able to easily copy, send or open the file and work.
So, here is my concept (2 steps):
step 1: I made an add-in for the main application where this file is created or edited. This add-in prompts the user to give the file a version number with every save command invoked from the editing application. In fact, the file can be re-saved multiple times without being considered modified (file attributes such as creation and save times do not mean much here). That said, the user can cancel my add-in and still save the file, without saving a new file version.
step 2: multiple solutions:
solution A: I'm thinking of having a folder/file watcher and preventing the last version of the file from being overwritten. As you know, FileSystemWatcher fires the change/delete etc. events AFTER THE FACT, so I would have to back-copy the overwritten file after the fact (with some tricks); see the sketch below.
solution B: have a database that stores all versions of the files and build some shell extension to extract/view files from the database. Move all copied/pasted files into the database (my program's folder) and restore the latest file in the working folder after the watcher fires a change/delete event.
solution C: find built-in Windows tools (APIs etc.) to rely on heavily, with some programming.
Any ideas?
Thanks in advance.
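A minimal sketch of solution A's back-copy idea, using Python and the third-party watchdog package purely as a stand-in for .NET's FileSystemWatcher: every time the central copy changes, a timestamped snapshot is tucked away so the latest version can be restored even if someone overwrites it. The folder and file names are hypothetical, and the copy happens after the fact, just as the question notes.

# Sketch only: 'pip install watchdog'; paths are hypothetical.
import shutil
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCHED_DIR = Path("central")           # the client's central folder
TARGET_NAME = "report.doc"              # the file everyone exchanges
HISTORY_DIR = WATCHED_DIR / "history"   # timestamped snapshots land here

class SnapshotOnChange(FileSystemEventHandler):
    def on_modified(self, event):
        # The event arrives after the fact; we react by snapshotting the new content.
        if Path(event.src_path).name != TARGET_NAME:
            return
        HISTORY_DIR.mkdir(exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(event.src_path, HISTORY_DIR / f"{TARGET_NAME}.{stamp}")

observer = Observer()
observer.schedule(SnapshotOnChange(), str(WATCHED_DIR))  # non-recursive by default
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()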