Perforce: How to create a new stream at a specific changelist - branch

I have a streamed depot with a mainline and development branches. We're currently at CL #10.
I want to create a new branch, something like //project/side_branch, but populated with the files from CL #5: effectively a snapshot of the mainline branch at an older changelist.
I can't figure out how to do this: when I create a new stream, it is automatically populated with the latest version from mainline. Any help would be appreciated :)

Create the new stream:
p4 stream -t development -P //project/main //project/side_branch
and populate it from the desired changelist (note that a changelist is referenced with @; a # prefix refers to a file revision):
p4 populate -S //project/side_branch -r @5
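If the stream-aware form gives you trouble, the plain file-path form of p4 populate should produce the same snapshot. A minimal sketch, assuming the depot paths above (adjust them to your layout):
p4 populate -d "Branch side_branch from main at CL 5" //project/main/...@5 //project/side_branch/...
You can then check the result with p4 files //project/side_branch/... before syncing a workspace to the new stream.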

Related

mass update in table storage tables

Is there a way to mass update the TableStorage entities?
Say, I want to rename all the Clients having "York" in the City field to "New-York".
Are there any tools to do it directly (without needing to write code)?
You could try using Microsoft Azure Storage Explorer to achieve this.
First, open the table containing the City field in Storage Explorer.
Then click the Export button to export all your entities to a .csv file.
Open the exported file, press Ctrl+F, and choose the Replace option.
Fill in the find and replace fields with the values you want, then choose Replace All.
Finally, go back to Storage Explorer and click the Import button to choose the .csv file you just edited.
I wanted to do the export/import trick, but it's a no-go when you have millions of records. I exported all the records and ended up with a ~5 GB file that Azure Storage Explorer couldn't handle (on my PC: i7, 32 GB RAM).
If someone else is struggling with a similar issue, you can do the following:
Export records to csv file
Remove the lines that you don't want to modify (if needed). You can use grep "i_want_this_phrase" myfile > mynewfile, or the -v option to keep everything that doesn't match the given phrase. If the file is too large, split it, e.g. with cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'
Remove everything except the RowKey column.
Prepare an az cli command similar to az storage entity merge --connection-string 'XXX' --account-name your_storage -t your_table -e PartitionKey=your_pk MyColumn=false MyColumn@odata.type=Edm.Boolean RowKey=. Remember the odata.type part: at first I ran the update without it and my booleans turned into strings. Luckily it was easy to fix.
Open the file in VS Code, select all with Ctrl+A, then press Shift+Alt+I to put a cursor at the end of every line and paste the previously prepared az cli command. This way you get a list of az cli updates, one per RowKey.
Add #!/bin/bash at the beginning of the file, save it as a .sh file, make it executable with chmod +x yourfile.sh, and run it.
Of course, if you prefer, you can write a small bash script that reads the file line by line and executes the az command for each RowKey (see the sketch below). I just did it my way because it was simpler for me; I'm not very experienced in bash, so it would have taken me a while to develop and test the script.
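For reference, a minimal sketch of that line-by-line approach, assuming a file rowkeys.txt with one RowKey per line and the same placeholder connection string, table, partition key and column as above:
#!/bin/bash
# issue one merge per RowKey listed in rowkeys.txt
while IFS= read -r rowkey; do
    az storage entity merge \
        --connection-string 'XXX' \
        --account-name your_storage \
        -t your_table \
        -e PartitionKey=your_pk RowKey="$rowkey" \
           MyColumn=false MyColumn@odata.type=Edm.Boolean
done < rowkeys.txt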

Notifications on next ssh login

This is a hypothetical question because I'd like to know whether it's even possible before I delve into scripting it: is it theoretically possible to have the output of a script/process (in particular one run via cron) spat out into the terminal on the next ssh login?
Some pseudocode that I hope illustrates my point:
#!/bin/bash
# Download the latest copy of a database (updated automatically and periodically)
wget --mirror "http://somedatabase/database_latest"
# Run a command that generates an output for a set of files queried
# against the latest database
for file in /some/dir/*; do
    command -output "$file.txt" -database database_latest
    # Now for the bit I'm more interested in.
    # If the database has been updated, the output for this file will differ
    # from the output of the command the last time it ran ("${file}_old.txt").
    # So, using diff:
    if ! diff -q "$file.txt" "${file}_old.txt" > /dev/null; then
        mv "${file}_old.txt" ./archive/  # keep the old file but stash it in a separate dir
    fi
done
# Make some report file from all of the outputs
cat *.txt > report.txt
So my question is: is it possible to have the script 'inform me' the next time I log in to our server if any differences were found for any of the files? There are a lot of files, and report.txt would become large quickly, so I only want to check it if differences are found.
How about this:
create three directories: new, cur, old
your weekly cronjob writes data to new. The script should delete everything from new before writing new data; otherwise you won't be able to notice that a file has gone missing
cur contains the last version of the data that you looked at or considered
old contains the previous version of the data
Each time you log on, run:
#!/bin/bash
# clear the archive
rm old/*
# copy the previously seen files to the archive
cp cur/* old
# copy the new files over the current set
cp new/* cur
# show which files differ between the new data (cur) and the last-seen data (old)
diff -q cur old | tee report.txt
The diff command will print which files are new, which are missing and which have changed; its output will also end up in report.txt. The cur directory will contain all the files from the last run, and you can look at them more closely in an editor or compare them to the previous versions in old. Note that if a file is missing from new, it won't be deleted from cur. The next time you log on, you will lose the contents of the old directory. If you want to keep a history of all previous results, that should be handled by the weekly cronjob, not the login script (you want to store a separate version each time you generate the data, not each time you log in).
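To actually get notified at login, one option (a sketch, assuming bash login shells and that the script above is saved as ~/bin/check_updates.sh with report.txt written to your home directory; both paths are placeholders) is to call it from ~/.bash_profile and only print the report when something changed:
# in ~/.bash_profile
~/bin/check_updates.sh
if [ -s ~/report.txt ]; then
    echo "Changes detected since your last login:"
    cat ~/report.txt
fi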

How to get a list of files modified since date/revision in Accurev

I have created a workspace backed by a collaboration stream. The stream is updated regularly by team members. My goal is to take the modified files in a given path and put them into another repository (and to do this regularly).
The question is how to create a list of the files which were modified since a given revision or date (I don't know which approach is best). A command-line solution is preferable.
Once I have the file list, I'll write a script to automate taking the files from one place and putting them in the other.
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl
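If the -fl output gives you one workspace-relative path per line (worth verifying against your server's output; this is an assumption), a minimal sketch of the copy step, run from the root of the workspace with a placeholder destination:
#!/bin/bash
# copy every element changed since the given time, preserving directory structure
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl |
while IFS= read -r f; do
    [ -f "$f" ] && cp --parents "$f" /path/to/other/repo/
done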
You can also run accurev stat -m -fx and then parse the resulting XML. The <element> elements have a modTime attribute, which is the UNIX timestamp of when the file was modified.
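A rough sketch of that parsing step; it assumes (unverified here) that each <element> node also carries a location attribute with the file path alongside modTime:
#!/bin/bash
# print elements whose modTime is newer than a cutoff date
cutoff=$(date -d "2013-05-16 01:00:00" +%s)
accurev stat -m -fx |
grep -o '<element [^>]*>' |
while IFS= read -r line; do
    mod=$(printf '%s\n' "$line" | sed -n 's/.*modTime="\([0-9]*\)".*/\1/p')
    loc=$(printf '%s\n' "$line" | sed -n 's/.*location="\([^"]*\)".*/\1/p')
    if [ -n "$mod" ] && [ "$mod" -ge "$cutoff" ]; then
        printf '%s\n' "$loc"
    fi
done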

Monotonically increasing bazaar trunk revision numbers

I'm still figuring out how bazaar's revision numbering works. The workflow our team uses is basically:
bzr branch lp:project/trunk
# code,code,code
bzr commit ...
# code,code,code
bzr commit ...
bzr merge
# resolve, resolve, resolve
bzr push lp:project/trunk
I'd prefer it if the trunk revision numbering was stable and increased monotonically with each push. However, as I understand it, whoever does bzr merge; bzr push lp:project/trunk ends up renumbering the revision history of the trunk to whatever their local branch revision numbering was. This makes things very confusing for the team, because "trunk, revision 705" may change over time.
We could use global ids, but it's a little awkward to work with a long string like foo@example.com-20110224160420-nnob0vg2vdk0yjow.
Is there any way to arrange our workflow so that the trunk revision numbering scheme is stable and increases monotonically?
On the trunk on your central server, edit
<yourbranch>/.bzr/branch/branch.conf or ~/.bazaar/locations.conf or ~/.bazaar/bazaar.conf
and add append_revisions_only = True
After that, the branch's existing revision numbering will no longer change.
http://doc.bazaar.canonical.com/beta/en/user-reference/configuration-help.html#append-revisions-only
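If the central branch lives on a filesystem you can reach directly, a minimal sketch of that edit (the path is a placeholder):
# append the option to the central branch's configuration
echo "append_revisions_only = True" >> /path/to/trunk/.bzr/branch/branch.conf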
Edit: for Launchpad you can try the following, as John Arbash Meinel said:
At the moment, the only way to get a branch with that
option is during "bzr init".
bzr init --append-revisions-only
So you could:
1) have launchpad delete the existing branch
2) bzr init --append-revisions-only lp:...
3) bzr push lp:...
Not exactly optimal.
The other way to do it is to use sftp and do:
sftp bazaar.launchpad.net
cd ~user/project/branch/.bzr/branch
get branch.conf
Then outside of sftp, edit the file to add
append_revisions_only = True
put branch.conf
https://lists.ubuntu.com/archives/bazaar/2008q3/046797.html

Inverting a diff or patch || CVS diff

In CVS, my working copy (WC) is on a certain branch (which we'll call "foo"). There have been other changes checked into foo by another dev. I want to do a diff between my WC and the upstream state of foo. Normally, when working in the trunk (HEAD), I just do a cvs diff, and that's fine. But for some reason when doing a plain cvs diff in the branch, the diff is empty. When I try to use "cvs diff -r foo", the diff shows up, but it is inverted -- upstream additions are shown with minuses, and upstream removals are shown with pluses.
How can I either: (1) get CVS to diff "the other way" (plus for upstream additions), or (2) invert a patch (in general)?
Maybe what you want can be done using interdiff from the patchutils package.
I often use it this way to see what has changed on the TRUNK for a given file:
cvs diff -up -r1 givenfile | interdiff /dev/stdin /dev/null
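For part (2) of the question, the same idea inverts an arbitrary patch: diffing it against an empty patch with interdiff yields its reverse. A sketch, with placeholder file names:
# save the upstream diff, then produce its inverse
cvs diff -u -r foo > upstream.diff
interdiff upstream.diff /dev/null > upstream-reversed.diff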
If the principal purpose of this is to check "what's up" in the central repo, I suggest you get yourselves a functional CVS viewer/browser/web thingy where you can browse and see the latest changes before updating. But assuming all you have is command-line CVS, I will attempt to give you a solution anyway :)
So, what you have here is a branch foo that went from A -> B, where B is the state of the branch (on the server) after the other developer's checkin, and A is the state you last updated your working copy to.
When just doing a plain cvs diff in this situation, you'll see your local changes compared to A, since A is what you have checked out. The local CVS state records that each file comes from the A revision on the foo branch, and when diffing, your CVS client downloads that revision from the server. In your case I'm guessing you have no local changes, since your cvs diff is empty.
Then, when you do cvs diff -r foo, you're diffing your local A (or A plus changes) against the server's foo (which is currently at B), and the changes required to get from the server's B to your A plus changes are exactly the opposite of the other developer's check-in, plus your own local changes.
Now, if you really want to know how B (the tip of foo) compares to A (the pristine version of whatever you currently have checked out), what I think you have to do is set a tag on your working copy and then diff that tag against the state of the branch. Something like this:
cvs tag pistos_temp1
cvs diff -r pistos_temp1 -r foo
# And clean up by deleting the tag afterwards:
cvs tag -d pistos_temp1
You can try exporting a file from the branch to a temporary location and then diffing your working copy against that temp file. It seems to be the easiest way.
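A sketch of that approach, with placeholder module and file names:
# export the tip of the foo branch to a temporary directory
cvs export -r foo -d /tmp/foo_tip yourmodule
# then compare your working copy of a file against the exported version
diff -u yourfile.c /tmp/foo_tip/yourfile.c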