Excluding a directory from AccuRev using the pop command

I have 10 directories in an AccuRev depot and don't want to populate one of them when using the "accurev pop" command. Is there any way to do this? .acignore does not suit my requirements, because another Jenkins build needs that folder. I just want to save time by avoiding an unnecessary populate of directories.
Any idea?
Thanks,
Sanjiv

I would create a stream off this stream and exclude the directories you don't want. Then you can pop the new stream and get only the directories you want.
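A rough command-line sketch of that approach, assuming a backing stream called MyStream and an unwanted directory called unwantedDir (both placeholders); the exclude-rule syntax can vary between AccuRev versions, and the rule can just as easily be set in the GUI's Include/Exclude mode:
# Create a child stream off the existing backing stream (names are placeholders)
accurev mkstream -s MyStream_trimmed -b MyStream
# Add an exclude rule for the directory you don't want; check "accurev help excl"
# for the exact syntax in your version, or set the rule in the GUI instead
accurev excl -s MyStream_trimmed /./unwantedDir
# Populate from the new stream into a plain directory
accurev pop -v MyStream_trimmed -L /path/to/target -O -R .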

When you run the AccuRev populate command you can limit which directories are populated by naming them:
accurev pop -O -R thisDirectory
will cause the contents of thisDirectory to match the backing stream as of the last AccuRev update in that workspace.
The -O is for overwrite and the -R is for recurse. If you have active work in progress, the -O will cause that work to be overwritten/destroyed.
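Since populate accepts a list of element names, another option is to name only the directories you do want in a single call; a minimal sketch with placeholder names:
# Populate only the directories you want, leaving out the one you don't (names are placeholders)
accurev pop -O -R dirA dirB dirC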
The .acignore is only for (external) files and not those that are being managed by AccuRev.
David Howland

Related

Create repository in non-empty remote folder

It's been 14 years since I last worked with svn and apparently I have forgotten everything...
I have an existing web-project, consisting of a bunch of php, html, js and other files in a directory tree on a V-Server. Now I want to take these folders under version control and create a copy on my local machine using svn. So I installed subversion according to these instructions: https://www.linuxcloudvps.com/blog/how-to-install-svn-server-on-debian-9/
Using the already-present apache2.
But now I kinda hit a roadblock. If I try svnadmin create on the existing folder, it tells me that it is not empty and does nothing, really. All the questions and answers I find here and elsewhere are either
a) focusing on an already existing folder on the local machine
b) assuming more prior knowledge than I have right now, i.e. I don't understand them.
Is there a step-by-step guide for dummies anywhere on how to do this? Or can anyone tell me in layman's terms how to do it?
I can't believe this case never comes up or that it is really very complicated.
At the risk of failing to understand your exact needs, I think you can proceed as follows. I'll use these terms:
Code: it's the unversioned directory on the V-Server where you currently have the bunch of php, html, js and other files.
Repository: it's the first "special" directory you need to create in order to store your Subversion history and potentially share it with others. There must be one and there can only be one.
Working copy: it's the second "special" directory you need to create in order to work with your php, html, js... files once they are versioned and it'll be linked to a given path and revision of your repository. At a given time there can be zero, one or many of them.
Your code can become a working copy or not, that's up to you, but it can never become a repository:
$ svnadmin create /path/to/code
svnadmin: E200011: Repository creation failed
svnadmin: E200011: Could not create top-level directory
svnadmin: E200011: '/path/to/code' exists and is non-empty
Your repository requires an empty folder but it can be located anywhere you like, as long as you have access to it from the machine you're going to use in your daily work. Access means it's located in your PC (thus you use the file: protocol) or it's reachable through a server you've installed and configured (svn:, http: or https:).
$ svnadmin create /path/to/repo
$ 😎
Your working copies can be created wherever you need to work with your IDE. It can be an empty directory (the usual scenario) or a non-empty one. The checkout command retrieves your files from the repo and puts them in the working copy so, at a later stage, you're able to run a commit command to submit your new and changed files to the repository. As you can figure out it isn't a good idea to create a working copy in random directories because incoming files will mix with existing files. There's however a special situation when it can make sense: when the repository location is new and is still empty. In that case you can choose between two approaches:
If you want code to become a working copy, you can check out right into it and then make an initial commit to upload all the files:
$ svn checkout file:///path/to/repo /path/to/code
Checked out revision 0.
$ svn add /path/to/code --force
A code/index.php
$ svn commit /path/to/code -m "Import existing codebase"
Adding /path/to/code/index.php
Transmitting file data .done
Committing transaction...
Committed revision 1.
If you don't care about code once it's stored in the repository, or you want your working copy elsewhere, you can import your files from code and create a working copy in a fresh directory:
$ svn import /path/to/code file:///path/to/repo -m "Import existing codebase"
Adding code/index.php
Committing transaction...
Committed revision 1.
$ svn checkout file:///path/to/repo fresh
A fresh/index.php
Checked out revision 1.

Sync clients' files with server - Electron/node.js

My goal is to make an Electron application, which synchronizes clients' folder with server. To explain it more clearly:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files.
If a file has been removed from the host server, but is present in the client's folder, the application deletes the file.
Simply put, the application has to make sure that the client has an EXACT copy of the host server's folder.
So far, I did this via wget -m, however wget frequently did not recognize that some files had changed and left clients with outdated files.
Recently I've heard of zsync-windows and the webtorrent npm package, but I am not sure which approach is right and how to actually accomplish my goal. Thanks for any help.
rsync is a good approach but you will need to access it via node.js
An npm package like this may help you:
https://github.com/mattijs/node-rsync
But things will get slightly more difficult on Windows systems:
How to get rsync command on windows?
If you have ssh access to the server an approach could be using rsync through a Node.js package.
There's a good article here on how to implement this.
You can use rsync, which is widely used for backups and mirroring and as an improved copy command for everyday use. It offers a large number of options that control every aspect of its behaviour and permit very flexible specification of the set of files to be copied.
It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.
For your use case:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client. This can be achieved by a plain rsync.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files. Use --remove-source-files or --delete, depending on whether you want to delete the outdated files from the source or the destination.
If a file has been removed from the host server but is present in the client's folder, the application deletes the file. Use the --delete option of rsync.
rsync -a --delete source destination
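If the server is reachable over SSH, the same idea works remotely; a minimal sketch, where the user, host and paths are placeholders:
# Mirror the server folder to the client, deleting anything that no longer exists on the server
rsync -az --delete -e ssh user@example.com:/srv/app-data/ /path/to/local/mirror/
A Node.js wrapper such as node-rsync essentially builds and runs this kind of command for you.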
Given it's a plain file list (and therefore contains simple filenames without spaces, etc.), you can pick out the filenames with the code below:
# Get last item from each line of FILELIST
awk '{print $NF}' FILELIST | sort >weblist
# Generate a list of your files
find -type f -print | sort >mylist
# Compare results
comm -23 mylist weblist >diffs
# Remove old files
xargs -r echo rm -fv <diffs
You'll need to remove the echo to let rm actually delete the files; with it in place, the command is only printed as a dry run.
Next time you want to update your mirror, you can modify the comm line (by swapping the two file arguments) to find the set of files you don't have, and feed those to wget.
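A hedged sketch of that second pass, assuming the same mirror URL as below (a placeholder) and that the two list formats line up:
# Swap the comm arguments: files listed on the server but missing locally
comm -23 weblist mylist >missing
# Fetch each missing file from the mirror (base URL is a placeholder)
xargs -r -I{} wget "https://mirror.abcd.org/xyz/xyz-folder/{}" <missing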
or
rsync -av --delete rsync://mirror.abcd.org/xyz/xyz-folder/ my-client-xyz-directory/

gsutil rsync only files matching a pattern

I need to rsync files from a bucket to a local machine every day, and the bucket contains 20k files. I need to download only the changed files that end with *some_naming_convention.csv.
What's the best way to do that? using a wildcard in the download source gave me an error.
I don't think you can do that with rsync. As Christopher told you, you can skip files by using the "-x" flag, but not just sync those [1]. I created a public Feature Request on your behalf [2] for you to follow updates there.
As I say in the FR, IMHO this doesn't follow the purpose of rsync, which is to keep folders/buckets synchronised, and synchronising only some of them doesn't fall within that purpose.
There is a possible "workaround" by using gsutil cp to copy files and -n to skip the ones that already exist. The whole command for your case should be:
gsutil -m cp -n <bucket>/*some_naming_convention.csv <directory>
Another option, maybe a little more far-fetched, is to copy/move those files to a folder and then use that folder for the rsync.
I hope this works for you ;)
Original Answer
From here, you can do something like gsutil rsync -r -x '^(?!.*\.json$).*' gs://mybucket mydir to rsync all json files. The key is the ?! prefix to the pattern you actually want.
Edit
The -x flag excludes a pattern. The pattern ^(?!.*\.json$).* uses negative look-ahead to specify patterns not ending in .json. It follows that the result of the gsutil rsync call will get all files which end in .json.
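Applied to the naming convention in the question, a hedged sketch (the bucket and local directory are placeholders) would be:
# Sync only objects ending in some_naming_convention.csv; the negative look-ahead
# excludes everything else
gsutil -m rsync -r -x '^(?!.*some_naming_convention\.csv$).*' gs://mybucket /path/to/local/dir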
Rsync lets you include and exclude files matching patterns.
For each file rsync applies the first pattern that matches, so if you want to sync only selected files you need to include those, and then exclude everything else.
Add the following to your rsync options:
--include='*some_naming_convention.csv' --exclude='*'
That's enough if all your files are in one directory. If you also want to search sub folders then you need a little bit more:
--include='*/' --include='*some_naming_convention.csv' --exclude='*'
This will replicate the whole directory tree, but only copy the files you want. If that leaves empty directories you don't want, add --prune-empty-dirs.
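Put together, a hedged example with placeholder source and destination paths:
# Copy only the matching CSV files, recursing into sub-folders,
# and drop any directories that end up empty
rsync -av --prune-empty-dirs \
      --include='*/' --include='*some_naming_convention.csv' --exclude='*' \
      /path/to/source/ /path/to/destination/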

Backup file folder in correct way

My situation is that I only have permission to execute commands from a certain folder.
Let's say I would like to back up an entire folder and exclude some folders and files using exclude.txt.
Here is the path I would like to back up:
/pdf/data/pdfnew/2014
And I only have permission to execute from this folder (main):
/pdf/data/pdfnew/2014/public/main
I put exclude.txt in the same folder from which I can execute the command (main).
I execute this command in the main folder:
tar -cjvf -X exclude.txt 2014.tar.bz2 /pdf/data/pdfnew/2014
The result is that it still included folders that I don't want to back up.
Is there a correct way of doing this?
Do you have a user/home directory on that server? You should, in which case just place exclude.txt in your home directory on that server and run the command like this from that directory:
tar -cjv -X ~/exclude.txt -f ~/2014.tar.bz2 /pdf/data/pdfnew/2014
The ~/ is shorthand for your user/home directory, so in this case it explicitly says: "read exclude.txt from the home directory and write 2014.tar.bz2 to the home directory."
But you also ask this:
Is there a correct way of doing this?
There is never one canonical best way of doing something like this. It is all based on your final/end goal. Nothing more. Nothing less. That said, if I were you I would do it like this instead using the -C option:
tar -cjv -X ~/exclude.txt -f ~/2014.tar.bz2 -C /pdf/data/pdfnew/ 2014
The uppercase -C option tells tar to internally change its working directory to /pdf/data/pdfnew/, so you can then create an archive of 2014 without retaining the whole parent directory tree in the backup. I find this easier to work with, because many times I want to back up the contents of a directory but have no need to retain the parent structure. That way the archive is more like a traditional ZIP archive, which I find easier to understand and work with.
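Once the archive is written, it's easy to check that the exclusions actually took effect; a small sketch, assuming the archive from the command above:
# List the archive contents; the patterns in exclude.txt must match
# the names exactly as they appear in this listing
tar -tjf ~/2014.tar.bz2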

Restoring a workspace at accurev

I've foolishly deleted the content of a workspace I've worked on.
I wanted to reset it and thought I'd be able to re-download it from AccuRev; apparently it is more complicated than that...
So I'm pretty much stuck, I have an empty directory as a workspace, any way to fix that?
I can see the stream I want to re-download in the GUI.
I've already opened a workspace for it in the past so I can see I'm connected to it.
Any way to reset this workspace?
Thanks.
Via the command line, from the top of your workspace you can run "accurev pop -O -R ." (don't forget the dot). This will repopulate your workspace with all the files from the backing stream. The files brought into your workspace will be from the time of your last AccuRev update. The -O is for overwrite and the -R is for recursive.
Via the GUI, select the top most directory, right click and select Populate. In the pop-up dialog box select Overwrite and Recursive.
Any files that you had modified, but not kept, will not be restored.
Any files that are active in your workspace WILL be overwritten.
You might want to run an AccuRev update after the re-populate command.
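Put together as a command-line sequence (the workspace path is a placeholder):
# Go to the top of the (now empty) workspace
cd /path/to/my_workspace
# Repopulate everything from the backing stream, as of your last update
accurev pop -O -R .
# Then bring the workspace up to date with the current backing stream
accurev update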