bzr ignore for all executable files under Linux - bazaar

Is there a way to make Bazaar ignore all executable files under Linux? They don't have a particular extension, so I'm not able to accomplish this with a regexp.
Thank you very much.

If all your executables are under a certain directory, you can ignore the directory's contents (e.g. bzr ignore "mybindir/*"). I realize this isn't exactly what you want, but other than bialix's workaround I don't think there is a better answer at the moment. It might be possible in the future to add a keyword like EXECUTABLE: to the .bzrignore file to indicate what you need. Even better would be the ability to chain them, e.g. EXECUTABLE:RE:someprefix.+ .

According to bzr ignore -h there is no pattern to select executable files.
But you can ignore them one by one.
find . -type f -perm /111 -print0 | xargs -0 bzr ignore

You can ignore all files without an extension with the following regex: RE:\.?[^.]+ but it will also ignore all directories and symlinks that don't have an "extension", i.e. anything after a dot.
Sometimes that's undesirable, so if you don't have many executable files you'd better ignore them by name.
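You don't need bzr to check what that regex covers: grep -Ex (extended regex, whole-line match) gives a quick preview of which names it would catch.

```shell
# Preview of RE:\.?[^.]+ with grep -Ex: names with no dot after the first
# character match, names with an extension do not.
printf '%s\n' myprog .hidden script.sh lib.la | grep -Ex '\.?[^.]+'
# → myprog, .hidden
```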

Related

gsutil rsync only files matching a pattern

I need to rsync files from a bucket to a local machine every day, and the bucket contains 20k files. I need to download only the changed files that end with *some_naming_convention.csv.
What's the best way to do that? Using a wildcard in the download source gave me an error.
I don't think you can do that with rsync. As Christopher told you, you can skip files by using the "-x" flag, but not sync just those [1]. I created a public Feature Request on your behalf [2] for you to follow updates there.
As I say in the FR, IMHO this doesn't fit the purpose of rsync: it's meant to keep folders/buckets synchronised, and synchronising only some of their contents doesn't fall within that purpose.
There is a possible "workaround": use gsutil cp to copy files, with -n to skip the ones that already exist. The whole command for your case would be:
gsutil -m cp -n <bucket>/*some_naming_convention.csv <directory>
Another option, maybe a little more far-fetched, is to copy/move those files to a separate folder and then rsync that folder.
I hope this works for you ;)
Original Answer
From here, you can do something like gsutil rsync -r -x '^(?!.*\.json$).*' gs://mybucket mydir to rsync all json files. The key is the ?! prefix to the pattern you actually want.
Edit
The -x flag excludes a pattern. The pattern ^(?!.*\.json$).* uses a negative look-ahead to match paths that do not end in .json. It follows that the gsutil rsync call copies only the files which end in .json.
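The same trick applies to the asker's suffix. Since -x takes a Python-style regex and PCRE grep accepts the same look-ahead syntax, the pattern can be previewed locally before pointing it at a real bucket (this sketch assumes GNU grep built with PCRE support; bucket and directory are the question's placeholders):

```shell
# Everything the pattern matches gets EXCLUDED, so only paths ending in
# the target suffix are synced.
PATTERN='^(?!.*some_naming_convention\.csv$).*'

echo "daily_some_naming_convention.csv" | grep -qP "$PATTERN" || echo "synced"
echo "notes.txt" | grep -qP "$PATTERN" && echo "excluded"

# The real call would then be:
# gsutil -m rsync -r -x "$PATTERN" gs://<bucket> <directory>
```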
Rsync lets you include and exclude files matching patterns.
For each file, rsync applies the first rule that matches, so if you want to sync only selected files then you need to include those, and then exclude everything else.
Add the following to your rsync options:
--include='*some_naming_convention.csv' --exclude='*'
That's enough if all your files are in one directory. If you also want to search subfolders then you need a little bit more:
--include='*/' --include='*some_naming_convention.csv' --exclude='*'
This will replicate the directory tree, but only copy the files you want. If that leaves empty directories you don't want, then add --prune-empty-dirs.
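Put together, a self-contained run might look like this (assumes rsync is installed; the file names are made up to match the question's pattern):

```shell
# Build a small tree, then copy only the *some_naming_convention.csv files
# while preserving directory structure; -m (--prune-empty-dirs) drops
# directories that end up empty.
mkdir -p src/sub
touch src/a_some_naming_convention.csv src/skip.txt src/sub/b_some_naming_convention.csv
rsync -a -m --include='*/' --include='*some_naming_convention.csv' --exclude='*' src/ dst/
find dst -type f   # only the two .csv files are copied
```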

zsh survive glob failure

I have checked out a project, the project contains a .gitignore file.
The contents of this file are like so
vx1% head .gitignore
./deps
/*.exe
/*.la
/.auto
.libs
ChangeLog
Makefile
Makefile.in
aclocal.m4
autom4te.cache
I want to
read the file line by line
for each line read, list the actual files that are being ignored
finally I want to tweak the script to delete those files
The reason for wanting to do this is that I don't trust the project Makefile to fully clean up its generated files.
Notes
As you can see, the .gitignore uses some globs that I need to modify before running the commands, otherwise the glob will resolve to my root directory.
What I already know
To dynamically evaluate an arbitrary string as a glob pattern
DYN="*.c"
print $~DYN
To strip the leading /, if it exists
DYN="/*.c"
print ${~DYN#/}
What I've got
cat .gitignore | while read i; do echo $i ; print ${~i#/} ; done
The problem
At the first glob failure this loop encounters, it terminates with an error:
zsh: no matches found: *.exe
What I want
The 'script' should keep going through each line of the file, trying each line in turn.
I answered this myself; the answer is below.
Found the answer on the zsh mailing list, in typical Zsh fashion - it's very simple when you know how, and impossible otherwise.
print *.nosuchextension(N)
The (N) glob qualifier prevents an error being raised on match failure.

Problem with multiple listings of the same file in RPM spec

I have some problems with an rpm spec file that lists the same file multiple times. For this spec we do some normal compilation and then we have a script that copies everything to the buildroot. Within this buildroot we have a lot of generic scripts that need to be installed on the final system, so we just list that directory.
However, one of the scripts might be changed, and configuration options might be changed within the script, so we list this script a second time, with different attributes, as %config. This means the script is defined multiple times with conflicting attributes, so rpmbuild complains and does not include the script in the installation package at all.
Is there a good way to handle this problem and tell rpmbuild to only use the second definition, or do we have to separate the script into two parts, one containing the configuration and one containing the actual logic?
Instead of specifying the directory, you can create a file list and then prune duplicate files from that.
So where you have something like
%files
%dir foo
%config foo/scriptname
You modify those parts to
find $RPM_BUILD_ROOT -type f | sed -e "s|^$RPM_BUILD_ROOT||" > filelist
sed -i "\|^/foo/scriptname$|d" filelist
%files -f filelist
%config foo/scriptname
You can also use %{buildroot} in place of $RPM_BUILD_ROOT.
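The list-then-prune step can be tried outside of rpmbuild; this sketch uses a throwaway directory as a stand-in buildroot (file names taken from the fragment above, plus a made-up sibling):

```shell
# Fake buildroot with one %config script and one ordinary file.
RPM_BUILD_ROOT=$(mktemp -d)
mkdir -p "$RPM_BUILD_ROOT/foo"
touch "$RPM_BUILD_ROOT/foo/scriptname" "$RPM_BUILD_ROOT/foo/helper"

# Generate the file list relative to the buildroot, then delete the entry
# that will be listed separately as %config (note the leading slash that
# stripping the buildroot prefix leaves behind).
find "$RPM_BUILD_ROOT" -type f | sed -e "s|^$RPM_BUILD_ROOT||" > filelist
sed -i "\|^/foo/scriptname$|d" filelist
cat filelist   # only /foo/helper remains
```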

Creating RPM from current directory

I'm trying to create an rpm from local source. Is it possible to do the compilation in a way similar to what pdebuild does: just copy the local directory as the source and operate on that copy? Every time I do rpmbuild -ba ... it tries to unpack some archive in RPMBUILD/SOURCE, but I don't want to go that way.
Essentially I'd like to be able to just checkout the repository with the code, do rpmbuild -ba application.spec in that checkout directory and have it do the right thing... Is that possible?
I don't know about a better way than the following:
rpmbuild -bs my_file.spec --define "_sourcedir $PWD"
For my own projects, I always use a Makefile to define a make rpm target once and then just use that.
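Such a target might look like this (a sketch; the spec filename is a placeholder):

```make
# CURDIR is make's built-in variable for the directory make was started in.
rpm:
	rpmbuild -bs my_file.spec --define "_sourcedir $(CURDIR)"
```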
The %setup macro in the spec file is what unpacks the source. Remove that, and your problem goes away, and you can do anything you'd like to fetch the source.

Programmatically reading contents of /etc

I want to programmatically read the contents of the /etc directory. If possible please reply with the code to achieve this.
The /etc directory is an ordinary directory. Work with it as you would with any other one.
This is a simple application of the opendir() and readdir() functions in C/C++, or their equivalents in Python, Perl or PHP. You will only be able to see files you have access to. It would help if you could explain what you want to accomplish.
The files in /etc are just ordinary files - you read them as you would any other files.
Understanding them on the other hand is more difficult - each file can have its own syntax, let alone attaching any meaning to the options within.
There is no special API for accessing the files in /etc.
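As a minimal illustration, the shell equivalent of an opendir()/readdir() walk over /etc is just a glob loop:

```shell
# Print the name of every entry directly under /etc (dotfiles are skipped
# by the default glob). Entries you cannot read can still be listed; only
# their contents are protected.
for entry in /etc/*; do
  printf '%s\n' "${entry##*/}"
done
```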