httxt2dbm creates Perl files instead of a .dbm file; the Apache server does not "want" to work with those files

I tried to convert a simple mymap.txt file to a mymap.dbm file, to use in a RewriteMap for a RewriteRule. The mymap.txt file was tested and works perfectly. I converted it with Apache's httxt2dbm utility:
httxt2dbm -i mymap.txt -o mymap.dbm
Instead of creating a single mymap.dbm file, it creates two files:
mymap.dbm.dir (0 KB)
mymap.dbm.pag (1 KB / 1024 bytes)
RewriteMap does not "want" to work with either of those files. I tried renaming mymap.dbm.pag to mymap.dbm and using that; that did not work either.
Line from httpd.conf:
RewriteMap somemap "txt:C:\xampp\htdocs\htaccessTest1/mymap.dbm"
mymap.txt looks like this:
k1 http://localhost/htaccessTest1/keyw1.html
k2 http://localhost/htaccessTest1/keyw2.html
k3 http://localhost/htaccessTest1/keyw3.html
k4 http://localhost/htaccessTest1/keyw4.html
With the .txt map I had zero problems.
How can I force it to work?
Update1:
I tried to force it to output DBM format:
httxt2dbm -f DBM -i mymap.txt -o mymap.dbm
Error appeared:
Error: The requested DBM Format 'DBM' is not available.
How is this possible, if that is exactly what it is meant to produce?
Update2:
A .pag file with an accompanying .dir file is a Perl (SDBM) database file.
But why does httxt2dbm create those files instead of a single map file?

RewriteMap somemap "txt:C:\xampp\htdocs\htaccessTest1/mymap.dbm"
Notice the txt.
From the documentation:
txt
A plain text file containing space-separated key-value pairs, one per line.
...
dbm
Looks up an entry in a dbm file containing name, value pairs. Hash is constructed from a plain text file format using the httxt2dbm utility.
So if you're using a dbm map, you need to tell Apache that:
RewriteMap somemap "dbm:C:\xampp\htdocs\htaccessTest1/mymap.dbm"
Many dbm implementations use two files for storing data: a .dir file storing the hash table used for looking up keys, and a .pag file with the values. That part is normal. More documentation:
Note that with some dbm types, more than one file is generated, with a common base name. For example, you may have two files named mapfile.map.dir and mapfile.map.pag. This is normal, and you need only use the base name mapfile.map in your RewriteMap directive.
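Putting it together, a minimal httpd.conf sketch (the map path is the one from the question; the RewriteRule pattern and the fallback URL are assumptions for illustration):
RewriteEngine On
# Point the map at the base name only; Apache finds the .dir/.pag pair itself
RewriteMap somemap "dbm:C:/xampp/htdocs/htaccessTest1/mymap.dbm"
# Hypothetical rule: look the requested keyword up in the map,
# falling back to the URL after | when the key is missing
RewriteRule ^/?([^/]+)$ "${somemap:$1|http://localhost/htaccessTest1/keyw1.html}" [R,L]
Forward slashes in the path work fine on Windows and avoid backslash-escaping issues.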

Related

I want selected text files to act as binary files when merging

For a particular reason, I want to be able to merge a branch containing a text file without risking an automatic content merge of that file. Is it possible to treat a text file as a binary file in this case? It is important that no one does this merging by mistake.
Using .gitattributes, you should be able to mark a file as binary:
path/to/my/file binary
But that would prevent you from even seeing a diff, since binary is a predefined macro that implies -diff -merge -text.
So use instead:
path/to/my/file -merge -text
That will only take effect for merges performed after that .gitattributes file has been added.
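A quick way to sanity-check the attribute (the path is the hypothetical one from the answer; git check-attr reports what Git will actually apply):
echo 'path/to/my/file -merge -text' >> .gitattributes
git add .gitattributes
git commit -m 'disable content merges for this file'
# Verify: both attributes should be reported as unset
git check-attr merge text -- path/to/my/file
With merge unset, Git treats conflicting changes to the file as a conflict to resolve by hand instead of attempting a line-level merge.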

Behave framework .ini file usage

In my .ini file I have
[behave]
format=rerun
outfiles=rerun_failing.features
So I want to use "rerun_failing.features" file for storing scenarios that fail.
However, when I run the --steps-catalog command, it also writes that catalog to the same file. Why is that?
How can I set up two separate files for --rerun and --steps-catalog?
Thanks!
Use behave --dry-run -f steps.catalog ... instead. The output of the steps.catalog formatter is written to stdout, not the "rerun-outputfile".
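So a working split might look like this (the catalog file name is arbitrary; the rerun formatter stays configured in the .ini file):
# normal runs: failures are recorded in rerun_failing.features via the .ini config
behave
# on demand: write the steps catalog to its own file by redirecting stdout
behave --dry-run -f steps.catalog > steps_catalog.txt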

Recursive rsync over ssh, include only one file extension

I'm trying to rsync files over ssh from a server to my machine. The files are in various subdirectories, but I only want to keep the ones that match a certain pattern (e.g. blah.txt). I have done extensive googling and searching on Stack Overflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all the files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.
I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and I try `cat myfile.txt`, it no longer works! This makes no sense to me.
OS X ships a BSD-style rsync; see the FreeBSD man page:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
This is a useful shorthand for excluding a broad range of files
that you often don't want to transfer between systems. It uses a
similar algorithm to CVS to determine if a file should be
ignored.
The exclude list is initialized to exclude the following items
(these initial items are marked as perishable -- see the FILTER
RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
then, files listed in a $HOME/.cvsignore are added to the list
and any files listed in the CVSIGNORE environment variable (all
cvsignore names are delimited by whitespace).
Finally, any file is ignored if it is in the same directory as a
.cvsignore file and matches one of the patterns listed therein.
Unlike rsync's filter/exclude files, these patterns are split on
whitespace. See the cvs(1) manual for more information.
If you're combining -C with your own --filter rules, you should
note that these CVS excludes are appended at the end of your own
rules, regardless of where the -C was placed on the command-line.
This makes them a lower priority than any rules you specified
explicitly. If you want to control where these CVS excludes get
inserted into your filter rules, you should omit the -C as a
command-line option and use a combination of --filter=:C and
--filter=-C (either on your command-line or by putting the ":C"
and "-C" rules into a filter file with your other rules). The
first option turns on the per-directory scanning for the
.cvsignore file. The second option does a one-time import of the
CVS excludes mentioned above.
-f, --filter=RULE
This option allows you to add rules to selectively exclude
certain files from the list of files to be transferred. This is
most useful in combination with a recursive transfer.
You may use as many --filter options on the command line as you
like to build up the list of files to exclude. If the filter
contains whitespace, be sure to quote it so that the shell gives
the rule to rsync as a single argument. The text below also
mentions that you can use an underscore to replace the space
that separates a rule from its arg.
See the FILTER RULES section for detailed information on this
option.
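For comparison, the working command above can also be expressed with --filter rules; this sketch is equivalent to the --include/--exclude version (same placeholder paths and pattern as above):
rsync -vre 'ssh' --prune-empty-dirs \
    --filter='+ */' \
    --filter='+ *blah*.txt' \
    --filter='- *' \
    user@server.com:/path/to/server/files /path/to/local/files
The '+ */' rule lets rsync descend into every directory, '+ *blah*.txt' keeps the matching files, and the final '- *' excludes everything else; order matters because the first matching rule wins.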

How to Input Redirect Two Files to Standard Input?

Is it possible to redirect two or more files to standard input in one command? For example
$ myProgram < file1 < file2
I tried that command; however, it seemed like the OS was only taking the first file and ignoring the other...
If not, how can I achieve that?
NOTE: concatenating the two files will not help in my case.
When you do this from bash, you aren't sending multiple files to standard input: with several < redirections, only one of them (the last) actually ends up attached to stdin. The feature you want is called Process Substitution.
The output of each substituted command is made available on a file descriptor under /dev/fd/<n>.
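A minimal bash sketch (myProgram is the hypothetical program from the question; it must accept file-name arguments for this to work, since each substitution appears as a path, not as stdin):
# Each <(...) expands to a /dev/fd/<n> path that the program can open
myProgram <(cat file1) <(cat file2)
# A classic example of the same mechanism:
diff <(sort file1) <(sort file2)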

gzip several files and pipe them into one input

I have a program that takes one argument for the source file and then parses it. I have several gzipped files that I would like to parse, but since the program only takes one input, I'm wondering if there is a way to create one huge file using gzip and then pipe it into that one input.
Use zcat - you can provide it with multiple input files, and it will de-gzip them and concatenate them just like cat would. If your parser supports piped input on stdin, you can pipe zcat's output into it directly; otherwise, redirect the output to a file and then invoke your parser on that file.
If the program actually expects a gzip'd file, then just pipe the output from zcat to gzip to recompress the combined file into a single gzip'd archive.
http://www.mkssoftware.com/docs/man1/zcat.1.asp
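A short sketch of the recompress-and-combine approach (file and program names are hypothetical):
# Decompress and concatenate all the parts, then recompress into one archive
zcat part1.gz part2.gz part3.gz | gzip > combined.gz
# Hand the parser the single gzip'd file it expects
myparser combined.gz
If the parser can instead read uncompressed text on stdin, the intermediate file can be skipped entirely: zcat part1.gz part2.gz part3.gz | myparser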