I've been tasked with moving an old dynamic website from a Windows server to Linux. The site was initially written with no regard to character case. Some filenames were all upper-case, some lower-case, and some mixed. This was never a problem in Windows, of course, but now we're moving to a case-sensitive file system.
A quick find/rename command (thanks to another tutorial) got the filenames to all lowercase.
However, many of the URL references in the code still point to the old mixed-case filenames, so I enabled mod_speling to overcome this issue. It seems to work OK for the most part, with the exception of one page: I have a file named haematobium.html, and every time a link points to .../haematobium.html, the URL gets rewritten as .../hæmatobium.html in the browser.
I don't know how this strange character made its way into the filename in the first place, but I've corrected the code in the HTML document to now link to haematobium.html, then renamed the haematobium.html file itself to match.
When requesting .../haematobium.html in Chrome, it "corrects" to .../hæmatobium.html in the address bar, and shows an error saying "The requested URL .../hæmatobium.html was not found on this server."
In IE9, I'm prompted for the login (this is a .htaccess-protected page), I enter it, and then it forwards the URL to .../h%C3%A6matobium.html, which again doesn't load.
In my frustration I even copied haematobium.html to both hæmatobium.html and hæmatobium.html; still, none of the three pages actually load.
So my question: I read somewhere that mod_speling tries to "learn" misspelled URLs. Does it actually rename files (is that where the odd character might have come from)? Does it keep a cache of what's been called for, and what it was forwarded to (a cache I could clear)?
PS. there are also many mixed-case references to MySQL database tables and fields, but that's a whole 'nother nightmare.
[Cannot comment yet, therefore answering.]
Your question doesn't make it entirely clear which of the two names (two characters ae [ASCII], or one ligature character æ [Unicode]) for haematobium.html actually exists in your Apache's file system.
Try the following in your shell:
$ echo -n h*matobium.html | hd
The output should be one of the following two alternatives. The first is ASCII, with 61 and 65 for a and e, respectively:
00000000 68 61 65 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |haematobium.html|
00000010
And this is Unicode, with c3 a6 for the single character æ:
00000000 68 c3 a6 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |h..matobium.html|
00000010
I would recommend using the ASCII version; it makes life considerably easier.
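If the hexdump shows the Unicode name, a one-off rename gets you onto the ASCII version. A minimal Python sketch (assuming a UTF-8 locale and that the file sits in the current directory):
import os
src = "h\u00e6matobium.html"   # the ligature name, stored as c3 a6 on disk
dst = "haematobium.html"       # the plain-ASCII name
if os.path.exists(src) and not os.path.exists(dst):
    os.rename(src, dst)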
Now to your actual question: mod_speling neither "learns" nor renames files, nor does it cache any data. The caching is done either by your browsers, or by proxies between your browsers and the server.
It's actually best practice to test such cases with command-line tools like wget or curl, which should already be available, or easily installable, on any Linux system.
Use wget -S or curl -i to actually see the response headers sent by your web server.
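If you would rather script the check than run curl by hand, here is a rough Python equivalent of curl -i (the host name is a placeholder); http.client does not follow redirects, so a 301 from mod_speling and its Location header are visible directly:
import http.client
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/haematobium.html")
resp = conn.getresponse()
print(resp.status, resp.reason)
for name, value in resp.getheaders():
    print(name + ": " + value)   # look for a Location: header on a 301 from mod_speling
conn.close()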
Because of md5-hash scanning tools like wpscan, I want to keep script kiddies from detecting as much information as possible about my WordPress site. With the following Perl snippet, I'm trying to add some extra characters to every requested PNG file. But it does not work, and I don't know why. Can somebody help me out?
My goal is not to change the files themselves - just the output sent for each request.
ExtFilterDefine pngfilter mode=output intype=image/png cmd="/usr/bin/perl -pe 'END { unless (-f q{/tmp/md5_filter.tmp}) { print qq(\/*) . time() . qq(\*/) } }'"
I use the same snippet logic for CSS and JS files, and there it works as expected.
It does work.
$ perl -pe 'END { print qq(/*) . time() . qq(*/) }' derpkin.png >derpkin_.png
$ diff <( hexdump -C derpkin.png ) <( hexdump -C derpkin_.png )
3023,3024c3023,3025
< 0000bce0 00 00 00 00 49 45 4e 44 ae 42 60 82 |....IEND.B`.|
< 0000bcec
---
> 0000bce0 00 00 00 00 49 45 4e 44 ae 42 60 82 2f 2a 31 36 |....IEND.B`./*16|
> 0000bcf0 35 36 33 35 30 37 37 36 2a 2f |56350776*/|
> 0000bcfa
At least, it works in the sense that it does exactly what you wanted it to do. But does it make sense to add arbitrary text to the end of a PNG? I'm not familiar enough with the PNG file format to answer that.
Caveat: It will not work on Windows because of CRLF ⇔ LF translation.
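As a quick sanity check of the underlying goal (the appended comment does change the md5 that a wpscan-style scanner would compare against), here is a small Python sketch using the derpkin.png sample from above:
import hashlib, time
original = open("derpkin.png", "rb").read()
modified = original + ("/*" + str(int(time.time())) + "*/").encode()
print(hashlib.md5(original).hexdigest())   # the stock hash a scanner would know
print(hashlib.md5(modified).hexdigest())   # different, so the fingerprint no longer matches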
I have a gzip file that has multiple blocks. Every block starts with
1F 8B 08
And ends with
00 00 FF FF
I tried to decompress the file using 7-Zip and the gzip tool on Linux, but I always get an error saying that the file is invalid.
So I wrote this Python script:
import zlib
CHUNKSIZE = 1
f = open("file.gz", "rb")
buffer = f.read(CHUNKSIZE)
data = ""
r = CHUNKSIZE
d = zlib.decompressobj(16 + zlib.MAX_WBITS)
while buffer:
    outstr = d.decompress(buffer)
    print(r)
    buffer = f.read(CHUNKSIZE)
    r = r + CHUNKSIZE
outstr = d.flush()
I have noticed that when it reaches the header of the second block
00 00 00 FF FF 1F 8B 08
at the point between FF and 1F, the script returns
zlib.error: Error -3 while decompressing data: invalid block type
I made the chunk size 1 so that I would know exactly where the problem is.
I know that the problem is not in the file because I have multiple files constructed the same way and they show exactly the same error.
I know that the problem is not in the file because I have multiple files constructed the same way and they show exactly the same error.
The conclusion is not that the problem is not in the file, but rather that the problem is in all of your files. Someone either inadvertently or deliberately constructed invalid gzip files. It looks like they did that by using Z_SYNC_FLUSH or Z_FULL_FLUSH instead of Z_FINISH to end each stream before starting another faux gzip stream. A gzip stream ends with a last block followed by an eight-byte gzip trailer containing two check values on the integrity of the uncompressed data.
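To illustrate that hypothesis, here is a guess (nothing more) at how each faux member could have been produced with zlib from Python; Z_SYNC_FLUSH emits the 00 00 ff ff marker and never writes the eight-byte gzip trailer:
import zlib
def faux_member(payload):
    # gzip wrapper (wbits = 16 + MAX_WBITS), but ended with a sync flush
    # instead of a finish, so no CRC-32/length trailer is ever written
    c = zlib.compressobj(9, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
    return c.compress(payload) + c.flush(zlib.Z_SYNC_FLUSH)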
You can nevertheless continue with decompression, though without the comfort of any integrity checking of the data, by simply picking up with a new instance of decompressobj when you get an error and see a new gzip header, 1f 8b 08.
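A rough sketch of that approach, reading the whole file at once and splitting on the header bytes (note that 1f 8b 08 could in principle also occur inside compressed data, which this naive scan ignores):
import zlib
data = open("file.gz", "rb").read()
# find the start offset of every faux gzip member
starts = []
i = data.find(b"\x1f\x8b\x08")
while i != -1:
    starts.append(i)
    i = data.find(b"\x1f\x8b\x08", i + 3)
out = b""
for n, start in enumerate(starts):
    end = starts[n + 1] if n + 1 < len(starts) else len(data)
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)   # fresh object per member
    out += d.decompress(data[start:end])
    # no d.flush(): the member was sync-flushed, so decompress() already
    # returned everything, and there is no trailer to verify
print(len(out), "bytes recovered")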
More importantly you should locate and contact the source of these files and say "Hey, WTF?"
How can I view executed SQL stored procedures from a memory crash dump of a C# .net application?
The information you want might not be available any more. The objects representing connections, queries and results may already have been garbage collected.
If you know the name of the class that is used for a query, you can use windbg, load the sos extension and use the !dumpheap -type <classname> command to find those objects. Then use !do <address> to display the details of such an object. This might reveal the statement behind it.
If you're looking for a specific query, download the netext extension. It comes with the !wfrom command which allows you to select objects from the .NET heap based on criteria that you define.
This question has no trivial answer, and many experienced debuggers have had to face this issue one way or another. Getting the information for a single SQL command is tedious but straightforward; doing it for many is difficult. I was helping a colleague in a situation like this and put together this solution. It requires NetExt (download the zip file): https://github.com/rodneyviana/netext/tree/master/Binaries
0:067> !windex
Starting indexing at 14:18:54 PM
1000000 objects...
2000000 objects...
3000000 objects...
4000000 objects...
Indexing finished at 14:19:04 PM
399,314,598 Bytes in 4,049,972 Objects
Index took 00:00:10
0:067> .foreach ({$addr} {!wfrom -type *.sqlcommand -nospace -nofield where (_commandType == 4) select _parameters._items._items}) { r #$t1= {$addr} ;!wfrom -type *.sqlcommand -nofield -nospace where (_parameters._items._items==$dbgeval("#$t1")) "\n", _commandText," [",$addr(),"]\n========================"; !wfrom -nospace -nofield -array {$addr} _parameterName,"=",$pp(_value) }
dbo.proc_getSiteIdOfHostHeaderSite [00000001011FCD90]
========================
#RETURN_VALUE=0n0
#HostHeader=sp.contoso.com
#RequestGuid={00000000-0000-0000-0000-000000000000}
proc_GetTpWebMetaDataAndListMetaData [0000000101D98DA0]
========================
#RETURN_VALUE=0n0
#WebSiteId={2815591d-55c8-4caf-842c-101d8807cb2a}
#WebId=NULL
#Url=
#ListId=NULL
#ViewId=NULL
#RunUrlToWebUrl=1
#DGCacheVersion=0n2
#SystemId=69 3a 30 29 2e 77 7c 73 2d 31 2d 35 2d 32 31 2d 31 33 38 35 31 37 34 39 39 32 2d 39 37 39 39 35 i:0).w|s-1-5-21-1385174992-97995 (...more...)
#MetadataFlags=0n18
#ThresholdScopeCount=0n5000
#CurrentFolderUrl=NULL
#RequestGuid={a00a0b30-1fcc-44d6-8346-ec20f8c49304}
proc_EnumLists [000000010236EDF0]
========================
#RETURN_VALUE=0n0
#WebId={a0a099b2-6023-4877-a7c1-378ec68df759}
#Collation=Latin1_General_CI_AS
#BaseType=NULL
#BaseType2=NULL
#BaseType3=NULL
#BaseType4=NULL
#ServerTemplate=NULL
#FMobileDefaultViewUrl=NULL
#FRootFolder=NULL
#ListFlags=0n-1
#FAclInfo=0n1
#Scopes=NULL
#FRecycleBinInfo=NULL
#UserId=NULL
#FGP=NULL
#RequestGuid={a00a0b30-1fcc-44d6-8346-ec20f8c49304}
(...)
I have a PCAP file that was created on a Mac with mergecap; it can be parsed on a Mac with Apple's libpcap but cannot be parsed on a Linux system. The combined file has an extra 16-byte header that contains 0a 0d 0d 0a 78 00 00 00 before the 4d 3c 2b 1a intro that's common in pcap files. Here is a hex dump:
0000000: 0a0d 0d0a 7800 0000 4d3c 2b1a 0100 0000 ....x...M<+.....
0000010: ffff ffff ffff ffff 0100 4700 4669 6c65 ..........G.File
0000020: 2063 7265 6174 6564 2062 7920 6d65 7267 created by merg
0000030: 696e 673a 200a 4669 6c65 313a 2037 2e70 ing: .File1: 7.p
0000040: 6361 7020 0a46 696c 6532 3a20 362e 7063 cap .File2: 6.pc
0000050: 6170 200a 4669 6c65 333a 2034 2e70 6361 ap .File3: 4.pca
0000060: 7020 0a00 0400 0800 6d65 7267 6563 6170 p ......mergecap
Does anybody know what this is? or how I can read it on a Linux system with libpcap?
I have a PCAP file
No, you don't. You have a pcap-ng file.
that can be parsed on a Mac with Apple's libpcap
libpcap 1.1.0 and later can also read some pcap-ng files (the pcap API only allows a file to have one link-layer header type, one snapshot length, and one byte order, so only pcap-ng files where all sections have the same byte order and all interfaces have the same link-layer header type and snapshot length are supported), and OS X Snow Leopard and later have a libpcap based on 1.1.x, so they can read those files.
(OS X Mountain Lion and later have tweaked libpcap to allow it to write pcap-ng files as well; the -P flag makes tcpdump write out pcap-ng files, with text comments attached to some outgoing packets indicating the process ID and process name of the process that sent them - pcap-ng allows text comments to be attached to packets.)
but cannot be parsed on a Linux system
Your Linux system probably has an older libpcap version. (Note: do not be confused by Debian and Debian derivatives calling the libpcap package "libpcap0.8" - they're not still using libpcap 0.8.)
combined file has an extra 16-byte header that contains 0a 0d 0d 0a 78 00 00 00
A pcap-ng file is a sequence of "blocks" that start with a 4-byte block type and a 4-byte length, both in the byte order of the host that wrote them.
They're divided into "sections", each one beginning with a "Section Header Block" (SHB); the block type for the SHB is 0x0a0d0d0a, which is byte-order-independent (so that you don't have to know the byte order to read the SHB) and contains carriage returns and line feeds (so that if the file is, for example, transferred between a UN*X system and a Windows system by a tool that thinks it's transferring a text file and that "helpfully" tries to fix line endings, the SHB magic number will be damaged and it will be obvious that the file was corrupted by being transferred in that fashion; think of it as the equivalent of a shock indicator).
The 0x78000000 is the length; what follows it is the "byte-order magic" field, which is 0x1A2B3C4D (which is not the same as the 0xA1B2C3D4 magic number for pcap files), and which serves the same purposes as the pcap magic number, namely:
it lets code identify that the file is a pcap-ng file
it lets code determine the byte order of the section.
(No, you don't need to know the length before looking for the magic number; once you've found the magic number, you then check the length to make sure it's at least 28 and, if it's less than 28, you reject the block as not being valid.)
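To make that layout concrete, here is a small Python sketch (the file name is a placeholder) that inspects the first 12 bytes and tells pcap from pcap-ng, including the section's byte order:
with open("capture.pcapng", "rb") as f:
    head = f.read(12)
if head[:4] == b"\x0a\x0d\x0d\x0a":                    # pcap-ng SHB block type
    if head[8:12] == b"\x1a\x2b\x3c\x4d":
        print("pcap-ng, big-endian section")
    elif head[8:12] == b"\x4d\x3c\x2b\x1a":
        print("pcap-ng, little-endian section")
    else:
        print("SHB found, but the byte-order magic is damaged")
elif head[:4] in (b"\xa1\xb2\xc3\xd4", b"\xd4\xc3\xb2\xa1"):
    print("classic pcap")
else:
    print("not a pcap or pcap-ng capture")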
Does anybody know what this is?
A (little-endian) pcap-ng file.
or how I can read it on a Linux system with libpcap?
Either read it on a Linux system with a newer version of libpcap (which may mean a newer version of whatever distribution you're using, or may just mean doing an update if that will get you a 1.1.0 or later version of libpcap), read it with Wireshark or TShark (which have their own library for reading capture files, which supports the native pcap and pcap-ng formats, as well as a lot of other formats), or download a newer version of libpcap from tcpdump.org, build it, install it, and then build whatever other tools need to read pcap-ng files with that version of libpcap rather than the one that comes with the system.
Newer versions of Wireshark write pcap-ng files by default, including in tools such as mergecap; you can get them to write pcap files with a flag argument of -F pcap.
I can't manage to set and retrieve strings with accents in my Redis db.
Chars with accents come back encoded; how can I retrieve them as they were set?
redis> set test téléphone
OK
redis> get test
"t\xc3\xa9l\xc3\xa9phone"
I know this has already been asked (http://stackoverflow.com/questions/6731450/redis-problem-with-accents-utf-8-encoding) but there is no detailed answer.
The Redis server itself stores all data as binary objects, so it is not dependent on the encoding. The server will just store what is sent by the client (including UTF-8 chars).
Here are a few experiments:
$ echo téléphone | hexdump -C
00000000 74 c3 a9 6c c3 a9 70 68 6f 6e 65 0a |t..l..phone.|
c3 a9 is the UTF-8 representation of the 'é' char.
$ redis-cli
> set t téléphone
OK
> get t
"t\xc3\xa9l\xc3\xa9phone"
Actually, the data is correctly stored in the Redis server. However, when it is launched in a terminal, the redis-cli client interprets the output and applies the sdscatrepr function to escape non-printable chars (whose definition is locale-dependent, and may be broken for multibyte chars anyway).
A simple workaround is to launch redis-cli with the 'raw' option:
$ redis-cli --raw
> get t
téléphone
Your own application will probably use one of the client libraries rather than redis-cli, so it should not be a problem in practice.
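For example, with the Python redis-py client (just one possible client, assumed here), you can either decode the returned bytes yourself or ask the client to do it:
import redis
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("test", "téléphone")
print(r.get("test"))   # prints: téléphone (returned as a str decoded from UTF-8)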