Block sqlmap injection requests via iptables/firewall

I was checking the security of my RHEL 5 server. I ran sqlmap against it and its requests went straight through the iptables firewall.
I want to block sqlmap injection attempts at the firewall, so that if anybody tries the same thing the requests are blocked there.

That might not be possible to do.
You could block any attempts to connect that use a user-agent containing the string "sqlmap". However, sqlmap comes with the --random-agent option which would make it appear just like any other browser.
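If you wanted to try that anyway, iptables can match packet payloads with its string module. A rough sketch, assuming plain HTTP on port 80 and that the xt_string match is available on your kernel:

iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string "sqlmap" -j DROP

This only inspects unencrypted traffic and, as noted above, is defeated by --random-agent, so treat it as a speed bump rather than real protection.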
If you are running a web server, it would be very difficult to differentiate between sqlmap and legitimate requests.
What you need is an IDS, or mod_security (if you use apache).
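For example, a minimal mod_security rule along these lines would reject requests whose User-Agent mentions sqlmap (the rule id and status code are illustrative, not from any particular rule set):

SecRule REQUEST_HEADERS:User-Agent "sqlmap" "id:100001,phase:1,deny,status:403,msg:'sqlmap scanner detected'"

A real deployment would normally use a maintained rule set such as the OWASP Core Rule Set rather than one-off rules like this.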


If I change web hosting and re-point my domain to it, can it still read secure cookies from the previous server? [duplicate]

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965 3.3.1 (which might or might not be followed by browsers), unless the port is explicitly specified via the Port attribute of the Set-Cookie2 header, cookies may or may not be sent to any port.
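For reference, the mechanism RFC 2965 describes is the Port attribute of the Set-Cookie2 header, which would look roughly like this (illustrative values):

Set-Cookie2: SID="abc123"; Version="1"; Path="/"; Port="8000"

Set-Cookie2 was never widely implemented by browsers, which is part of why RFC 6265 dropped it.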
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information," and some lines later, "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I would jump between them (http://localhost:3000 and http://localhost:4000), Chrome would send the same cookie; each service would not understand the cookie and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify the port number in the domain and the cookie will not be shared. In practice, this doesn't work with several browsers and you will run into other issues. So this is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to go around the problem, is to make the name of the session cookie be port related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
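As a rough sketch of this idea in a Django-style setup (the environment variable and cookie prefix are assumptions for illustration; SESSION_COOKIE_NAME is Django's setting for the session cookie name):

# settings.py
import os
# the port this instance listens on, e.g. exported by the start script
SERVER_PORT = os.environ.get("SERVER_PORT", "8000")
SESSION_COOKIE_NAME = "mysession" + SERVER_PORT  # mysession8000, mysession8080, ...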
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second, I always got logged out of the first one, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It's not necessary; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, Section 4.3.1

Apache Reverse Proxy Using a Network Proxy Credential?

I'm trying to set up a reverse proxy on Apache 2.2 (Windows). I am able to do it on a non-corporate network without any problems. I am attempting to reverse proxy content from a vendor domain, but keep it under my own domain for SEO reasons.
dev.example.com/stuff ===> devstuff.vendor.com
However, when I try to incorporate this on my internal network, the Internet Gateway proxy is blocking the request, presumably as I'm not properly authenticating the call to the external domain.
dev.example.com ===> Internet Proxy =X=> devstuff.vendor.com
I've been googling every term I can think of and reading the Apache docs and can't find anything which seems to work. I have tried running Apache as a service with a network account which would have access, but naturally, it's probably not trying to use the proxy at all.
Is there any way to tell Apache to send external ProxyPass requests to use a specific proxy server, and perhaps a specific username/password as well? I'd love to avoid modifying the proxy or firewall too heavily to accomplish this.
Thanks!
Never quite did figure out the "with passing credentials" part, but using the ProxyRemote directive, we could pass everything for our devstuff.vendor.com domain through our network proxy. From there, we had a proxy exception put in to allow requests from our web server IPs without authentication, since this was an approved arrangement anyhow.
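For reference, the directives ended up looking roughly like this (the proxy hostname and port are placeholders, not our real configuration):

# route outbound requests for the vendor host through the corporate proxy
ProxyRemote http://devstuff.vendor.com http://internalproxy.corp.example:8080
ProxyPass /stuff http://devstuff.vendor.com/
ProxyPassReverse /stuff http://devstuff.vendor.com/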
Though, in hindsight, even after solving this, we ended up backing up one step further and just going straight out through the firewall, both for performance reasons (too many hops for the end user) and because of the negative impact on our proxy server.

NTLM authentication and smartcards

I'm running a program (Mathematica) in a VMWare VPC behind a corporate internet proxy. Various programs installed in that VPC like IE, Chrome, Excel, Word, Acrobat Reader, and even MS Paint get data from the Internet without problems, but Mathematica doesn't seem to handle the proxy correctly.
My guess is it's not able to handle the proxy's NTLM authentication.
In an earlier situation, behind a different firewall, I had some success with CNTLM as an intermediate between Mathematica and the proxy. CNTLM talks to the proxy and takes care of the NTLM authentication, and Mathematica is given the port CNTLM listens to and ip address (localhost), to talk to. However, in that earlier case I knew the credentials to be used for the proxy (i.e., my own).
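For completeness, a cntlm.conf for that earlier kind of setup looks roughly like this (all values are placeholders):

Username    myuser
Domain      CORP
Password    mypassword   # or store hashes generated with cntlm -H instead
Proxy       proxy.corp.example:8080
Listen      3128

Mathematica was then pointed at localhost port 3128 as its HTTP proxy.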
In the current situation, my logon takes place using a smartcard and a PIN. The VPC gets credentials passed transparently (I don't have to enter them) and apparently all the programs I mentioned above automagically know about them. This makes me think Mathematica or CNTLM should be able to do this as well. However, my PIN used as password doesn't work (in fact, I get locked out if I try too often). I assume that the credentials used are in fact not my own but are either the windows password (that I don't have as smartcard user) or are derived from my PIN and smartcard.
My question is: how can I make this setup work? This may involve CNTLM, but other solutions are welcome as well.
You could have a chance by using a web debugging proxy such as Fiddler.
Like CNTLM, Fiddler acts as a local proxy and allows applications that support a proxy, but do not support NTLM (they only support a "plain" proxy), to use the corporate proxy indirectly, through the local proxy.
Unlike CNTLM, Fiddler doesn't require you to configure the credentials; it uses the current user's credentials to authenticate the web requests.
I can't be sure that this is the solution for you, since I don't have an environment like yours, but this workaround works in some other cases, as reported in this answer about a Ruby gem and/or this blog about Mercurial, so I hope it can work with Mathematica too.
Note: once you run Fiddler, it automatically configures the browser proxy to point to itself (http://localhost:8888), so you can leave the proxy settings of your application at "Use Proxy Settings from My System or Browser". By the way, Fiddler is not just a local proxy; it can also be used for troubleshooting and debugging. The feature list is available here.
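For applications that do not pick up the system proxy settings, you can usually point them at Fiddler explicitly; on Windows, for example, setting the proxy environment variables in the shell that launches the application (8888 is Fiddler's default listening port):

set HTTP_PROXY=http://127.0.0.1:8888
set HTTPS_PROXY=http://127.0.0.1:8888

Whether Mathematica honours these variables is something you would have to test; its own proxy settings may be the more reliable route.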

Connection Reuse with Curl, Apache and mod_wsgi

I am deploying a mod_wsgi application on top of Apache, and have a client program that uses Curl.
On the curl API on the client side, I have it attempt to reuse connections, but looking at the traffic in Wireshark, I see that a new connection is made for every HTTP request/response.
At the end of every HTTP request, the HTTP response header has "Connection: Close"
Is this the same as Keep-Alive? What do I need to do on the Apache/Mod_wsgi side to enable connection re-use?
You would not generally need to do anything to Apache as support for keep alive connections would normally be on by default. Look at the KeepAlive directive in Apache configuration to work out what it is set to.
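For reference, the relevant httpd.conf directives look like this (the values shown are the usual defaults, not a recommendation):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5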
On top of that, for keep alive connections to work the WSGI application must set a content length in the response, or return a list containing only a single string, in which case mod_wsgi will automatically add the content length for the response. The response generally also needs to be a successful one, as most error responses cause the connection to be closed regardless.
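A minimal sketch of a WSGI application that satisfies this:

def application(environ, start_response):
    body = b'Hello, world!\n'
    # Setting Content-Length explicitly lets keep alive work; alternatively,
    # omitting it but returning a single-string list (as below) lets mod_wsgi
    # compute and add the Content-Length header itself.
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]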
Even having done all that, the issue is whether curl's ability to fetch multiple URLs even makes use of keep alive connections. Obviously separate invocations of curl will not be able to, so the fact that you are even asking this question suggests you are trying to use that feature of curl. The only other option would be if you were using a custom client linked against libcurl, in which case you really meant libcurl.
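A quick way to check from the command line is to pass several URLs to a single curl invocation and watch the verbose output; when keep alive is working, curl reports that it is re-using the existing connection for the second request (the URLs here are placeholders):

curl -v http://yourserver/page1 http://yourserver/page2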
Do note that if access to Apache is via a proxy, the proxy may not implement keep alive and so stop the whole mechanism from working.
To give more information, I would need to know how you are using curl.

How to put up an off-the-shelf https to http gateway?

I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put another server that would listen to an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just plugging parts already made (like Apache + modules)?
This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
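A minimal stunnel.conf for this, assuming the internal HTTP server is reachable at 10.0.0.5:80 and you already have a certificate for the external name (addresses and paths are placeholders):

cert = /etc/stunnel/stunnel.pem
[https]
accept = 443
connect = 10.0.0.5:80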
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).
With Apache, look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
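As a sketch, the mod_proxy plus mod_ssl approach boils down to a virtual host along these lines (server names, certificate paths and the internal host are placeholders):

<VirtualHost *:443>
    ServerName external.example.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/external.example.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/external.example.com.key
    ProxyPass / http://internal-server.local/
    ProxyPassReverse / http://internal-server.local/
</VirtualHost>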