I have an application running on Apache 2.4.33, and we are trying to test this application for SQL injection vulnerabilities using the sqlmap command below:
sqlmap -u 'http://hostip/appurl?query_type=something&element=*' -D check -T configuration --dump
The application is running on SSL (port 443), and when the above URL is hit through a browser it gets redirected to HTTPS. The same thing happens with the above command, and we see the following:
[00:02:01] [INFO] testing connection to the target URL
sqlmap got a 302 redirect to 'https://hostip/appurl?query_type=something&element='. Do you want to follow? [Y/n] n
and the utility works properly.
However, when we run the sqlmap command directly against the URL it was being redirected to:
sqlmap -u 'https://hostip/appurl?query_type=something&element=' -D check -T configuration --dump
we get a 500 Internal Server Error.
The Apache access log shows:
10.21.12.170 - - [03/Jul/2018:12:51:12 +0530] "GET https://hostip/appurl?query_type=something&element= HTTP/1.1" 500 -
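One thing worth noting (this is only an observation, not a confirmed fix): the HTTPS command above drops the * injection marker and the parameter value that were present in the original HTTP URL, so a direct run against the HTTPS endpoint would presumably keep them, e.g.:
sqlmap -u 'https://hostip/appurl?query_type=something&element=*' -D check -T configuration --dump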
I have a script that sends POST requests to an Apache load balancer to change the status_D parameter of a specified worker. This is supposed to enable or disable the worker (0 = enable, 1 = disable).
This used to work, but it doesn't anymore. The script is in Perl, but I tried sending the same request using curl with the same result: the status does not change.
If I open the load balancer web page in a browser and change it from there, it works.
I even captured the browser's POST request parameters from the Apache log and copied and pasted them into the curl command, but it still did not work, which makes me think the parameters are fine, but perhaps something has changed in Apache or proxy_balancer_module recently? The Apache version is 2.4.52.0.1.
In newer versions you need to add a Referer header to the HTTP request:
curl -s -o /dev/null -XPOST "http://${server}:${port}/${manager}?" \
-H "Referer: http://${server}:${port}/${manager}?b=${balancer}&w=${worker}&nonce=${nonce}" -d b="${balancer}" \
-d w="${worker}" -d nonce="${nonce}" -d w_status_D=1
I am referring to this link https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html
to enable SSL on my app, which is running with Docker. The problem is that when I run the command below:
docker run -it --rm \
-v certs:/etc/letsencrypt \
-v certs-data:/data/letsencrypt \
deliverous/certbot \
certonly \
--webroot --webroot-path=/data/letsencrypt \
-d api.mydomain.com
It throws an error:
Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw:
Can anyone please help me and let me know if I am missing something or doing something wrong?
What seems to be missing from that article and possibly from your setup is that the hostname api.mydomain.com needs to have a public DNS record pointing to the IP address of the machine on which the Nginx container is running.
The Let's Encrypt process is trying to access the file api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw. This file is put there by certbot. If the address api.mydomain.com does not resolve to the address of the machine from which you are running certbot then the process will fail.
You will also need to have ports 80 and 443 open for it to work.
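If it helps, a rough way to sanity-check both points from the machine running the containers (the test file name is made up, and /data/letsencrypt is the container path from the question; on the host you would write into whatever directory backs the certs-data volume that Nginx serves /.well-known/ from):
# DNS check: should print the public IP of the machine running the Nginx container
dig +short api.mydomain.com
# Webroot check: drop a throwaway file into the challenge directory and fetch it over plain HTTP
mkdir -p /data/letsencrypt/.well-known/acme-challenge
echo ok > /data/letsencrypt/.well-known/acme-challenge/test-file
curl http://api.mydomain.com/.well-known/acme-challenge/test-file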
Based on the available info, that is my best suggestion on where you can start looking to resolve the issue.
I'm currently struggling to get graylog working over https in a docker environment. I'm using the jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message: Bad request
Original Request: GET http://test.myserver.com/api/system/sessions
Status code: undefined
Full error message: Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get a reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without https.
Nothing relevant is mentioned in the Graylog container's Docker log.
docker ps:
CONTAINER ID   IMAGE             COMMAND                   CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"    5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running Docker with the option -p 9000:9000, everything works fine without HTTPS, but as soon as I force it to go over HTTPS I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!
Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" ?
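For reference, that would be the same docker run command from the question with only the endpoint URI changed (everything else exactly as posted):
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server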
I am trying to set up an SSH tunnel, but I am new to this process. This is my setup:
Machine B has a web service with restricted access. Machine A has been granted access to Machine B's service, based on a firewall IP whitelist.
I can connect to Machine A using an ssh connection. After that I try to access the webservice on Machine B from my localhost, but I cannot.
The webservice endpoint looks like this:
service.test.organization.com:443/org/v1/sendData
So far, I have created an ssh tunnel like this:
ssh -L 1234:service.test.organization.com:443 myuser@machineb.com
My understanding was that using this approach, I could hit localhost:1234 and it would be forwarded to service.test.organization.com:443, through Machine B.
I have confirmed that from Machine B, I can execute a curl command to send a message to the webservice, and I get a response (so that is working). I have tried using Postman in my browser, and curl in a terminal from localhost, but I have been unsuccessful. (curl -X POST -d @test.xml localhost:1234/org/v1/sendData)
Error message: curl: (52) Empty reply from server
There's a lot of material on SSH and I am sifting through it, but if anyone has any pointers, I would really appreciate it!
Try adding a Host HTTP header: curl -H "Host: service.test.organization.com" -X POST -d @test.xml http://localhost:1234/org/v1/sendData
The networking issue was caused by the request format. My request object was built with a destination of 'localhost:1234'. So even though it was reaching the proper machine, the machine ignored it.
To solve this I added a record to my hosts file, like this:
127.0.0.1 service.test.organization.com
Then I was able to send the message. First I opened the tunnel:
ssh -L 443:service.test.organization.com:443 myuser@machineb.com
Then I used this curl command: curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData
The hosts file causes the address to resolve to localhost, and the SSH tunnel then forwards it on.
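Putting that together, the working sequence was roughly the following (binding local port 443 usually requires root privileges, which is an assumption about your environment):
# /etc/hosts entry so the service hostname resolves to the local end of the tunnel
127.0.0.1 service.test.organization.com
# open the tunnel: local port 443 -> service.test.organization.com:443 via Machine B
ssh -L 443:service.test.organization.com:443 myuser@machineb.com
# send the request through the tunnel
curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData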
I am using JMeter to hit an HTTPS URL which is behind a firewall, and I am using a proxy.
When I use wget, I get this error:
--2014-12-12 16:14:49-- https://xxx.company.com/
Resolving proxy.net... xx.xx.xx.xx
Connecting to proxy.net|xx.xx.xx.xx|:80... connected.
ERROR: certificate common name "yyy.company.COM" doesn't match requested host name "xxx.company.com".
To connect to xxx.company.com insecurely, use '--no-check-certificate'.
After using the --no-check-certificate option, wget works fine.
But when I run the same request through JMeter, I get a connection timed out error. Can anybody tell me how I can use the --no-check-certificate option in JMeter?
I am using JMeter 2.9.
By default JMeter does not validate certificates, so your issue is not what you think it is.
If wget works on the same machine where you get the timeout in JMeter, check that JMeter uses the same proxy configuration:
See this:
http://jmeter.apache.org/usermanual/get-started.html#proxy_server
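As that page describes, the proxy can be passed on the JMeter command line. A minimal sketch, assuming the same proxy host and port that wget used (proxy.net and 80 are taken from the wget output above; the .jmx and .jtl file names are placeholders):
jmeter -H proxy.net -P 80 -n -t your_test_plan.jmx -l results.jtl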
echo "check_certificate = off" >> ~/.wgetrc