.NET Core: get RemoteIpAddress.MapToIPv4() behind a Cloudflare proxy - asp.net-core

I need to extract the user IP address (v4).
I have the following code to do so:
HttpContext.Connection.RemoteIpAddress.MapToIPv4().ToString();
The problem is that in this case I am getting the Cloudflare IP address.
How can I get the forwarded v4 IP address?
Thanks

Cloudflare passes all HTTP request headers to your origin web server and adds additional headers.
The header:
CF-Connecting-IP
provides the origin web server with the IP address of the client that connected to Cloudflare.
You can also use the header:
X-Forwarded-For
which maintains proxy server and original visitor IP addresses.
For more information about the Cloudflare headers, refer to the Cloudflare documentation.
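For illustration, the two headers might arrive at the origin looking like this (the addresses are placeholders):
CF-Connecting-IP: 198.51.100.7
X-Forwarded-For: 198.51.100.7, 203.0.113.10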
You can create a method that checks these headers in order and returns the client IP address:
private string GetRemoteIpAddress(HttpContext context)
{
    // Try the Cloudflare client IP header first
    string cfConnectingIp = context.Request.Headers["CF-Connecting-IP"];
    if (!string.IsNullOrEmpty(cfConnectingIp))
        return cfConnectingIp;

    // Fall back to X-Forwarded-For; the original client is the first
    // entry, each proxy appends itself to the end of the list
    string forwardedFor = context.Request.Headers["X-Forwarded-For"];
    if (!string.IsNullOrEmpty(forwardedFor))
    {
        var addresses = forwardedFor.Split(',');
        if (addresses.Length > 0)
            return addresses[0].Trim();
    }

    // No proxy headers present: use the connection's remote address
    return context.Connection.RemoteIpAddress?.ToString();
}
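Alternatively, here is a minimal sketch of the built-in approach, assuming ASP.NET Core 2.0+ with Cloudflare as the only proxy in front of your app (the proxy address below is a placeholder): the ForwardedHeaders middleware from Microsoft.AspNetCore.HttpOverrides can rewrite Connection.RemoteIpAddress for you, so your original one-liner keeps working unchanged.

using System.Net;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseForwardedHeaders(new ForwardedHeadersOptions
        {
            // Rewrite Connection.RemoteIpAddress from Cloudflare's header
            ForwardedHeaders = ForwardedHeaders.XForwardedFor,
            ForwardedForHeaderName = "CF-Connecting-IP",
            // Only loopback proxies are trusted by default; add the
            // proxy addresses you actually receive traffic from
            KnownProxies = { IPAddress.Parse("203.0.113.10") } // placeholder
        });
        // ... rest of the pipeline; RemoteIpAddress.MapToIPv4() now
        // returns the visitor's address, not Cloudflare's
    }
}

This also avoids trusting a spoofable header from arbitrary clients, since the middleware only honors the header when the request actually comes from a listed proxy.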

Related

SSL redirect changes client IP address read from HTTPResponse

I am using the Perfect framework for my server-side application running on an AWS EC2 instance. I am using the following code to get the client IP address:
open static func someapi(request: HTTPRequest, _ response: HTTPResponse) {
    var clientIP = request.remoteAddress.host
}
This was working fine until I installed an SSL certificate on my EC2 instance and started redirecting incoming traffic to port 443.
Now this code gives me the IP of my server; I think that, due to the redirect, Perfect somehow thinks the request comes from itself.
Is there any other method to get the client IP address, or do I have to try something else?
Thanks!
For anyone struggling with the same problem: when there is a redirect, the original client IP can be found in the header field called "xForwardedFor", like the following:
var clientIP = request.remoteAddress.host
let forwardInfoResult = request.headers.filter { (item) -> Bool in
    item.0 == HTTPRequestHeader.Name.xForwardedFor
}
if let forwardInfo = forwardInfoResult.first {
    clientIP = forwardInfo.1
}
Hope this helps somebody, cheers!
Perhaps you should ask the people you are paying for support, and who manage the infrastructure, how it works before asking us?
The convention, where an HTTP connection is terminated somewhere other than the server, is to inject an X-Forwarded-For header. If such a header is already present, the intermediate server appends the address of the connection it received to the end of the list, so the original client stays at the front.
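For example, a request that has crossed two proxies might arrive carrying (placeholder addresses):
X-Forwarded-For: 203.0.113.50, 198.51.100.7
where 203.0.113.50 is the original client and 198.51.100.7 is the first intermediate proxy.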

JAX-RS injected UriInfo returning localhost for REST requests behind a reverse proxy

I have a set of IBM WebSphere Liberty profile servers behind an HAProxy reverse proxy. Everything works fine, but HAProxy is doing something to the requests, so I can't get the correct URL using uriInfo.getBaseUri() or uriInfo.getRequestUriBuilder().build("whatever path")... they both return localhost:9080 as host and port, so I can't build correct URLs pointing to the service. (The request is a standard http://api.MYHOST.com/v1/... REST request.)
Of course, I get a UriInfo object using @Context in the method, so it has the request information.
Front-end configuration:
reqadd X-Forwarded-Proto:\ http
# Add CORS headers when Origin header is present
capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Methods:\ GET,\ HEAD,\ OPTIONS,\ POST,\ PUT if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Credentials:\ true if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
And the back-end configuration:
option forwardfor
http-request set-header Host api.MYHOST.com
http-request set-header X-Forwarded-Host %[dst]
http-request set-header X-Forwarded-Port %[dst_port]
Any ideas on how to get the real request?
The only way I managed to get the correct host used in the request was by injecting the HttpServletRequest object into the method parameters.
I also inject the UriInfo, which has all the valid information except the host name:
@Context UriInfo uriInfo, @Context HttpServletRequest request
After that I use URIBuilder (not UriBuilder) from the Apache HttpClient utilities to change the host to the correct one, as the JAX-RS UriBuilder is immutable:
new URIBuilder(uriInfo.getBaseUriBuilder().path("/MyPath").queryParam("MyParameter", myParameterValue).build()).setHost(request.getServerName()).toString()
I also had to include setPort() and setScheme() to make sure the correct port and scheme are used (the correct values are in HttpServletRequest, not UriInfo).
I just faced this very issue in my Jersey-based application. I used uriInfo.getBaseUriBuilder() to get a UriBuilder and figured out that it's possible to change the hostname from localhost by using the .host() method:
.host(InetAddress.getLocalHost().getHostName())
And you can remove the port part by setting it to -1
.port(-1)
So from a URL that looks like
https://127.0.0.1:8443/hello
I got
https://yourhostname/hello

HAProxy Transparent Proxy to AWS S3 Static Website Page

I am using HAProxy to balance a cluster of servers, and I am attempting to add a maintenance page to the HAProxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. The question is: how can I use a maintenance page hosted remotely on an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e., without the HAProxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup') pointing to a static address on S3. Note that I don't want paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are on HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
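As a pointer only, a minimal sketch of such a resolvers setup on HAProxy 1.6 might look like this (the nameserver address and timings are placeholders; consult the manual for your version):
resolvers mydns
    nameserver dns1 192.0.2.53:53
    hold valid 10s
...and on the server line:
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup resolvers mydns check
With this in place, the S3 endpoint's address is re-resolved at runtime (triggered by the health checks) instead of only at startup.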
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in the defaults section, but it's also valid in a backend -- so this raw file will be returned automatically by the proxy for any request that tries to use this back-end while all of its servers are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that in spite of the fact that you want to "transparently proxy a request," I don't think the phrase "transparent proxy" is the correct one for what you're trying to do. A "transparent proxy" implies that the client or the server (or both) would see each other's IP addresses on the connection and think they were communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or the network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

Client IP Address from Request Header in Rails

Can I access the IP address of the client from the request header? If yes, how?
Yes you can.
visitor_ip = request.env['REMOTE_ADDR']
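Note that REMOTE_ADDR holds the address of whatever actually opened the TCP connection, so behind a proxy or load balancer it may be the proxy's address rather than the visitor's. Rails also exposes request.remote_ip, which takes the X-Forwarded-For header into account and is usually the safer choice in a proxied setup.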

ISAPI Filter LDAP Authentication Error on DMZ Server

I am writing an ISAPI filter for a web server that we have running in a DMZ. This ISAPI filter needs to connect to our internal domain controllers to authenticate against Active Directory. There is a rule in the firewall to allow traffic from the DMZ server to our domain controller on port 636, and the firewall shows that the traffic is passing through just fine. The problem lies in the ldap_connect() function: I am getting error 0x51 (Server Down) when attempting to establish the connection. We use the domain controller's IP address instead of the DNS name since the web server is outside the domain.
ISAPI LDAP connection code:
// Fragment of the filter; username, password, domain, servers, and
// valid_user come from the surrounding code. Link against wldap32.lib.
// (The includes and declarations below are added here for completeness.)
#include <windows.h>
#include <winldap.h>
#include <string.h>

LDAP *ldap = NULL;
ULONG ldap_response = 0;
ULONG version = LDAP_VERSION3;
LDAP_TIMEVAL time = { 0 };
SEC_WINNT_AUTH_IDENTITY AuthId = { 0 };
char search[256];

// Set search criteria
strcpy(search, "(sAMAccountName=");
strcat(search, username);
strcat(search, ")");
// Set timeout
time.tv_sec = 30;
time.tv_usec = 30;
// Set up user authentication
AuthId.User = (unsigned char *) username;
AuthId.UserLength = strlen(username);
AuthId.Password = (unsigned char *) password;
AuthId.PasswordLength = strlen(password);
AuthId.Domain = (unsigned char *) domain;
AuthId.DomainLength = strlen(domain);
AuthId.Flags = SEC_WINNT_AUTH_IDENTITY_ANSI;
// Initialize the LDAP over SSL session (port 636)
ldap = ldap_sslinit(servers, LDAP_SSL_PORT, 1);
if (ldap != NULL)
{
    // Set LDAP options
    ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, (void *) &version);
    ldap_set_option(ldap, LDAP_OPT_SSL, LDAP_OPT_ON);
    // Make the connection
    //
    // FAILS HERE!
    //
    ldap_response = ldap_connect(ldap, &time);
    if (ldap_response == LDAP_SUCCESS)
    {
        // Bind to the LDAP connection with NTLM credentials
        ldap_response = ldap_bind_s(ldap, (PCHAR) AuthId.User, (PCHAR) &AuthId, LDAP_AUTH_NTLM);
    }
}
// Unbind the LDAP connection if it was established
if (ldap != NULL)
    ldap_unbind(ldap);
// Return string
return valid_user;
servers = <DC IP Address>
I have tested this code on my local machine, which is in the same domain as AD, and it works with both LDAP and LDAP over SSL. We have a server certificate installed on our domain controller from the Active Directory Enrollment Policy, but I read elsewhere that I might need to install a client certificate as well (for our web server). Is this true?
Also, we have a separate WordPress site running on the same DMZ web server that connects to LDAP over SSL just fine. It uses OpenLDAP through PHP and connects using the IP addresses of our domain controllers. We have an ldap.conf file with the line TLS_REQCERT never. Is there a way to mimic this effect in Visual C for the ISAPI filter? I'm hoping this is a programming issue more than a certificate issue. If this is out of the realm of programming, please let me know or redirect me to a better place to post this.
Thanks!
Solved the problem by adding the CA to the certificate store on the web server. The CA was never copied over before.
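As a side note on the TLS_REQCERT question above: with the Microsoft LDAP client (wldap32), the rough equivalent is to register a server-certificate verification callback via ldap_set_option with the LDAP_OPT_SERVER_CERTIFICATE option; a callback that simply returns TRUE skips certificate validation, which, like TLS_REQCERT never, is only appropriate for troubleshooting.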