Consider the case where there exists a simple client-server web application where the client sends requests to the server. If the server sends a request to an external API, which IP and header values will be detected by the API? Those of the client that first sent the request to the server, or those of the server?
Only the IP of the party that actually makes the request will be visible to the API. So if there is a chain of requests, only the IP of the last hop will be accessible to the receiving party; anything else, such as the original client's IP, has to be passed along explicitly (conventionally in an X-Forwarded-For header).
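To make this concrete, here is a minimal sketch of the usual workaround, assuming a Flask relay server and the requests library (the external API URL is hypothetical): the server passes the original client's IP along explicitly in an X-Forwarded-For header, because at the TCP level the API only ever sees the server's own IP.

import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/relay")
def relay():
    # request.remote_addr is the client's IP as seen by this server.
    # The external API sees this server as the connection peer; the
    # original client's IP only arrives because we put it in a header.
    resp = requests.get(
        "https://api.example.com/endpoint",  # hypothetical external API
        headers={"X-Forwarded-For": request.remote_addr},
    )
    return resp.text, resp.status_code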
TL;DR: How to authenticate against NGINX if auth headers are not supported on client side?
I am building an IoT-related project using NGINX as a reverse proxy for the server-side services and 1NCE as the LTE carrier for the mobile devices. All traffic is authenticated based on HTTPBasicAuth over SSL-encrypted connections, and handling "normal" requests works as desired.
As mobile service might be interrupted and the Internet connection might be lost, I want to send SMS for critical status reports and alarm notifications. 1NCE supports mobile-originated SMS (MO SMS), which are handled by 1NCE's internal infrastructure and forwarded to a configurable API endpoint. So MO SMS are not delivered to a specified phone, but forwarded via an API request which I need to process on my side.
According to 1NCE's SMS documentation and in consultation with their customer support, SMS forwarding does not support any authentication headers. SMS forwarding can only be done by specifying an HTTPS URL (including the desired API endpoint) and a port. The incoming SMS is then wrapped in a request to the given URL and sent in the request body.
I want to add authentication to the SMS forwarding endpoint (receiving forwarded SMS on my side) as well and am currently wondering how I could achieve this. NGINX supports authentication subrequests, which could be used to evaluate incoming requests with an internal service. So my first idea was to add some credentials to each SMS (as I am also responsible for the SMS sending part of the code on the mobile devices, I could implement whatever is needed) and check those credentials with an internal service called by NGINX's subrequest. However, this does not seem to be doable: according to this SO question, GET requests are used for the internal subrequests, hence the body of the incoming POST request is discarded. Therefore, the credentials inside the forwarded SMS would not be available to my internal auth service. Extending NGINX's auth capabilities by writing a custom Lua-based plugin was my second idea, but this not only seems infeasible, it is also not supported by the NGINX instance I am using (Lua modules are disabled, and switching to OpenResty seems to be a big undertaking).
My last idea would be to forward all incoming requests to a Python web service (written in Flask; other services I am using are also written in Flask) and parse the forwarded SMS in Python. Based on the result of the credential evaluation I could return a 401/Unauthorized status code if the credentials provided in the SMS (which is part of the request body) are invalid, and process the request otherwise. However, I think this approach is quite ugly, as all incoming requests need to be passed to Flask and invalid requests are not rejected at the level of my reverse proxy.
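For what it's worth, a minimal sketch of that Flask idea (the JSON payload shape and the "<secret>:<message>" convention are my own assumptions, not 1NCE's actual forwarding format):

import hmac
from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = "change-me"  # provisioned onto the mobile devices

@app.route("/sms", methods=["POST"])
def receive_sms():
    payload = request.get_json(silent=True) or {}
    # Assumption: each device prefixes its SMS text with "<secret>:".
    secret, _, message = payload.get("message", "").partition(":")
    if not hmac.compare_digest(secret, SHARED_SECRET):
        abort(401)  # reject forwarded SMS with missing/invalid credentials
    # ... process the authenticated message here ...
    return "", 204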
Do you have any ideas about how to approach this issue? What would be a reasonable approach with regards to "best practices"? Can I extend NGINX in a way that solves this, or should I drop NGINX entirely in favor of a "better" proxy?
I'm currently playing around with SignalR and websockets. From my research it seems that, since websockets do not support custom headers, there are basically only two ways to authenticate a websocket connection when using token-based authentication.
1) Passing the token in the query string
2) Storing the token in a cookie which then gets passed to the server when WithCredentials is set to true
The first method isn't great practice - even though websocket communication is encrypted, query strings may be logged by servers etc.
The second method I have got working on my local machine, but it doesn't work once deployed because my client and server reside on different domains. So basically, I have an Angular site on one domain (e.g. client.com) and a WebAPI site that allows CORS on a completely different domain (e.g. server.com). In my browser, if I'm on client.com, I cannot set a cookie that gets sent to server.com on a request.
What is a good way to authenticate websockets when client and server sit on different domains?
The WebSocket Protocol specification doesn't specify any particular authentication method. You need to perform authentication during the handshake phase, and for that you can use any HTTP authentication mechanism like Basic, Digest, etc.
Further, you could look into JWT-based authentication. The Angular app can store the token in local storage and send it as a header during the handshake request to the server. If the token is invalid, the server can reject the WebSocket connection upgrade request and the Angular app can redirect the user to the login page.
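For illustration, a server-side sketch of that token check in Python using the PyJWT package (the header name, secret, and HS256 algorithm are all assumptions, and how you obtain the upgrade request's headers depends on your framework and on whether the client can set headers at all):

import jwt  # PyJWT

SECRET = "server-side-secret"  # assumption: HMAC-signed tokens

def handshake_is_authenticated(headers: dict) -> bool:
    # Expect "Authorization: Bearer <token>" on the upgrade request.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    try:
        # Raises if the signature is bad or the token is expired.
        jwt.decode(token, SECRET, algorithms=["HS256"])
        return True
    except jwt.InvalidTokenError:
        return False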
My goal is to have an Azure App Service, served from a custom domain over HTTPS.
This app receives HTTPS POST requests and should log the remote IP in the process.
I usually get the remote IP address, the IP of the calling client, like this:
HttpRequest request = ...
var IP = request.HttpContext.Connection.RemoteIpAddress;
To have the app served over HTTPS for a custom domain I enabled an Azure CDN endpoint.
Now the IP I record is for the CDN server not the calling client.
Is it possible to get the originating IP?
The requests in question are HTTPS POST so CDN caching shouldn't be an issue.
Does the Azure CDN add any headers that could contain such info?
Does adding an SSL cert directly to the App Service change anything?
By testing I've found that Azure CDN adds the X-Forwarded-For header with the client's real IP.
The only mention of this header I've found is in the Azure CDN documentation, which lists the header as reserved.
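The lookup itself is simple in any stack; here is a small, framework-agnostic Python sketch for illustration (only trust this header when it is set by a CDN or proxy you control, since clients can spoof it otherwise):

def client_ip(headers: dict, remote_addr: str) -> str:
    # X-Forwarded-For may hold a comma-separated chain of IPs;
    # the first entry is the original client, later ones are proxies.
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    # Fall back to the TCP peer (here: the CDN edge server).
    return remote_addr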
This article describes how Azure CDN works: when a user requests a file via the CDN and the edge servers in the POP do not have the file in their cache, the edge server requests the file from the origin. The user's request is not sent directly from the client to the origin, which is why, as you have seen, you record the IP of the edge server instead of the calling client.
Background:
I'm trying to use WSO2 ESB within a corporate setting to provide authenticated access to underlying REST API backend providers located either within the enterprise, or on the internet.
My goal is to selectively grant access, e.g. to REST API provider P1 only to REST client C1, and to REST API provider P2 only to REST client C2.
Using WSO2 ESB with the "<api>" element as described in http://wso2.com/library/articles/2012/10/implementing-restful-services-wso2-esb/ seems to require redefining every resource, which can be very tedious and error-prone for complex APIs (e.g. the VMware vCloud Director REST API https://www.vmware.com/support/vcd/doc/rest-api-doc-1.5-html/landing-user_operations.html)
Using the WSO2 ESB "<proxy>", as described in
https://docs.wso2.org/display/ESB481/Using+REST+with+a+Proxy+Service#UsingRESTwithaProxyService-RESTClientandRESTService ("REST Client and REST Service"), means that the URIs exposed to HTTP clients are modified with respect to the original backend URIs. Typical proxy URIs take the following form, with the services prefix and a specific port: http://<wso2_host>:8280/services/CustomerServiceProxy/customers/123
While modified exposed URIs are fine when the client can be controlled (typically an in-house custom REST API), they are problematic when the REST API is an industry standard and the client is an SDK or an off-the-shelf application outside the control of WSO2 users (e.g. the AWS S3 API, or the VMware vCloud Director REST API).
In addition, some custom clients/SDKs may verify server-side SSL certificates against a public key embedded into the SDK/client.
The usual solution to preserve the HTTP REST API as-is and add some authentication on top of it is to expose the API through an HTTP proxy (possibly authenticating clients through HTTP proxy authentication), i.e. clients send a CONNECT request prior to sending their original request. This preserves the full URIs and also the SSL certificates.
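For illustration, here is what that client-side flow looks like using Python's standard library (the proxy host, port, and credentials are hypothetical); http.client's set_tunnel issues the CONNECT before the real request:

import base64
import http.client

# Connect to the authenticating proxy, not directly to the backend.
conn = http.client.HTTPSConnection("proxy.example.com", 3128)

# Ask the proxy to tunnel to the real API host. The original URI and
# the backend's own SSL certificate are preserved end to end, because
# TLS is negotiated through the tunnel with the backend itself.
credentials = base64.b64encode(b"client1:secret").decode("ascii")
conn.set_tunnel(
    "backend.example.com", 443,
    headers={"Proxy-Authorization": "Basic " + credentials},
)

# This request travels inside the tunnel, invisible to the proxy.
conn.request("GET", "/api/org")
print(conn.getresponse().status)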
Question:
Is there a way to have WSO2 ESB play the role of an HTTP(S) proxy for mediating incoming REST API requests, preserving original URIs and server SSL certificates?
I'm thinking about a new "<http-proxy>" syntax that I haven't yet spotted, i.e. one that would listen on http://<wso2_host>:3128/ and respond to CONNECT requests. The mediation would then have the ability to accept the CONNECT or not, depending on the CONNECT request inputs (proxy authentication, requested host, and other HTTP transport headers). Once the CONNECT request is granted, it might even be possible to act on subsequent individual proxied requests.
The best specs describing the CONNECT behavior seem to be https://datatracker.ietf.org/doc/html/draft-luotonen-web-proxy-tunneling-01 (a 1999 draft that seems widely adopted) and the proposed standard https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-p2-semantics-22#page-29.
For HTTPS URIs, there might be limited ability within the WSO2 mediation: the HTTP request is SSL-encrypted, and only the domain can be known, and only if SNI (Server Name Indication) is specified in the request. At least this would make it possible to grant/deny certain host names to a set of clients depending on proxy authentication.
You may wish to try the <property name="preserveProcessedHeaders" value="true"/> in your <inSequence>. This property will pass all security headers through the proxy. I'm not sure about server certificates.
Here is an example of that property in use:
https://docs.wso2.org/display/ESB481/Sample+153%3A+Routing+Messages+that+Arrive+to+a+Proxy+Service+without+Processing+Security+Headers
I hope that helps. You may also want to look into the WSO2 API Manager, which lets you selectively grant access to APIs.
I have recently started my job as a web application backend developer. I am a bit stuck in understanding the lifecycle of an HTTP request.
What I understood is
Every HTTP request first contacts a DNS server, which resolves the request URL's domain to an IP address.
After fetching the web server's IP address, the request is forwarded to it (via a PUT request). A web server like Apache handles this request and forwards it to the application which has to handle it.
After this I am lost with
How is the response sent by the application back to the user who requested it, and is Apache involved in this?
Can I see the entire flow in my browser with some debugging tools?
Can someone point me to some links to understand this in depth?
I think you are a bit wrong in your understanding of it.
If you go to www.google.com (not using any forms, just wanting the site), this is what happens:
First the browser needs to translate www.google.com to an IP address if it does not already know it. If it knows it, nothing happens at this point. If it does not know it, it contacts a DNS server to resolve the name.
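(For example, that lookup step is what socket.gethostbyname does in Python; the printed address is just illustrative:

import socket

# Resolve the hostname to an IPv4 address via DNS (or the OS cache).
print(socket.gethostbyname("www.google.com"))  # e.g. "142.250.74.36"
)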
Then the browser will open a TCP connection to the IP address of www.google.com and send an HTTP GET request over it. In this example it will be:
GET / HTTP/1.1
Host: www.google.com
The server software will get this HTTP request. It will somehow generate an HTTP response and send that back through the TCP connection. How the server does this is server-software dependent. You can for example plug application code into Apache, or just make Apache return a file from the filesystem. PHP is an application called by such software, which then generates the response sent to the browser. When the response has been sent, in HTTP 1.0 the connection is closed; HTTP 1.1 can have persistent connections though.
When the browser gets the response, it typically renders it on screen. The HTTP request is now done. A click on "search" will send a new request to the server.
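To make the exchange concrete, here is a minimal Python sketch that performs the same request over a raw TCP socket (plain HTTP on port 80, purely for illustration; real browsers use TLS on port 443):

import socket

# Open a TCP connection to the web server.
sock = socket.create_connection(("www.google.com", 80))

# Send the same request line and Host header a browser would send.
request = "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n"
sock.sendall(request.encode("ascii"))

# Read the response: status line, headers, blank line, then the body.
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"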
GET, PUT, POST, DELETE and others are HTTP request methods. They have special meaning which you can see in the RFC.
Cookies are commonly used to identify the same user across multiple HTTP requests; this is called a session. Therefore these cookies are called session cookies.
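For example, the server sets a cookie with a Set-Cookie response header, and the browser attaches it to every later request with a Cookie header; a minimal Flask sketch (the route and cookie names are illustrative):

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/visit")
def visit():
    # Read the session cookie the browser sent back, if any.
    user_id = request.cookies.get("session_id")
    resp = make_response(f"you are {user_id or 'new here'}")
    if user_id is None:
        # Set-Cookie: the browser stores it and echoes it on
        # every subsequent request to this site.
        resp.set_cookie("session_id", "abc123")
    return resp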
You can debug the communication by using a network sniffer tool, for example Wireshark. Firefox has a third-party plugin called Tamper Data that can change requests before they are sent to the server.
The HTTP RFC is a good source of how it all works.
When the server receives the request from the browser, the browser is bound to some port on its host; the browser's IP address and port number are attached to the request that is sent to the server. The server then sends the response back to that IP address and port.
This is among the popular interview questions asked at various product-based companies.
HTTP is a request-response protocol. For example, a user agent initiates a request to a server, typically by opening a TCP/IP connection to a particular port on a host (port 80 by default). The request itself comprises:
a request line,
a set of request headers, and
an entity.
An HTTP server listening on that port waits for the client to send a request message. Upon receiving the request, the server sends a response that comprises:
a status line,
a set of response headers, and
an entity.
The entity in the request or response can be thought of simply as the payload, which may be binary data. The other items are readable ASCII characters. When the response has been completed, either the browser or the server may terminate the TCP/IP connection, or the browser can send another request.
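A short Python sketch that maps those pieces onto code, using the standard library's http.client (example.com is a placeholder host):

import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")          # request line + request headers
resp = conn.getresponse()

print(resp.status, resp.reason)   # the status line, e.g. "200 OK"
print(resp.getheaders())          # the response headers
body = resp.read()                # the entity (possibly binary data)

conn.close()                      # or keep the connection for reuse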
I found this resource very helpful in understanding the steps taken during the HTTP lifecycle. Quite interesting actually; I wasn't aware of all the intermediate steps, especially with the cache checking when determining the IP address of a URL.
https://medium.com/@maneesha.wijesinghe1/what-happens-when-you-type-an-url-in-the-browser-and-press-enter-bb0aa2449c1a