I do not know much about computer networks, but I've been dabbling with Flutter and AWS Lambda.
I have Flutter (Dart) code that uses the http package to make an HTTP request like the following:
import 'package:http/http.dart' as http;
final response = await http.get(Uri.parse('https://<address to my lambda function via api gateway>'));
final body = response.body;
Looking at the http package on pub.dev, it says the package is "A composable, Future-based library for making HTTP requests" and does not say anything about TLS (SSL). However, the URL I provided in the above code is an https URL generated by AWS API Gateway. So my question is: in the above code, is it using HTTPS or HTTP? If it is using HTTP, it is not secure, hence I need to add another layer of security to prevent attacks such as man-in-the-middle. If it is HTTPS, does that mean the data that gets sent is encrypted via TLS, hence I do not need any sort of asymmetric encryption between the client and the server?
Related
By watching Computerphile YouTube videos, I know that today's browsers perform a TLS handshake with every HTTPS website they display for me on my PC. For this question, let us assume the request is pointed at my server, running an Express API, protected by a valid SSL certificate. Will a TLS handshake be performed even when I send a POST request with the help of:
the Requests module in Python: a simple POST request to my server from a Python script;
Node.js (Express.js): a simple POST request (containing username and password) from an HTML webpage to my server;
a mobile app programmed in MIT App Inventor 2, which gives me the option of making a POST request
... and not a browser?
I am asking this question in regard to an app I am programming, in which the user has to identify himself with a key and a password, and I want that information (login and everything else) to be conveyed securely to my VPS.
HTTPS means HTTP inside TLS. This means accessing an https:// URL always requires TLS and thus a TLS handshake. It does not matter if the client is a browser, Python code, NodeJS, or anything else. It does not matter whether it is a GET or POST request either.
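For example, with the Python Requests module mentioned above (a minimal sketch with a placeholder URL), the handshake and certificate verification happen automatically:

import requests

# The https:// scheme makes the client negotiate TLS before any HTTP bytes
# are sent; Requests also verifies the server certificate by default.
response = requests.post(
    "https://example.com/login",  # placeholder URL
    json={"username": "alice", "password": "secret"},
    timeout=10,
)
print(response.status_code)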
TL;DR: How can I authenticate against NGINX if auth headers are not supported on the client side?
I am building an IoT-related project using NGINX as a reverse proxy for the server-side services and 1NCE as the LTE carrier for the mobile devices. All traffic is authenticated via HTTP Basic Auth over SSL-encrypted connections, and handling "normal" requests works as desired.
As the mobile service might be interrupted and the Internet connection might be lost, I want to send SMS for critical status reports and alarm notifications. 1NCE supports mobile-originated SMS (MO SMS), which are handled by 1NCE's internal infrastructure and forwarded to a configurable API endpoint. So MO SMS are not delivered to a specific phone, but forwarded via an API request that I need to process on my side.
According to 1NCE's SMS documentation and in consultation with their customer support, SMS forwarding does not support any authentication headers; it can only be configured by specifying an HTTPS URL (including the desired API endpoint) and a port. Each incoming SMS is then wrapped in a request to the given URL and sent in the request body.
I want to add authentication to the SMS forwarding endpoint (receiving forwarded SMS on my side) as well and am currently wondering how I could achieve this. NGINX supports authentication subrequests, which could be used to have an internal service evaluate incoming requests. So my first idea was to add some credentials to each SMS (as I am also responsible for the SMS-sending part of the code on the mobile devices, I could implement whatever is needed) and check those credentials in an internal service called by NGINX's subrequest. However, this does not seem to be doable: according to this SO question, GET requests are used for the internal subrequests, so the body of the incoming POST request is discarded and the credentials in the forwarded SMS would not be available to my internal auth service. Extending NGINX's auth capabilities with a custom Lua-based plugin was my second idea, but this not only seems infeasible, it is also not supported by the NGINX instance I am using (Lua modules are disabled, and switching to OpenResty seems to be a big undertaking).
My last idea would be to forward all incoming requests to a Python web service (written in Flask; the other services I am using are also written in Flask) and parse the forwarded SMS in Python. Based on the result of the credential evaluation, I could return a 401/Unauthorized status code if the credentials provided in the SMS (which is part of the request body) are invalid and process the request otherwise, as in the sketch below. However, I think this approach is quite ugly, as all incoming requests need to be passed on to Flask and invalid requests are not rejected at the level of my reverse proxy.
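A minimal sketch of that fallback (the JSON layout and the "credentials"/"message" field names are hypothetical; whatever format the devices actually embed in the SMS body would need to be parsed here instead):

from flask import Flask, abort, request
import hmac

app = Flask(__name__)

# Shared secret also baked into the SMS-sending code on the devices (hypothetical).
EXPECTED_SECRET = "change-me"

@app.route("/sms", methods=["POST"])
def forwarded_sms():
    payload = request.get_json(silent=True) or {}
    supplied = payload.get("credentials", "")
    # Constant-time comparison to avoid leaking the secret via timing.
    if not hmac.compare_digest(supplied, EXPECTED_SECRET):
        abort(401)  # reject forwarded SMS with missing or invalid credentials
    handle_sms(payload.get("message", ""))
    return "", 204

def handle_sms(message):
    ...  # pass the SMS on to the normal alarm/notification pipeline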
Do you have any ideas on how to approach this issue? What would be a sensible approach with regard to best practices? Can I extend NGINX in a way that solves this, or should I drop NGINX entirely in favor of a "better" proxy?
I've deployed a gRPC server to Google Cloud Run (fully managed) with the "Require authentication" option enabled.
I'm trying to authenticate the calls from my gRPC client with a Google service account; however, I always get the exception below.
Exception in thread "main" io.grpc.StatusRuntimeException: UNAUTHENTICATED: HTTP status code 401
Below is how I'm creating the gRPC channel and attaching the service account.
public GrpcClient(Channel channel) throws IOException {
    Credentials credentials = GoogleCredentials.getApplicationDefault();
    blockingStub = CalculatorServiceGrpc
            .newBlockingStub(channel)
            .withCallCredentials(MoreCallCredentials.from(credentials));
}
Note: the env var GOOGLE_APPLICATION_CREDENTIALS is set to the path of the service account key, and the service account has the Cloud Run Invoker role.
Is there anything that I'm missing?
When calling a Cloud Run server from a generic HTTP client, setting GOOGLE_APPLICATION_CREDENTIALS doesn't have an effect. (That only works when you call Google’s APIs with a Google client library.)
Even when deployed to Cloud Run, gRPC is just HTTP/2, so authenticating to a Cloud Run service works as documented on the Service-to-Service Authentication page. In a nutshell, this involves:
getting a JWT (identity token) from the metadata service endpoint inside the container
setting it as a header on the request to the Cloud Run app, as Authorization: Bearer [[ID_TOKEN]].
In gRPC, headers are called "metadata", so you should find the equivalent gRPC Java method to set that. (It is probably a per-RPC option.)
Read about a Go example here; it basically explains that gRPC servers running on Cloud Run still authenticate the same way. In this case, also make sure to tell Java that:
you need to connect to domain:443 (not :80)
gRPC Java needs to use the machine's root CA certificates to verify the validity of the TLS certificate presented by Cloud Run (as opposed to skipping TLS verification)
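To illustrate those steps outside of Java (a sketch in Python; it assumes the client itself runs on GCP where the metadata server is reachable, and the service URL is a placeholder):

import grpc
import requests

# Placeholder Cloud Run URL; the audience must match the service being called.
AUDIENCE = "https://myservice-hash-uc.a.run.app"
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity?audience=" + AUDIENCE
)

# 1. Fetch the identity token (JWT) from the metadata server.
token = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"}).text

# 2. Attach it to every RPC as an "authorization: Bearer ..." header,
#    over TLS on port 443 using the system root CAs.
call_creds = grpc.access_token_call_credentials(token)
channel_creds = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(),  # verifies Cloud Run's TLS certificate
    call_creds,
)
channel = grpc.secure_channel("myservice-hash-uc.a.run.app:443", channel_creds)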
After some more research I was able to authenticate the requests using IdTokenCredentials. See the result below.
public GrpcClient(Channel channel) throws IOException {
    ServiceAccountCredentials saCreds = ServiceAccountCredentials
            .fromStream(new FileInputStream("path/to/sa"));
    IdTokenCredentials tokenCredential = IdTokenCredentials.newBuilder()
            .setIdTokenProvider(saCreds)
            .setTargetAudience("https://{projectId}-{hash}-uc.a.run.app")
            .build();
    blockingStub = CalculatorServiceGrpc
            .newBlockingStub(channel)
            .withCallCredentials(MoreCallCredentials.from(tokenCredential));
}
I came across this post looking for a Python-related answer.
So for those who want to solve this problem using a Python client:
import grpc
import google.auth.transport.requests
import google.oauth2.id_token

TARGET_CHANNEL = "your-app-name.run.app:443"
TARGET_AUDIENCE = "https://your-app-name.run.app"

# Fetch an identity token whose audience is the Cloud Run service URL.
token = google.oauth2.id_token.fetch_id_token(
    google.auth.transport.requests.Request(), TARGET_AUDIENCE)

# Send the token with every call, over a TLS channel.
call_cred = grpc.access_token_call_credentials(token)
channel_cred = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(), call_cred)
channel = grpc.secure_channel(TARGET_CHANNEL, credentials=channel_cred)
I was also running into UNAUTHENTICATED: HTTP status code 401. I had a GCP load balancer set up with HTTPS and a backend routing to a Cloud Run service. The backend service has to use HTTP/2 for gRPC, as in the above answer. But there was one more spot that needed HTTP/2: the Cloud Run service itself needs to be configured to accept HTTP/2; by default it is HTTP/1. You can use
gcloud run services update <SERVICE> --use-http2
to set the Cloud Run service to use HTTP/2. This allows the load balancer's HTTP/2 backend to communicate with the Cloud Run service over HTTP/2, which gRPC requires.
https://cloud.google.com/run/docs/configuring/http2
Background:
I'm trying to use WSO2 ESB within a corporate setting to provide authenticated access to underlying REST API backend providers located either within the enterprise or on the Internet.
My goal is to selectively grant access, e.g. to REST API provider P1 only to REST client C1, and to REST API provider P2 only to REST client C2.
Using the WSO2 ESB "<api>" element as described in http://wso2.com/library/articles/2012/10/implementing-restful-services-wso2-esb/ seems to require redefining every resource, which can be very large and error-prone for complex APIs (e.g. the VMware vCloud Director REST API, https://www.vmware.com/support/vcd/doc/rest-api-doc-1.5-html/landing-user_operations.html).
Using the WSO2 ESB "<proxy>", as described in https://docs.wso2.org/display/ESB481/Using+REST+with+a+Proxy+Service#UsingRESTwithaProxyService-RESTClientandRESTService ("REST Client and REST Service"), means that the URIs exposed to HTTP clients are modified with respect to the original backend URIs. Typical proxy URIs take the following form, with a services prefix and a specific port: http://<wso2_host>:8280/services/CustomerServiceProxy/customers/123
While modified exposed URIs are fine when the client can be controlled (typically an in-house custom REST API), they are problematic when the REST API is an industry standard and the client is an SDK or an off-the-shelf application outside the control of WSO2 users (e.g. the AWS S3 API, or the VMware vCloud Director REST API).
In addition, some custom clients/SDKs may verify server-side SSL certificates against a public key embedded in the SDK/client.
The usual solution that preserves the HTTP REST API as-is while adding some authentication on top of it is to expose the API through an HTTP proxy (possibly authenticating clients through HTTP proxy authentication), i.e. clients send a CONNECT request prior to sending their original request. This preserves the full URIs and also the SSL certificates.
Question:
Is there a way to have WSO2 ESB play the role of an HTTP(S) proxy for mediating incoming REST API requests, preserving the original URIs and server SSL certificates?
I'm thinking about a new "<http-proxy>" syntax that I haven't yet spotted. I.e., it would listen on http://<wso2_host>:3128/ and respond to CONNECT requests. The mediation would then have the ability to accept the CONNECT or not, depending on the CONNECT request inputs (proxy authentication, requested host, and other HTTP transport headers). Once the CONNECT request is granted, it might even be possible to act on subsequent individual proxied requests.
The best specs describing the CONNECT behavior seem to be https://datatracker.ietf.org/doc/html/draft-luotonen-web-proxy-tunneling-01 (a 1999 draft that appears to have been widely adopted) and the proposed standard https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-p2-semantics-22#page-29.
For HTTPS URIs, there might be limited mediation ability within WSO2: the HTTP request is SSL-encrypted, and only the domain can be known, and only if SNI (Server Name Indication) is specified in the request. At least this would make it possible to grant or deny some host names to a set of clients depending on proxy authentication.
You may wish to try the <property name="preserveProcessedHeaders" value="true"/> in your <inSequence>. This property will pass all security headers through the proxy. I'm not sure about server certificates.
Here is an example of that property in use:
https://docs.wso2.org/display/ESB481/Sample+153%3A+Routing+Messages+that+Arrive+to+a+Proxy+Service+without+Processing+Security+Headers
I hope that helps. You may also want to look into the WSO2 API Manager, which lets you selectively grant access to APIs.
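For illustration, a minimal sketch of where that property sits in a proxy definition (hypothetical proxy name and backend endpoint, following the Synapse configuration style of the linked sample):

<proxy name="CustomerServiceProxy" transports="https">
   <target>
      <inSequence>
         <!-- forward security headers instead of consuming them -->
         <property name="preserveProcessedHeaders" value="true"/>
         <send>
            <endpoint>
               <address uri="https://backend.example.com/CustomerService"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
</proxy>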
I'm trying to build a complete web caching proxy using Boost Asio and libcurl. I've already built the server and everything works fine: it receives HTTP requests (GET, POST, uploads using POST, ...) correctly, and it sends the responses back to the browser correctly.
Now I want to extend it so it can handle HTTPS requests. I read about this on the libcurl web site http://curl.haxx.se/libcurl/c/libcurl-tutorial.html (proxy section); I understood how it works and I have a clear idea of how it should be done. But I didn't find good documentation about how proxies handle HTTPS requests:
What are the possible messages (information, format, length, ...) exchanged by the source application and the proxy?
Things to consider.
...
Thanks in advance :-) .
You will receive the CONNECT command in plain text, and respond to it ditto; the communications after that will be encrypted. If your proxy is to be an SSL endpoint, which is highly problematic given that HTTPS requires a certificate matching the target host address, you will then need to enter SSL mode on both connections. More probably you should just start copying bytes in both directions without attempting to process the contents.
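To make that last suggestion concrete, here is a minimal sketch of the byte-copying approach (illustrative only: Python standard library rather than Boost Asio, blocking sockets, one thread per direction, no error handling):

import socket
import threading

def pipe(src, dst):
    # Copy raw bytes until one side closes; the proxy never inspects them.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    request = b""
    while b"\r\n\r\n" not in request:
        chunk = client.recv(4096)
        if not chunk:
            return
        request += chunk
    # First line is e.g. b"CONNECT example.com:443 HTTP/1.1"
    method, target, _version = request.split(b"\r\n")[0].split()
    if method != b"CONNECT":
        client.sendall(b"HTTP/1.1 405 Method Not Allowed\r\n\r\n")
        return
    host, _, port = target.partition(b":")
    upstream = socket.create_connection((host.decode(), int(port or b"443")))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    # From here on, the TLS handshake and traffic pass through untouched.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 3128))
server.listen()
while True:
    conn, _addr = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()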