Private WAF on reserved instance IBM API Connect

We need to protect our reserved instance of IBM API Connect in the cloud with our own company's WAF. We don't know whether this is possible (and, if so, what the steps are), or whether it is only possible with a WAF from IBM's own cloud.
Thanks in advance.

For this answer, I'm going to assume you're asking primarily about the DataPower API Gateway.
You can either deploy your own gateway in an environment of your choosing (i.e. you're managing it) or leverage the one that IBM provides to you by default.
If you deploy your own, then you control the networking and adding your own WAF is relatively straightforward.
If you use an IBM-managed gateway, then a little creativity is required. You would likely need to set up a Mutual TLS contract between your WAF and the Gateway. You'd terminate the incoming TLS connection at the WAF (e.g. Cloudflare) and then re-encrypt the traffic from the WAF to the Gateway using the client certificate exchange. You'd potentially need to apply a Mutual TLS-enforcing profile to each deployed API on the Gateway. In this scenario, no client can call an API on your gateway without the proper TLS client key/certificate in hand.
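For illustration, the client-certificate leg of that setup looks roughly like the Python sketch below; the gateway URL, API path, and certificate/key file names are placeholders, not values from your environment.

    # Minimal sketch of calling an API on the gateway over mutual TLS.
    # All host names and file names below are illustrative placeholders.
    import requests

    GATEWAY_URL = "https://gateway.example.com/org/catalog/myapi/resource"

    response = requests.get(
        GATEWAY_URL,
        # Client certificate and private key presented to the gateway; the
        # gateway's TLS client profile must trust the issuing CA.
        cert=("waf-client.crt", "waf-client.key"),
        # CA bundle used to validate the gateway's own server certificate.
        verify="gateway-ca.pem",
        timeout=10,
    )
    response.raise_for_status()
    print(response.status_code)

Whether the caller is your WAF re-encrypting traffic or a test client, the point is the same: without the correct client key/certificate pair, the TLS handshake with the gateway fails before any API logic runs.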
You may want to open a support ticket for further/deeper assistance on this topic.

Related

IBM API Connect health check

I'd like to know whether API Connect has a built-in way to check the health of an API in production (like Spring Boot's actuator/health).
If not, what is the recommended way to implement a health check for each of the APIs we are about to develop in API Connect?
Regards,
Martand
API Connect v2018 portals do not have a built-in health check; it was removed in v5. There is an RFE requesting this functionality.
For the gateway component, the best way is to use a tcp_half_open health check on the IP/port that your gateway is listening on. Note that it has to be tcp_half_open, otherwise your appliance will be spammed with TLS handshake errors.
This will confirm that the gateway is up and running and serving requests. You should also check that the management port is responding, as your gateway might not be synchronizing with the APIM, in which case it could be serving up an old API.
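A true half-open probe (SYN, SYN-ACK, then RST) is normally performed by the load balancer itself; from ordinary application code, the closest approximation is to open the TCP connection and close it without sending any bytes, so that no TLS handshake is started against the appliance. A hedged sketch, with host and port numbers as placeholders:

    # Approximate TCP-level health probe: connect, send nothing, close.
    import socket

    def tcp_probe(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass  # no bytes sent, so no TLS handshake is attempted
            return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("gateway port:", tcp_probe("gw.example.com", 443))
        print("management port:", tcp_probe("gw.example.com", 5550))  # placeholder port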
Health checks on an individual API are a little more difficult, and would need to be added as a separate operation in your spec.
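If you do add such an operation, the external probe itself can stay very simple; in the sketch below, the health-operation URLs and the expectation of a 200 response are assumptions for illustration, not API Connect built-ins.

    # Sketch of probing a dedicated health operation added to each API's spec.
    import requests

    API_HEALTH_URLS = [
        "https://gw.example.com/org/catalog/orders/health",
        "https://gw.example.com/org/catalog/customers/health",
    ]

    for url in API_HEALTH_URLS:
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        print(f"{url}: {'UP' if ok else 'DOWN'}")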

Is there built-in support for enabling SSL on Azure Container Instances?

Is there built-in support for enabling SSL on Azure Container Instances? If not, can we hook up to SSL providers like Lets Encrypt?
There is nothing built-in today. You need to load the certs into the container and terminate SSL there. Soon, we will enable support for ACI containers to join an Azure virtual network, at which point you could front your containers with Azure Application Gateway and terminate SSL there.
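If you go the load-the-certs-into-the-container route, the termination itself can be as small as the following standard-library sketch; cert.pem and key.pem are placeholders for whatever you bake into the image or mount at provisioning time.

    # Minimal HTTPS server terminating TLS inside the container.
    import http.server
    import ssl

    server = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
    server.socket = context.wrap_socket(server.socket, server_side=True)

    server.serve_forever()

The hard part is not this code but rotating the certificate, which is what pushes most people toward terminating TLS in front of the container instead.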
As said above, there is no built-in SSL support today when using ACI. I'm using Azure Application Gateway to publish my container endpoint using the HTTP-to-HTTPS bridge. This way, App Gateway needs a regular HTTPS cert (and you can use whichever model works best for you, as long as you can introduce a .PFX file during provisioning or later during configuration) and it will then use HTTP to talk to your (internally facing) ACI-based container. This approach becomes more secure if you bind your ACI-based container to a VNET and restrict traffic from elsewhere.
To use SSL within the ACI container, you'd need to introduce your certificate while provisioning the container, and then somehow automate certificate expiration and renewal. As this is not supported in a reasonable way, I chose to use the App Gateway to solve this. You could also use API Management, but that is obviously slightly more expensive and introduces a lot more moving parts.
I blogged about this configuration here and the repo with provisioning scripts is here.
You can add SSL support at the API gateway and simply configure the underlying API over HTTP.
You will need the secret key to call the API method through the gateway.
You can still access the underlying API hosted on the Azure Container Instance directly; that endpoint does not require a JWT token, as this is a demo API.

OpenShift SSL edge termination risk

I have been reading the OpenShift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "edge termination" route, meaning the SSL connection is terminated when external requests reach the router, with content then being transmitted from the router to the internal service over plain HTTP.
Is this secure? I mean, part of the transmission is done over HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but those same people could also access your application container. Any administrator of the system could access your application container even if you were using a passthrough secure connection terminated at your application. So it is the same situation as most managed hosting, where you rely on the administrators of the service to do the right thing.

How to secure communication in a server-server app?

I have a microservices-based web app. The microservices communicate with each other via exposed REST APIs. I want an easy, yet secure, solution to secure communication between my microservices. I've already used JWT to secure my user-to-service communication, but I can't figure out the best way to secure server-to-server communication.
Update:
I want an easy way to authenticate APIs. Is it a good approach to hardcode a key and secret, or to put them in configuration files, and then use them to authenticate to another endpoint?
I've heard about the OAuth2 protocol, but I'm afraid it's overkill for my needs. So what would be an easy and secure way to authenticate APIs?
You should use HTTPS in order to make communication between servers secure. As far as point-to-point security (transport-layer security) is concerned, this is the way to go.
But keep in mind that this still doesn't mean that you'll have message-level security (end-to-end security). Intermediaries (i.e. service agents or other services and applications) along the message path will be able to see what is in the message content while processing it.
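A common lightweight pattern that stays within these constraints is HTTPS with server-certificate verification plus a shared secret loaded from configuration rather than hard-coded. The header name, environment variable, and URL in this sketch are illustrative assumptions, not a prescribed API:

    # Caller side: HTTPS with certificate verification plus an API key from config.
    import hmac
    import os

    import requests

    API_KEY = os.environ["ORDERS_SERVICE_API_KEY"]  # injected by deployment tooling, not hard-coded

    resp = requests.get(
        "https://orders.internal.example.com/api/orders/42",
        headers={"X-Api-Key": API_KEY},
        verify=True,  # validate the server certificate (the default, shown for emphasis)
        timeout=5,
    )
    resp.raise_for_status()

    # Callee side (framework-agnostic): compare keys in constant time.
    def is_authorized(presented_key: str) -> bool:
        return hmac.compare_digest(presented_key, os.environ["ORDERS_SERVICE_API_KEY"])

This also speaks to the update in the question: hard-coding the key is the part to avoid; keeping it in configuration or a secrets store, and rotating it, is usually enough when OAuth2 feels like overkill.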
REST relies on the uniform contract provided by HTTP, so you cannot use the advanced features of WS-Security as you would have with SOAP. The security features of SOAP provide a wider spectrum of options, so if security is key in your case, you should definitely check SOAP web services out.
Also, take a look at this question. It's relevant to yours and I'm sure you'll find it helpful.
Hope this helps!

Can WSO2 ESB play the role of an HTTP(S) proxy for mediating incoming REST API requests?

Background:
I'm trying to use WSO2 ESB within a corporate setting to provide authenticated access to underlying REST API backend providers located either within the enterprise, or on the internet.
My goal is to selectively grant access, e.g. to REST API provider P1 only for REST client C1, and to REST API provider P2 only for REST client C2.
Using the WSO2 ESB "<api>" element as described in http://wso2.com/library/articles/2012/10/implementing-restful-services-wso2-esb/ seems to require redefining every resource, which can be very tedious and error-prone for complex APIs (e.g. the VMware vCloud Director REST API, https://www.vmware.com/support/vcd/doc/rest-api-doc-1.5-html/landing-user_operations.html).
Using the WSO2 ESB "<proxy>" element, as described in
https://docs.wso2.org/display/ESB481/Using+REST+with+a+Proxy+Service#UsingRESTwithaProxyService-RESTClientandRESTService ("REST Client and REST Service"), means that the URIs exposed to HTTP clients are modified with respect to the original backend URIs. Typical proxy URIs take the following form, with the services prefix and a specific port: http://<wso2_host>:8280/services/CustomerServiceProxy/customers/123
While having modified exposed URIs is fine when the client can be controlled (typically an in-house custom REST API), it is problematic when the REST API is an industry standard and the client is an SDK or an off-the-shelf application outside the control of WSO2 users (e.g. the AWS S3 API, or the VMware vCloud Director REST API).
In addition, some custom clients/SDKs may verify server-side SSL certificates against a public key embedded into the SDK/client.
The usual solution to preserve the HTTP REST API as-is and add some authentication on top of it is to expose the API through an HTTP proxy (possibly authenticating clients through HTTP proxy authentication), i.e. clients send a CONNECT request prior to sending their original request. This preserves the full URIs and also the SSL certificates.
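To make the CONNECT behaviour concrete: the client asks the proxy to open a tunnel, then negotiates TLS end-to-end with the original backend through that tunnel, so it still sees and verifies the backend's own certificate and uses its unmodified URIs. The host names and proxy credentials below are placeholders.

    # Sketch of an HTTPS request tunnelled through a forward proxy via CONNECT.
    import base64
    import http.client

    PROXY_HOST, PROXY_PORT = "proxy.example.com", 3128
    BACKEND_HOST = "vcloud.example.com"

    credentials = base64.b64encode(b"client1:secret").decode()  # proxy authentication (placeholder)

    conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT, timeout=10)
    conn.set_tunnel(BACKEND_HOST, 443, headers={"Proxy-Authorization": "Basic " + credentials})

    # The request line and Host header carry the original, unmodified backend URI,
    # and the TLS handshake (hence certificate validation) is with BACKEND_HOST.
    conn.request("GET", "/api/org")
    resp = conn.getresponse()
    print(resp.status, resp.reason)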
Question:
Is there a way to have WSO2 ESB play the role of an HTTP(S) proxy for mediating incoming REST API requests, preserving original URIs and server SSL certificates ?
I'm thinking of a new "<http-proxy>" syntax that I haven't yet spotted. It would listen on http://<wso2_host>:3128/ and respond to CONNECT requests. The mediation would then be able to accept or reject the CONNECT depending on the CONNECT request inputs (proxy authentication, requested host, and other HTTP transport headers). Once the CONNECT request is granted, it might even be possible to act on subsequent individual proxied requests.
The best specs describing the CONNECT behavior seem to be https://datatracker.ietf.org/doc/html/draft-luotonen-web-proxy-tunneling-01 (a 1999 draft that appears to have been widely adopted) and the proposed standard https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-p2-semantics-22#page-29.
For HTTPS URIs, there might be limited mediation ability within WSO2: the HTTP request is SSL-encrypted, and only the target domain can be known, and only if SNI (Server Name Indication) is specified in the request. At least this would make it possible to grant or deny certain host names to a set of clients depending on proxy authentication.
You may wish to try the <property name="preserveProcessedHeaders" value="true"/> in your <inSequence>. This property will pass all security headers through the proxy. I'm not sure about server certificates.
Here is an example of that property in use:
https://docs.wso2.org/display/ESB481/Sample+153%3A+Routing+Messages+that+Arrive+to+a+Proxy+Service+without+Processing+Security+Headers
I hope that helps. You may also want to look into the WSO2 API Manager, which lets you selectively grant access to APIs.