API Connect on IBM Cloud: error when trying to expose a local API through API Connect on Cloud

I'm not able to expose a local REST API through API Connect on Cloud.
I created a REST API on my laptop using IIB, and I want to expose it through API Connect on IBM Cloud. Since the "Push Rest API" option in the IIB web admin is not working, I used the swagger.json file to load the API details manually into APIC on Cloud. I followed these steps:
In IBM Cloud, created resources for the API Connect and Secure Gateway Cloud Foundry services
Created a Secure Gateway destination and have the SG client running on my laptop
Created a simple REST API using IIB V10 and deployed it to my local Integration Node
Tried to push the REST API using the IIB web admin, giving the host as api.us-south.apiconnect.appdomain.cloud and my IBM Cloud account username/password, but it failed saying it was unable to connect:
Unable to connect to IBM API Connect at host 'api.us-south.apiconnect.appdomain.cloud' port '443'
Then I tried to create an API manually using the swagger.json file available in the IIB REST API project, via the "from file or URL" option in APIC on IBM Cloud.
I gave my laptop's IP as the "Host" value in the APIC designer
In the "Assembly" section, I included a "Proxy" policy and set its Target URL to cap-sg-prd-2.securegateway.appdomain.cloud:17041
When I try to test the above, I get the following error:
<httpMessage>Internal Server Error</httpMessage>
<moreInformation>Backside URL invalid</moreInformation>
Can you please help me resolve this?

You're missing one or both of the following:
1) The "Target URL" must be a valid URL. Looks like you just entered a hostname, so likely you need https://cap-sg-prd-2.securegateway.appdomain.cloud:17041 Doing that and republishing the API should resolve the "Backside URL invalid" error.
Once you do that, you may find that you still can't reach the backend due to either a timeout or connection refused error.
If so:
2) Did you allow access to the Secure Gateway destination via the client on your local machine? You have to explicitly set an ACL on the client to allow traffic to the host/port on your network; a rough sketch is shown below.
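A minimal sketch of both checks, assuming the standard Secure Gateway client terminal commands (exact syntax can vary by client version) and a placeholder host/port for the IIB HTTP listener:
# In the Secure Gateway client terminal on your laptop (192.168.1.25:7800 is a placeholder
# for the host/port your IIB REST API actually listens on):
acl allow 192.168.1.25:7800
# Then, from any machine, check that the cloud endpoint answers at all:
curl -v https://cap-sg-prd-2.securegateway.appdomain.cloud:17041/
If the ACL is missing, the curl call (and the API test in APIC) will typically time out or be refused rather than return your API's response.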

Related

Unable to create a connection (JWT/OAuth) with the Google BigQuery connector

I am facing connection issues with the Google BigQuery connector version 1.0.0, newly launched by MuleSoft in March 2022.
I've created a service account as well as an OAuth web application in Google Cloud Platform (GCP) and used values from the JSON file generated by GCP.
Test Connection fails, but the application deploys successfully; when the flow reaches any BigQuery connector operation, an error is raised (please see the attached images).
I failed to connect using the "JWT Connection" as well as the "OAuth2 Connection".
Can someone explain how to connect using the Google BigQuery connector?
JWT Connection Image
OAuth Connection Image

Using one user database to authenticate users across 2 different clouds (AWS/GCP), logging in only once

So this is the case:
What we have:
We have a service (web app): a Kubernetes API + Vue.js frontend hosted in AWS.
We also have some services (web apps): Kubernetes APIs + React.js frontends hosted in GCP.
We are able to use subdomains of the same domain for each (like a.domain.com and b.domain.com).
What we need:
We need to let the user think these two servers are only one.
The idea is that the user uses the same username/password for both servers, but the most important part is that they only need to log in to one of them to be logged in to the other automatically.
We have the parent domain in Google, and we would prefer a solution implemented in GCP, with the AWS server just consuming this auth method/config/etc.
I'd love to hear some ideas.
If you have multiple replicas of the web app frontend distributed across multi-cloud environments, you can use dynamic DNS and load-balancing services such as those Cloudflare provides to distribute access to your app frontend, as explained here.
Then you need to connect the multi-cloud VPCs and make your backend accessible to your multiple frontends.
You can use managed VPN services from both cloud providers to get an encrypted channel between the VPCs in both cloud environments and to transfer data over private IP addresses.
Google offers Cloud VPN as a managed VPN service for encrypted IPsec tunnels, which can be used on the Google end; AWS offers AWS Site-to-Site VPN, and Azure offers Azure VPN Gateway. You can connect your VPCs between the environments using one or multiple VPN tunnels.
With that you can operate your web app across multiple clouds.
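As a rough illustration of the Google end only, here is a sketch of creating an HA VPN gateway, a Cloud Router, and one tunnel toward an AWS peer with gcloud. Every name, the region, the ASN, the peer IP, and the shared secret are placeholders, and the AWS Site-to-Site VPN side plus the BGP sessions on the Cloud Router still have to be configured separately:
# Placeholder values throughout; adjust network, region, ASN, peer IP, and secret.
gcloud compute vpn-gateways create gcp-aws-ha-gw \
    --network=my-vpc --region=us-central1
gcloud compute routers create gcp-aws-router \
    --network=my-vpc --region=us-central1 --asn=65001
gcloud compute external-vpn-gateways create aws-peer-gw \
    --interfaces=0=203.0.113.10
gcloud compute vpn-tunnels create gcp-aws-tunnel-0 \
    --region=us-central1 --vpn-gateway=gcp-aws-ha-gw --interface=0 \
    --peer-external-gateway=aws-peer-gw --peer-external-gateway-interface=0 \
    --router=gcp-aws-router --ike-version=2 --shared-secret=REPLACE_ME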

Getting 403 Forbidden on Google Cloud Run with API key

I have set up a very simple Node application with Express on Google Cloud Run.
It works great, but when I answer No to "Allow unauthenticated invocations to [service] (y/N)?", I get a 403 Forbidden even though I created an API key and I'm adding key=[My API key] to the query string when making calls, as described in the documentation. My URL ends up looking like
https://service-wodkdj77sba-ew.a.run.app?key=[My API key].
I've tried with restricted (for Google Cloud Run) and unrestricted API keys.
Is there anything I'm missing?
Cloud Run, like many products in GCP, doesn't support API key authorization. As detailed in the link you provided, only a subset of services use API keys.
It's also mentioned:
API keys do not identify the user or the application making the API request, so you can't restrict access to specific users or service accounts.
Whereas the Cloud Run authentication section specifies this here:
All Cloud Run services are deployed privately by default, which means that they can't be accessed without providing authentication credentials in the request.
In short, the Cloud Run expectation and the API key capabilities aren't compatible.
However, if you want to access your private Cloud Run service with an API key, a workaround exists. You can deploy an Extensible Service Proxy (ESP) on another Cloud Run service. There, authenticate the API key and, if it's valid, call the private Cloud Run service with the service account of your ESP (which must have the roles/run.invoker role).
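For completeness, the usual way to call a private Cloud Run service directly is with an identity token rather than an API key. Assuming the calling account (user or service account) has roles/run.invoker on the service, a quick test from a machine where gcloud is logged in looks roughly like this, using the URL from the question:
# The calling identity needs roles/run.invoker on the Cloud Run service.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    https://service-wodkdj77sba-ew.a.run.app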

Spinnaker GKE oauth - User's info does not have all required fields

I'm trying to get my Spinnaker interface authenticated using this tutorial:
https://www.spinnaker.io/setup/quickstart/halyard-gke-public/
Prior to the tutorial, Spinnaker was confirmed to be up and running on http://localhost:9000. I have tried the following on 1.3.1, 1.4.1, and 1.4.2.
After editing/applying/enabling the Google security settings, I attempt the login and am successfully challenged with a Google login screen. Upon completing the two-factor auth, I am redirected as expected to http://localhost:8084/login, but I receive the following error:
{
"error": "Unauthorized",
"message": "Authentication Failed: User's info does not have all required fields.",
"status": 401,
"timestamp": 1506985726074
}
Here is a log of my setup steps:
kross@halyard:~$ hal config security authn oauth2 edit --provider google \
> --client-id $CLIENT_ID \
> --client-secret $CLIENT_SECRET \
> --user-info-requirements hd=$DOMAIN
+ Get current deployment
Success
+ Get authentication settings
Success
+ Edit oauth2 authentication settings
Success
Problems in default.security:
- WARNING Your UI or API domain does not have override base URLs
set even though your Spinnaker deployment is a Distributed deployment on a
remote cloud provider. As a result, you will need to open SSH tunnels against
that deployment to access Spinnaker.
? We recommend that you instead configure an authentication
mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker
securely, and then register the intended Domain and IP addresses that your
publicly facing services will be using.
+ Successfully edited oauth2 method.
kross@halyard:~$ hal config security authn oauth2 enable
+ Get current deployment
Success
+ Edit oauth2 authentication settings
Success
Problems in default.security:
- WARNING Your UI or API domain does not have override base URLs
set even though your Spinnaker deployment is a Distributed deployment on a
remote cloud provider. As a result, you will need to open SSH tunnels against
that deployment to access Spinnaker.
? We recommend that you instead configure an authentication
mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker
securely, and then register the intended Domain and IP addresses that your
publicly facing services will be using.
+ Successfully enabled oauth2
kross@halyard:~$ hal deploy apply
+ Get current deployment
Success
+ Apply deployment
Success
+ Deploy spin-clouddriver
Success
+ Deploy spin-front50
Success
+ Deploy spin-orca
Success
+ Deploy spin-deck
Success
+ Deploy spin-echo
Success
+ Deploy spin-gate
Success
+ Deploy spin-igor
Success
+ Deploy spin-rosco
Success
Problems in default.security:
- WARNING Your UI or API domain does not have override base URLs
set even though your Spinnaker deployment is a Distributed deployment on a
remote cloud provider. As a result, you will need to open SSH tunnels against
that deployment to access Spinnaker.
? We recommend that you instead configure an authentication
mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker
securely, and then register the intended Domain and IP addresses that your
publicly facing services will be using.
I'm not quite sure what to do with this. It seems I am authenticated, yet for some reason the required user fields are not present in the interaction.
I have reviewed Spinnaker's authentication setup as well, repeatedly made a few changes, and tested with a fresh incognito browser, yet nothing changed.
Since the Google provider is an OAuth 2 provider packaged with Spinnaker, I'm confused as to what further configuration would be necessary, as I am not "bringing my own provider".
Where can I start looking next? Any references/pointers to documentation?
The problem is the --user-info-requirements hd=$DOMAIN argument. This is (generally) only needed if you're using a G Suite/Google Apps for Work account as your OAuth identity provider - it restricts login to only users in your domain. Otherwise, anyone with a valid @gmail account would be able to log in.
If you do use the --user-info-requirements hd=$DOMAIN and the $DOMAIN specified is invalid, you will receive this error. Be sure to use the fully qualified domain name as the value.
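For example, assuming a G Suite domain of example.com (a placeholder), correcting the requirement and redeploying mirrors the commands from the question:
hal config security authn oauth2 edit --user-info-requirements hd=example.com
hal deploy apply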

Unable to consume REST API in WSO2 API Store

I have installed API Manager 1.10.0 on a single machine and got everything running. I created and published an API containing OpenStack's Keystone URL. However, when I try to consume the API via the API Console in the API Store, I get the MANAGEMENT CONSOLE as a response.
I have looked at the curl request that is sent, and the IP in it is not right.
Curl request from API Console
Keystone API URLs
Why am I not able to use the API? Why is the Production endpoint from the API overview not used? (It works perfectly fine with a REST client, or even with the same curl request, once I change the IP.)
When API endpoint URLs are constructed, API Manager uses the following properties defined in its configuration file (api-manager.xml). If you haven't changed anything there, the default ports (8280/8243) will appear there. If you can, please also try this in a private browsing window over an HTTPS session.
And if you replace the host in the curl request with the machine's IP and the correct port (8280 or 8243), does it work as expected?
<GatewayEndpoint>http://${carbon.local.ip}:${http.nio.port},https://${carbon.local.ip}:${https.nio.port}</GatewayEndpoint>
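For example, a test call against the gateway using the machine's actual IP and the default HTTP NIO port might look roughly like this; the context, version, resource, and access token are placeholders for your published API's values:
# Placeholders: replace the IP, context, version, resource, and access token with your own.
curl -v -H "Authorization: Bearer <access-token>" \
    "http://<carbon-local-ip>:8280/<api-context>/<api-version>/<resource>"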
Thanks
sanjeewa.