"Execution failed" when setting up API Gateway and Fargate with AWS CDK - aws-fargate

I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK as described below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as the response. I can't find any additional information. For some reason API Gateway can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in CloudWatch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load balanced fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));

  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });

  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });

  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?

After hours of trying I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connection with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimal rule set should be used instead of allTraffic().
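One narrower rule to try (a sketch, not verified against this exact setup): since the NLB nodes and the VPC link ENIs both sit inside the VPC, allowing only the container port from the VPC CIDR is a reasonable first candidate. This goes inside setupService(), where vpc is the Vpc created above; Peer comes from the same EC2 module as Port:

// Assumption: the only callers are the NLB nodes and the VPC link ENIs, both inside the VPC.
service.service.connections.allowFrom(
  Peer.ipv4(vpc.vpcCidrBlock), // sources inside the VPC only
  Port.tcp(5050),              // container port only
);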
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod("ANY", new HttpIntegration(
'http://' + service.loadBalancer.loadBalancerDnsName,
{
httpMethod: 'ANY',
options: {
connectionType: ConnectionType.VPC_LINK,
vpcLink: new VpcLink(this, 'MyVpcLink', {
targets: [service.loadBalancer],
vpcLinkName: 'MyVpcLink',
})
},
}
))
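If the greedy {proxy+} sub-paths (like /MyResource/test in the log above) should also reach the container, a sketch that maps the caller's path onto the integration URL could look like the following; it follows the usual API Gateway path-mapping convention but has not been verified against this setup:

// Sketch: forward /MyResource/{proxy+} to the same path on the NLB target.
const vpcLink = new VpcLink(this, 'MyVpcLink', {
  targets: [service.loadBalancer],
  vpcLinkName: 'MyVpcLink',
});

resource.addProxy({
  anyMethod: true,
  defaultIntegration: new HttpIntegration(
    'http://' + service.loadBalancer.loadBalancerDnsName + '/{proxy}',
    {
      httpMethod: 'ANY',
      proxy: true,
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink,
        // map the caller's greedy path segment onto the {proxy} placeholder above
        requestParameters: {
          'integration.request.path.proxy': 'method.request.path.proxy',
        },
      },
    },
  ),
  defaultMethodOptions: {
    // declare the proxy path parameter on the method so it can be mapped
    requestParameters: { 'method.request.path.proxy': true },
  },
});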

Related

No Host in request URL for Grafana datasource plugin tutorial - Add authentication

I'm trying to follow the example for developing a datasource plugin from Grafana. Ultimately I want my plugin to use Oauth, but even with just the basic Grafana datasource proxy example I seem to be having issues.
I have updated my plugin.json, class and constructor.
I have set up this hard-coded example.
In plugin.json:
"routes": [
  {
    "path": "grafana",
    "url": "https://github.com"
  }
],
And a sample testDatasource():
async testDatasource() {
  return getBackendSrv()
    .datasourceRequest({
      url: this.url + '/grafana/grafana',
      method: 'GET',
    })
    .then(response => {
      if (response.status === 200) {
        return { status: 'success', message: 'Data source is working', title: 'Success' };
      } else {
        return { status: 'failure', message: 'Data source is not working: ' + response.status, title: 'Failure' };
      }
    });
}
When I save/test this datasource, which calls that method, I get the following in the frontend:
HTTP Error Bad Gateway
And in the logs
t=2021-09-17T14:31:22+0000 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/9/grafana/grafana remote_addr=172.17.0.1 referer=http://localhost:3000/datasources/edit/9/ error="http: proxy error: http: no Host in request URL"
I would've expected the request to be routed to the datasource proxy and for that to make the request to github but it seems Grafana is making a request to /api/datasources/proxy/9/grafana/grafana and nothing is picking it up?
Looking up my datasource via API, there's nothing listed for URL.
You will need to render this in your ConfigEditor.tsx
<DataSourceHttpSettings
  defaultUrl="http://localhost:8080"
  dataSourceConfig={options}
  onChange={onOptionsChange}
/>
This will give you the basic form with the URL, whitelist, and auth options that you see on most plugins. The URL there should, I guess, match what you have in your routes.
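For reference, a minimal ConfigEditor sketch that renders it could look like the following, assuming the standard plugin scaffold; the MyDataSourceOptions import from ./types is an assumption, not something from the question:

import React from 'react';
import { DataSourcePluginOptionsEditorProps } from '@grafana/data';
import { DataSourceHttpSettings } from '@grafana/ui';
import { MyDataSourceOptions } from './types'; // assumed scaffold types file

type Props = DataSourcePluginOptionsEditorProps<MyDataSourceOptions>;

// Renders the standard URL, whitelist, and auth form mentioned above and
// writes the chosen URL back into the datasource settings.
export const ConfigEditor = ({ options, onOptionsChange }: Props) => (
  <DataSourceHttpSettings
    defaultUrl="http://localhost:8080"
    dataSourceConfig={options}
    onChange={onOptionsChange}
  />
);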

node S3 Object Storage Linode

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but nothing I try works. I'm not sure what the correct endpoint should be. For testing purposes, my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: eu-central-1,
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});

const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
{
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
      message: 'EC2 Metadata roleName request returned error',
      code: 'TimeoutError',
      time: 2021-07-15T08:29:49.999Z,
      retryable: true,
      originalError: [Object]
    }
  }
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that from all the ways the SDK is trying to find credentials (more info here) it cannot find credentials it can use to sign the URL.
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." That is not the case in the code example above.
You should set the endpoint to eu-central-1.linodeobjects.com. With Linode Object Storage, the endpoint is not built from the region setting, so the regional endpoint has to be spelled out explicitly.
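Putting that together, a corrected client sketch (aws-sdk v2) could look like this, assuming the bucket lives in the eu-central-1 cluster; the key values are placeholders, and note that the option is spelled accessKeyId:

const S3 = require('aws-sdk/clients/s3');

const s3 = new S3({
  endpoint: 'https://eu-central-1.linodeobjects.com', // regional Linode endpoint (assumed cluster)
  region: 'eu-central-1',
  accessKeyId: 'YOUR_ACCESS_KEY',     // placeholder
  secretAccessKey: 'YOUR_SECRET_KEY', // placeholder
  signatureVersion: 'v4',
});

async function createUploadUrl(bucketName) {
  const params = { Bucket: bucketName, Key: 'someKey', Expires: 60 };
  // Signing happens locally; invalid keys only surface when the URL is actually used.
  return s3.getSignedUrlPromise('putObject', params);
}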

Infinispan java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation

I am trying to put a value into an Infinispan cache using the Hot Rod Node.js client. The code runs fine if the server is installed locally. However, when I run the same code against an Infinispan server hosted in a Docker container, I get the following error:
java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation
try {
  client = await infinispan.client({
    port: 11222,
    host: '127.0.0.1'
  }, {
    cacheName: 'testcache'
  });
  console.log(`Connected to cache`);
  await client.put('test', 'hello 1');
  await client.disconnect();
} catch (e) {
  console.log(e);
  await client.disconnect();
}
I have also tried setting the CORS allow-all option on the server.
You need to provide a custom config.yaml to the Docker container with the following configuration:
endpoints:
  hotrod:
    auth: false
    enabled: false
    qop: auth
    serverName: infinispan
Unfortunately, the Node.js client doesn't support authentication yet. The issue tracking this is https://issues.redhat.com/projects/HRJS/issues/HRJS-36.

Unable to verify the first certificate using Amazon SDK and Minio

I'm trying to connect to a MinIO server using the following code:
var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: 'minio',
  secretAccessKey: 'minio123',
  endpoint: 'https://minio.dev',
  s3ForcePathStyle: true, // needed with minio?
  signatureVersion: 'v4',
  sslEnabled: false,
  rejectUnauthorized: false
});

// putObject operation.
var params = {Bucket: 'documents', Key: 'testobject', Body: 'Hello from MinIO!!'};
s3.putObject(params, function(err, data) {
  if (err)
    console.log(err)
  else
    console.log("Successfully uploaded data to documents/testobject");
});

// getObject operation.
var params = {Bucket: 'documents', Key: 'testobject'};
var file = require('fs').createWriteStream('/tmp/mykey');
s3.getObject(params).
  on('httpData', function(chunk) { file.write(chunk); }).
  on('httpDone', function() { file.end(); }).
  send();
I get the following error:
{ Error: unable to verify the first certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1051:34)
    at TLSSocket.emit (events.js:189:13)
    at TLSSocket.EventEmitter.emit (domain.js:441:20)
    at TLSSocket._finishInit (_tls_wrap.js:633:8)
  message: 'unable to verify the first certificate',
  code: 'NetworkingError',
  region: 'us-east-1',
  hostname: 'minio.dev',
  retryable: true,
  time: 2019-07-11T23:38:45.382Z }
I have passed the option sslEnabled: false, but this doesn't change anything. I've also tried to disable SSL on the Node side, and that also fails to change the behavior.
Does anybody have any ideas on how to ignore the self-signed certificate error? (If that is the issue, which I believe it is.)
const AWS = require('aws-sdk');
const https = require('https');
// Allow use with Minio
AWS.NodeHttpClient.sslAgent = new https.Agent({ rejectUnauthorized: process.env.NODE_TLS_REJECT_UNAUTHORIZED !== '0' });
// the rest of the code snippet remains unchanged
rejectUnauthorized: false is the key. In this example, I've tied it to the existence of a commonly used environment variable that toggles the behavior in the request module. AWS SDK doesn't use it for its API, but reusing it seemed appropriate since it performed the same function.
Now if NODE_TLS_REJECT_UNAUTHORIZED=0 is set, the whole Node process including the AWS SDK will work with mocked HTTPS endpoints.
WARNING: Only use this in a development environment, such as mocking public services on your local workstation. It can leave you open to Man-In-The-Middle attacks!
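If you would rather not patch the SDK's HTTP client globally, an alternative sketch is to pass a custom https agent through the SDK's httpOptions setting, which scopes the relaxed certificate check to a single client. The endpoint and credentials below are the same placeholders as in the question, and the same development-only warning applies:

const AWS = require('aws-sdk');
const https = require('https');

const s3 = new AWS.S3({
  accessKeyId: 'minio',
  secretAccessKey: 'minio123',
  endpoint: 'https://minio.dev',
  s3ForcePathStyle: true,
  signatureVersion: 'v4',
  httpOptions: {
    // Accept the self-signed certificate for requests made by this client only.
    agent: new https.Agent({ rejectUnauthorized: false }),
  },
});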

pouchdb - secure replication with remote LevelDB

I am keen on using PouchDB in browser memory for an Angular application. This PouchDB will replicate from a remote LevelDB database that is fed key-value pairs from an algorithm. So, on the remote end, I would install PouchDB-Server. On the local end, I would do the following (as described here) on a node prompt.
var localDB = new PouchDB('mylocaldb')
var remoteDB = new PouchDB('https://remote-ip-address:5984/myremotedb')

localDB.sync(remoteDB, {
  live: true
}).on('change', function (change) {
  // yo, something changed!
}).on('error', function (err) {
  // yo, we got an error! (maybe the user went offline?)
});
How do we start a PouchDB instance that supports TLS for live replication as described in the snippet above?
How do I start a PouchDB instance that supports TLS for live replication?
So after some more searching, it is clear from this topic that HTTPS is not supported by PouchDB-Server.
Sorry, I misunderstood your question. I thought you intend to connect to a CouchDB server with PouchDB through HTTPS. Therefore, the following answer actually doesn't answer your question.
I created a server.js file like the one below to communicate with my CouchDB through HTTPS. Please note that the SSL certificate is (in my case) self-signed, and also that CouchDB listens on port 6984 by default when TLS is used:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; // Ignore rejection, because the CouchDB SSL certificate is self-signed

// import PouchDB from 'pouchdb'
const PouchDB = require('pouchdb')

const db = new PouchDB('https://admin:****@192.168.1.106:6984/reproduce')

db.allDocs({
  include_docs: true,
  attachments: false
}).then(function (result) {
  // handle result
  console.log(result)
}).catch(function (err) {
  console.log(err);
});
I'm running the above file with $ node server.js and I'm getting the expected results:
$ node server.js
{ total_rows: 3,
  offset: 0,
  rows:
   [ { id: '5d6590d3-41c7-4011-be5d-b21f80079ae5',
       key: '5d6590d3-41c7-4011-be5d-b21f80079ae5',
       value: [Object],
       doc: [Object] },
     { id: 'ec6a36d1-952e-4d86-9865-3587c6079fb5',
       key: 'ec6a36d1-952e-4d86-9865-3587c6079fb5',
       value: [Object],
       doc: [Object] },
     { id: 'f508e7aa-b4dc-42fc-96be-b7c1ffa54172',
       key: 'f508e7aa-b4dc-42fc-96be-b7c1ffa54172',
       value: [Object],
       doc: [Object] } ] }
I created the above code with Node.js on the server side. However, if you want to communicate with CouchDB through HTTPS inside the browser, i.e. on the client side, you have to enable CORS on CouchDB.
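Once CORS is enabled, a minimal browser-side sketch of the live sync could look like the following. The host, database name, and credentials are the same assumed values as in the answer above, and the certificate still has to be trusted by the browser, since NODE_TLS_REJECT_UNAUTHORIZED only affects Node processes:

import PouchDB from 'pouchdb';

const localDB = new PouchDB('mylocaldb');
const remoteDB = new PouchDB('https://192.168.1.106:6984/reproduce', {
  auth: { username: 'admin', password: '****' }, // placeholder credentials
});

// Continuous two-way replication between the in-browser DB and CouchDB.
localDB.sync(remoteDB, { live: true, retry: true })
  .on('change', info => console.log('change', info))
  .on('error', err => console.error('sync error', err));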