I am using Spring Cloud Config Server to serve configuration to my client apps. To handle secrets I am using HashiCorp Vault as a backend. For the remainder of the configuration I am using a Git repo, so I have configured the config server in composite mode. See my config server bootstrap.yml below:
server:
  port: 8888
spring:
  profiles:
    active: local, git, vault
  application:
    name: my-domain-configuration-server
  cloud:
    config:
      server:
        git:
          uri: https://mygit/my-domain-configuration
          order: 1
        vault:
          order: 2
          host: vault.mydomain.com
          port: 8200
          scheme: https
          backend: mymount/generic
This is all working as expected. However, the token I am using is secured with a Vault auth policy. See below:
{
  "rules": "path "mymount/generic/myapp-app,local" {
      policy = "read"
    }
    path "mymount/generic/myapp-app,local/*" {
      policy = "read"
    }
    path "mymount/generic/myapp-app" {
      policy = "read"
    }
    path "mymount/generic/myapp-app/*" {
      policy = "read"
    }
    path "mymount/generic/application,local" {
      policy = "read"
    }
    path "mymount/generic/application,local/*" {
      policy = "read"
    }
    path "mymount/generic/application" {
      policy = "read"
    }
    path "mymount/generic/application/*" {
      policy = "read"
    }"
}
My issue is that I am not storing secrets in all of these scopes. I have to authorize all of these paths just so the token can read one secret from mymount/generic/myapp-app,local. If I do not authorize all the other paths, the VaultEnvironmentRepository.read() method gets a 403 (Forbidden) HTTP status and throws a VaultException. This results in a complete failure to retrieve any configuration for the app, including the Git-based configuration. This is very limiting, as client apps may have multiple active Spring profiles that have nothing to do with retrieving configuration items; the config server attempts to retrieve configuration for every active profile the client provides.
Is there a way to make the config server fault tolerant or lenient here, so that VaultEnvironmentRepository does not abort and instead returns whatever configuration it is actually authorized to read?
Do you absolutely need the local profile? Would you not be able to get by with just the 'vault' and 'git' profiles in Config Server and use the 'default' profile in each Spring Boot application?
If you use the above suggestion then the only two paths you'd need in your rules (.hcl) file are:
path "mymount/generic/application" {
  capabilities = ["read", "list"]
}
and
path "mymount/generic/myapp-app" {
  capabilities = ["read", "list"]
}
This assumes that you're writing configuration to
vault write mymount/generic/myapp-app
and not
vault write mymount/generic/myapp-app,local
or similar.
Related
I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but nothing I try works. I'm not sure what the correct endpoint should be. For testing purposes my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: eu-central-1,
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});

const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
{
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
      message: 'EC2 Metadata roleName request returned error',
      code: 'TimeoutError',
      time: 2021-07-15T08:29:49.999Z,
      retryable: true,
      originalError: [Object]
    }
  }
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that, from all the ways the SDK tries to find credentials (more info here), it cannot find any credentials it can use to sign the URL.
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." That is not the case in the code example above.
You should set the endpoint to be eu-central-1.linodeobjects.com. When using Linode object storage the region is not determined by the endpoint that you use.
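Putting both suggestions together, a corrected version of the snippet could look like the sketch below. The environment-variable names are placeholders, and note that the constructor option is spelled accessKeyId; the misspelled accesKeyId in the question is silently ignored by the SDK, which on its own forces the fallback to the credential provider chain.
const { S3 } = require("aws-sdk"); // aws-sdk v2, as in the question

const s3 = new S3({
  endpoint: "https://eu-central-1.linodeobjects.com", // region-specific Linode endpoint
  region: "eu-central-1",
  accessKeyId: process.env.LINODE_ACCESS_KEY,     // explicit credentials, so the SDK does not
  secretAccessKey: process.env.LINODE_SECRET_KEY, // have to fall back to the provider chain
});

async function createUploadUrl(bucketName) {
  const params = {
    Bucket: bucketName,
    Key: "someKey",
    Expires: 60, // URL validity in seconds
  };
  // Signing happens locally; the credentials are only validated when the URL is actually used.
  return s3.getSignedUrlPromise("putObject", params);
}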
I am trying to put a value into an Infinispan cache using the Hot Rod Node.js client. The code runs fine if the server is installed locally. However, when I run the same code against an Infinispan server hosted in a Docker container, I get the following error:
java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation
try {
  client = await infinispan.client({
    port: 11222,
    host: '127.0.0.1'
  }, {
    cacheName: 'testcache'
  });
  console.log(`Connected to cache`);
  await client.put('test', 'hello 1');
  await client.disconnect();
} catch (e) {
  console.log(e);
  await client.disconnect();
}
I have also tried setting the CORS allow-all option on the server.
You need to provide a custom config.yaml to the Docker container with the following configuration:
endpoints:
  hotrod:
    auth: false
    enabled: false
    qop: auth
    serverName: infinispan
Unfortunately, the Node.js client doesn't support authentication yet. The issue tracking this is https://issues.redhat.com/projects/HRJS/issues/HRJS-36
I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK, as shown below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response. I can't find any additional information. For some reason API Gateway can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in CloudWatch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First, I create a network load balanced Fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));

  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });

  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });

  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying, I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimal set of rules is needed instead of allTraffic().
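As an untested sketch for narrowing that down (it assumes the vpc object created in setupService() is accessible here and that Peer is imported from the aws-ec2 module), the rule could be limited to sources inside the VPC, the idea being that the NLB and its health checks reach the tasks from addresses inside the VPC:
// Hypothetical narrower rule instead of allowing all traffic from anywhere:
service.service.connections.allowFrom(
  Peer.ipv4(vpc.vpcCidrBlock), // only sources inside the VPC's CIDR range
  Port.tcp(5050)               // only the container port
);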
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod("ANY", new HttpIntegration(
  'http://' + service.loadBalancer.loadBalancerDnsName,
  {
    httpMethod: 'ANY',
    options: {
      connectionType: ConnectionType.VPC_LINK,
      vpcLink: new VpcLink(this, 'MyVpcLink', {
        targets: [service.loadBalancer],
        vpcLinkName: 'MyVpcLink',
      })
    },
  }
))
I'd like to use Terraform to create an AWS Cognito User Pool with one test user. Creating a user pool is quite straightforward:
resource "aws_cognito_user_pool" "users" {
  name = "${var.cognito_user_pool_name}"

  admin_create_user_config {
    allow_admin_create_user_only = true
    unused_account_validity_days = 7
  }
}
However, I cannot find a resource that creates an AWS Cognito user. It is doable with the AWS CLI:
aws cognito-idp admin-create-user --user-pool-id <value> --username <value>
Any idea on how to do it with Terraform?
In order to automate things, it can be done in Terraform using a null_resource and the local-exec provisioner to execute your AWS CLI command,
e.g.
resource "aws_cognito_user_pool" "pool" {
  name = "mypool"
}

resource "null_resource" "cognito_user" {
  triggers = {
    user_pool_id = aws_cognito_user_pool.pool.id
  }

  provisioner "local-exec" {
    command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
  }
}
This isn't currently possible directly in Terraform as there isn't a resource that creates users in a user pool.
There is an open issue requesting the feature but no work has yet started on it.
As it is not possible to do this directly through Terraform, in contrast to matusko's solution I would recommend using a CloudFormation template.
In my opinion it is more elegant because:
it does not require additional applications to be installed locally
it can be managed by Terraform, as the CloudFormation stack can be destroyed by Terraform
A simple solution with a template could look like the example below. Keep in mind that I have skipped files and resources that are not directly related, such as the provider. The example also includes joining users to groups.
variables.tf
variable "COGNITO_USERS_MAIL" {
  type        = string
  description = "Passwords for the example users will be sent to this email address. It is the only method I know of for receiving the passwords after automatic user creation."
}
cf_template.json
{
  "Resources": {
    "userFoo": {
      "Type": "AWS::Cognito::UserPoolUser",
      "Properties": {
        "UserAttributes": [
          { "Name": "email", "Value": "${users_mail}" }
        ],
        "Username": "foo",
        "UserPoolId": "${user_pool_id}"
      }
    },
    "groupFooAdmin": {
      "Type": "AWS::Cognito::UserPoolUserToGroupAttachment",
      "Properties": {
        "GroupName": "${user_pool_group_admin}",
        "Username": "foo",
        "UserPoolId": "${user_pool_id}"
      },
      "DependsOn": "userFoo"
    }
  }
}
cognito.tf
resource "aws_cognito_user_pool" "user_pool" {
  name = "cogito-user-pool-name"
}

resource "aws_cognito_user_pool_domain" "user_pool_domain" {
  domain       = "somedomain"
  user_pool_id = aws_cognito_user_pool.user_pool.id
}

resource "aws_cognito_user_group" "admin" {
  name         = "admin"
  user_pool_id = aws_cognito_user_pool.user_pool.id
}
user_init.tf
data "template_file" "application_bootstrap" {
  template = file("${path.module}/cf_template.json")

  vars = {
    user_pool_id          = aws_cognito_user_pool.user_pool.id
    users_mail            = var.COGNITO_USERS_MAIL
    user_pool_group_admin = aws_cognito_user_group.admin.name
  }
}

resource "aws_cloudformation_stack" "test_users" {
  name          = "${var.TAG_PROJECT}-test-users"
  template_body = data.template_file.application_bootstrap.rendered
}
Sources
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooluser.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack
Example
Simple project based on:
Terraform,
Cognito,
Elastic Load Balancer,
Auto Scaling Group,
Spring Boot application,
PostgreSQL DB.
Security checks are made on the ELB and in Spring Boot. This means that the ELB does not pass unauthorized users to the application, and the application can do further security checks based on PostgreSQL roles, which are mapped to Cognito roles.
Terraform Project and simple application:
https://github.com/test-aws-cognito
Docker image made out of application code:
https://hub.docker.com/r/testawscognito/simple-web-app
More information on how to run it is in the Terraform git repository's README.md.
It should be noted that the aws_cognito_user resource is now supported in the AWS Terraform provider, as documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user
Version 4.3.0 at time of writing.
I am trying to run my first deepstream.io server from this link, but I get this error:
error:
CONNECTION_ERROR | Error: listen EADDRINUSE 127.0.0.1:3003
PLUGIN_ERROR | connectionEndpoint wasn't initialised in time
f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:96
      throw error
      ^
Error: connectionEndpoint wasn't initialised in time
    at DependencyInitialiser._onTimeout (f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:94:17)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)
and this is my code:
const DeepStreamServer = require("deepstream.io")
const C = DeepStreamServer.constants;
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
server.start();
In deepstream 3.0 we released our HTTP endpoint; by default this runs alongside our websocket endpoint.
Because of this, passing the port option at the root level of the config no longer works (it overrides both the HTTP and websocket port options; as you can see in the error output provided, both endpoints are trying to start on the same port).
You can override each of these ports as follows:
const deepstream = require('deepstream.io')

const server = new deepstream({
  connectionEndpoints: {
    http: {
      options: {
        port: ...
      }
    },
    websocket: {
      options: {
        port: ...
      }
    }
  }
})
server.start()
Or you can define your config in a file and point to that while initialising deepstream[1].
[1] deepstream server configuration
One solution that I found is passing an empty config object, so instead of:
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
I'm just using this:
const server = new DeepStreamServer({})
and now everything works well.
Everything below applies to version 4.2.2 (the latest version at the time of writing).
I was having the same port-in-use or config-file-not-found errors. I was also using TypeScript and didn't pay attention to the output directory and the build (which can be a problem when you use TypeScript and build). I was able to run the server in the end, after quite a lot of analysis.
I checked the source code and saw how the config is loaded:
const SUPPORTED_EXTENSIONS = ['.yml', '.yaml', '.json', '.js']
const DEFAULT_CONFIG_DIRS = [
  path.join('.', 'conf', 'config'), path.join('..', 'conf', 'config'),
  '/etc/deepstream/config', '/usr/local/etc/deepstream/config',
  '/usr/local/etc/deepstream/conf/config',
]
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', 'conf', 'config'))
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', '..', 'conf', 'config'))
I also tested different things. Here is what I came up with:
First of all, if we don't pass any parameter to the constructor, a config from the default directories will be loaded. If there isn't one, the server fails to run.
One of the places where we can put a config is ./conf, in the same folder as the server Node script.
Secondly, we can pass a config as a string path (a parameter in the constructor): config.yml, or one of the supported extensions. That allows the server to load the server config plus the permission.yml and users.yml configs, which are also expected to be in the same folder. If they are not in the same folder their load will fail, and therefore the permission plugin will not load (the same goes for the users config); there is no fallback to defaults.
Thirdly, the supported extensions for the config files are: yml, yaml, json and js.
In a Node.js context, if nothing is specified, there is no fallback to some default config: the config needs to be provided in one of the default folders, by passing a path to it, or by passing a config object. All of the optional options will default to some values if not provided (a bit below there is an example that shows this). Know, however, that specifying a connection endpoint is very important and required.
To pass the path, we need to give the path to the config.yml file (the server config), for example path.join(__dirname, './conf/config.yml'). Then permission.yml and users.yml will be retrieved from the same directory (the extension can be any of the supported ones). We cannot pass a path to a directory; that will fail.
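For instance, a minimal sketch (assuming Deepstream is the server class imported from the deepstream.io package, as in the version 4 example further below):
const path = require('path');

// Point the server at an explicit config file; permission.yml and users.yml are then
// expected to live next to that config.yml.
const server = new Deepstream(path.join(__dirname, './conf/config.yml'));
server.start();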
We can specify the path to the permission config or the users config separately within config.yml, as shown below:
# Permissioning example with default values for config-based permissioning
permission:
  type: config
  options:
    path: ./permissions.yml
    maxRuleIterations: 3
    cacheEvacuationInterval: 60000
Finally, we can pass an object to configure the server, or pass null as a parameter and use the .set methods (I didn't test the second method). To configure the server we need to follow the same structure as the yml file, sometimes with slightly different naming. The TypeScript declaration files (types) show us the way; with an editor like VS Code, even if we are not using TypeScript, we still get auto-completion and type definitions.
And the simplest equivalent to the previous version's config is:
const webSocketServer = new Deepstream({
  connectionEndpoints: [
    {
      type: 'ws-websocket',
      options: {
        port: 6020,
        host: '127.0.0.1',
        urlPath: '/deepstream'
      }
    }
  ]
});
webSocketServer.start();
The above is the new syntax and way of doing it.
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
This is completely deprecated and not supported in version 4 (the docs have not been updated).