Using Key Management Service (KMS) with the Serverless Framework

Basically I use the Serverless Framework for a function that allows me to send/receive emails through Mailgun.
For this I have a config.js file set up in my serverless folder.
This config.js contains all my API keys, email address, login, etc. for my “mailgun” function.
I want to use Google Cloud KMS to encrypt the config.js resource, because I am afraid my sensitive data could be stolen and misused.
The encrypted file is config.js.enc.
But serverless deploy does not decrypt my config.js.enc; it throws a resource/syntax error…
Any solutions/ideas how I can make KMS work for my config.js file in my serverless framework?
I also added AWS tags, because AWS offers a KMS similar to Google Cloud's. But I think the real issue is with the Serverless Framework itself and getting encrypted files to work when deploying with the sls deploy command, though I could be mistaken.

The serverless framework appears to include native AWS SSM integrations:
functions:
  myfunc:
    # other config
    environment:
      TWITTER_ACCESS_TOKEN: ${ssm:myFunc}
However, as you noted, there's no similar functionality on GCP, so you'll need to roll some of this on your own. You may be interested in some of the strategies outlined in Secrets in Serverless:
Do you need the secrets?
It's always important to ask: do you actually need these secrets? Could you leverage the cloud provider's IAM (or even cross-cloud OIDC) instead of injecting secrets into your application? Where possible, try to leverage the IAM solution provided by the various clouds. Obviously there are still quite a few cases where a secret is required.
Encrypted environment variables
Before the function is launched, you encrypt the plaintext secrets locally into ciphertext (encrypted strings). Here's an example with gcloud, but you can also use the API or other tools like HashiCorp Vault:
$ gcloud kms encrypt \
    --ciphertext-file=- \
    --plaintext-file=/path/to/my/secret \
    --key=my-kms-key \
    --key-ring=my-kms-keyring \
    --location=us-east4 \
  | base64
This will output a base64-encoded, encrypted string, which you then store in your config.js:
CiQAePa3VBJLbunLSqIJT+RS4nYiKdIaW6U69Y...
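For example, config.js could then hold only ciphertext. This is a hedged sketch; the property name is illustrative and not from the question:
// config.js -- hypothetical sketch: only the KMS ciphertext is stored here,
// never the plaintext secret.
module.exports = {
  mailgunApiKeyCiphertext: 'CiQAePa3VBJLbunLSqIJT+RS4nYiKdIaW6U69Y...',
};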
On startup, configure your application to:
1. Base64-decode the string
2. Decrypt the ciphertext using Cloud KMS
3. Store the plaintext in memory for as long as the secret is needed
I'm not sure what language(s) you are using, but here's a nodejs sample. You can find a lot more samples on GitHub at sethvargo/secrets-in-serverless:
const cryptoKeyID = process.env.KMS_CRYPTO_KEY_ID;

const kms = require('@google-cloud/kms');
const client = new kms.v1.KeyManagementServiceClient();

// Decrypt the base64-encoded ciphertexts at cold start and cache the
// plaintext values in memory.
let username;
client.decrypt({
  name: cryptoKeyID,
  ciphertext: process.env.DB_USER,
}).then(res => {
  username = res[0].plaintext.toString().trim();
}).catch(err => {
  console.error(err);
});

let password;
client.decrypt({
  name: cryptoKeyID,
  ciphertext: process.env.DB_PASS,
}).then(res => {
  password = res[0].plaintext.toString().trim();
}).catch(err => {
  console.error(err);
});

exports.F = (req, res) => {
  res.send(`${username}:${password}`);
};
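One caveat with the sample above: the decrypt calls run asynchronously at cold start, so a very early request could see undefined values. A hedged variant (assuming the same library version and environment variables as the sample) resolves the secrets once and awaits them in the handler:
const kms = require('@google-cloud/kms');
const client = new kms.v1.KeyManagementServiceClient();

// Decrypt both ciphertexts once and cache the resulting promise.
let secretsPromise;
function loadSecrets() {
  if (!secretsPromise) {
    secretsPromise = Promise.all([
      client.decrypt({ name: process.env.KMS_CRYPTO_KEY_ID, ciphertext: process.env.DB_USER }),
      client.decrypt({ name: process.env.KMS_CRYPTO_KEY_ID, ciphertext: process.env.DB_PASS }),
    ]).then(([userRes, passRes]) => ({
      username: userRes[0].plaintext.toString().trim(),
      password: passRes[0].plaintext.toString().trim(),
    }));
  }
  return secretsPromise;
}

exports.F = async (req, res) => {
  const { username, password } = await loadSecrets();
  res.send(`${username}:${password}`);
};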
Google Cloud Storage
Since you're on GCP, another option is to use Google Cloud Storage (GCS) directly to store the secrets. This would also decouple you from the serverless framework.
Make a bucket:
$ gsutil mb gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
Make the bucket private:
$ gsutil defacl set private gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
$ gsutil acl set -r private gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
Write some secrets into the bucket. Even though they are uploaded as plaintext, they are encrypted at rest, and access is tightly controlled via IAM.
$ gsutil -h 'Content-Type: application/json' cp - gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets/app1 <<< '{"username":"my-user", "password":"s3cr3t"}'
Then create a service account which has permission to read from the bucket, and assign that service account to your functions.
Finally, read from the bucket at function start (Python example this time):
import os
import json

from google.cloud import storage

blob = storage.Client() \
    .get_bucket(os.environ['STORAGE_BUCKET']) \
    .get_blob('app1') \
    .download_as_string()

parsed = json.loads(blob)
username = parsed['username']
password = parsed['password']

def F(request):
    return f'{username}:{password}'
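Since the question is about a Node.js config.js, here is a hedged Node.js equivalent of the Python sample above, assuming the same STORAGE_BUCKET environment variable and the app1 object created earlier:
const { Storage } = require('@google-cloud/storage');

// Download and parse the secrets object once, caching the promise.
let secretsPromise;
function loadSecrets() {
  if (!secretsPromise) {
    secretsPromise = new Storage()
      .bucket(process.env.STORAGE_BUCKET)
      .file('app1')
      .download()
      .then(([contents]) => JSON.parse(contents.toString()));
  }
  return secretsPromise;
}

exports.F = async (req, res) => {
  const { username, password } = await loadSecrets();
  res.send(`${username}:${password}`);
};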

Related

Can I access Google Secrets Manager secrets externally

I am writing an app that I will be hosting in Google Cloud Functions with some config stored in Secrets Manager. I would like to share this information with another node app that is running on my local machine. Is this possible? I have tried using the npm package but I can’t figure out how I can authenticate to get access to the manager.
I am using a service key to access firestore:
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";
const service_key = {
  apiKey: myKey,
  authDomain: "my-poroject.firebaseapp.com",
  projectId: "my-poroject",
  storageBucket: "my-poroject.appspot.com",
  messagingSenderId: "0123456789",
  appId: "0:00000000:web:00000000000000"
};

const app = initializeApp(service_key);
export const db = getFirestore(app);
This all works perfectly, but I can't see how I would apply the key or 'app' when using secret manager:
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();
As with any public cloud provider, most Google Cloud services are publicly accessible, so YES, you can access the secret from outside Google Cloud.
However, you must have the required credentials and permissions to access the secrets.
You can use a service account key file (I never recommend that option, but in some cases it's useful) to generate an access token and query Secret Manager safely. The problem is that the key file is itself a secret used to protect other secrets, so the security level depends on your external platform.
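As a hedged sketch of that key-file approach (the key path, project, and secret names are placeholders):
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

// Authenticate with a downloaded service account key file instead of
// Application Default Credentials.
const client = new SecretManagerServiceClient({
  keyFilename: '/path/to/service-account-key.json',
});

async function getSecret() {
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/my-secret/versions/latest',
  });
  return version.payload.data.toString('utf8');
}

getSecret().then(console.log).catch(console.error);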
You can also have a look at Workload Identity Federation (identity pools), which lets you use an identity you already have and be transparently authenticated on Google Cloud. It's very powerful: you no longer need a secret on your side, and you improve your security posture.

AWS SAM: How to initialize S3Client credentials from a lambda function handler

I am building an HTTP API using sam local start-api. Each endpoint of this API is mapped to a lambda handler I have written in Javascript code.
One of these lambda handlers needs to download and upload files from S3, for which I am using the S3Client from @aws-sdk/client-s3. I have tried to initialize the client as follows:
const s3Client = new S3Client({
  region: "eu-west-1"
});
expecting that it reads the credentials from my ~/.aws/credentials file, but it does not. All operations via this client fail due to lack of permissions.
I would like to know what is the correct way of using this S3Client from within a lambda handler that I am testing locally using sam local.
If you're not using the default profile in your AWS credentials file, SAM CLI commands have a --profile option to specify which profile to use.
For example:
sam local start-api --profile my_profile
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html
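Alternatively, a hedged sketch (assuming the @aws-sdk/credential-providers package is installed) is to load the named profile explicitly when constructing the client, instead of relying on the default provider chain:
const { S3Client } = require('@aws-sdk/client-s3');
const { fromIni } = require('@aws-sdk/credential-providers');

// Read the named profile from ~/.aws/credentials at client construction time.
const s3Client = new S3Client({
  region: 'eu-west-1',
  credentials: fromIni({ profile: 'my_profile' }),
});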

How do I add Google Application Credentials/Secret to Vercel Deployment?

I am using Vercel Deployments with a NextJS app. Deployments are automatically run when I push to master, but I don't want to store keys in GitHub. My serverless functions rely on my database. When run locally, I can simply use Google's Application Default Credentials, but this is not possible when deployed to Vercel. As such, I created a Service Account to give the server access.
How do I load the service account credentials without pushing the key itself to GitHub?
I tried adding the key as described in this issue, but that didn't work.
AFAIK setting an environment variable in Vercel is not helpful because Google's credentials environment variable expects a path to a JSON file (vs. simply text).
Rather than using a path to a JSON file, you can create an object and include environment variables as the values for each object key. For example:
admin.initializeApp({
  credential: admin.credential.cert({
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    private_key: process.env.FIREBASE_PRIVATE_KEY,
    project_id: 'my-project'
  }),
  databaseURL: 'https://my-project.firebaseio.com'
});
Then, you can add the environment variables inside your project settings in Vercel.
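One hedged adjustment, not from the original answer and only an assumption about how the key gets pasted: if the private key ends up in the environment variable with literal "\n" sequences, un-escape it before handing it to the SDK:
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert({
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    // Un-escape literal "\n" sequences so the PEM parses correctly.
    private_key: (process.env.FIREBASE_PRIVATE_KEY || '').replace(/\\n/g, '\n'),
    project_id: 'my-project'
  }),
  databaseURL: 'https://my-project.firebaseio.com'
});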
Adding to @leerob's answer, I found that putting quotes around the FIREBASE_PRIVATE_KEY environment variable in my .env file fixed an error I kept getting relating to the PEM file when making a request. I didn't need any quotes around the key for calls to the standard firebase library, though.
This was the config I used to access the Google Cloud Storage API from my app:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
  projectId: process.env.FIREBASE_PROJECT_ID,
  credentials: {
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    private_key: process.env.FIREBASE_PRIVATE_KEY_WITH_QUOTES
  }
});
I had this problem too, but with google-auth-library. Most of Google's libraries provide a way to add credentials through an options object that you pass when initializing them. To get information from Google Sheets or Google Forms, for example, you can do this:
const auth = new GoogleAuth({
  credentials: {
    client_id: process.env.GOOGLE_CLIENT_ID,
    client_email: process.env.GOOGLE_CLIENT_EMAIL,
    project_id: process.env.GOOGLE_PROJECT_ID,
    private_key: process.env.GOOGLE_PRIVATE_KEY
  },
  scopes: [
    'https://www.googleapis.com/auth/someScopeHere',
    'https://www.googleapis.com/auth/someOtherScopeHere'
  ]
});
You can just copy the info from your credentials.json file to the corresponding environment variables. Just take care that when you're working on localhost you will need to have the private_key in double quotes, but when you put it into Vercel you should not include the quotes.

How To Add 'Authorization' Header to S3.upload() Request?

My code uses the AWS Javascript SDK to upload to S3 directly from a browser. Before the upload happens, my server sends it a value to use for 'Authorization'.
But I see no way in the AWS.S3.upload() method where I can add this header.
I know that underneath the .upload() method, AWS.S3.ManagedUpload is used but that likewise doesn't seem to return a Request object anywhere for me to add the header.
It works successfully in my dev environment when I hardcode my credentials in the S3() object, but I can't do that in production.
How can I get the Authorization header into the upload() call?
Client Side
This post explains how to post from an HTML form with a pre-generated signature:
How do you upload files directly to S3 over SSL?
Server Side
When you initialise the S3 client, you can pass the access key and secret.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  accessKeyId: '[value]',
  secretAccessKey: '[value]'
});

const params = {};
s3.upload(params, function (err, data) {
  console.log(err, data);
});
Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
Alternatively, if you are running this code inside AWS services such as EC2, Lambda, or ECS, you can assign an IAM role to the service you are using and attach the required permissions to that role.
I suggest that you use presigned URLs.
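For example, a hedged sketch with the v2 JavaScript SDK (bucket and key names are placeholders): the server signs a short-lived PUT URL and hands it to the browser, so no long-lived credentials or Authorization header ever reach the client code:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

// Generate a presigned URL that allows a single PUT for a limited time.
const url = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',
  Key: 'uploads/example.txt',
  Expires: 60, // URL validity in seconds
});
// The browser can now PUT the file body directly to `url` with fetch/XHR.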

Using Hashicorp Packer with Vault Secret Engine KV2

I'm trying to use Packer with the Vault secret engine KV2, but so far I'm hitting an auth/permission error. I'm trying to read a secret from Vault, as shown in the examples. In my test.json file I have a variables object, and inside I have access_key and secret_key keys. Each one of those contains a vault template call, as shown below:
"variables": {
"access_key": "{{ vault `/secret/data/Foo/test` `access_key`}}",
"secret_key": "{{ vault `/secret/data/Foo/test` `secret_key`}}"
}
In vault, I created a token (which I use with packer), and the token has a policy such that:
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/test" {
capabilities = ["read"]
}
According to docs, this should be enough for packer to be able to read the secret, but when I run packer I get
Error initializing core: error interpolating default value for 'access_key':
template: root:1:3: executing "root" at <vault `/secret/data/...>:
error calling vault: Error reading vault secret: Error making API request.
Permission denied.
URL: GET
https://vault.*******.com/v1/secret/data/Foo/test
Code: 403. Errors:
* 1 error occurred:
* permission denied
If I understand correctly, the cause of the problem is the policy not granting enough permissions to packer in order to allow it to read my secret. Am I right? If "yes", how should I modify my policy?
Try something like this for your Packer token policy (don't forget to remake the token with the new policy; you can't update policies on preexisting tokens):
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/*" {
capabilities = ["read"]
}
I've been learning Vault and have found that whenever I hardcode a policy path to one specific secret, I run into the same error. Hopefully this helps you out. This guide details how to use AppRole authentication with tokens; it may help.