I wrote a custom module for Ansible that calls a REST API, and I pass the parameters (base_url and api_key) as parameters defined in the playbook.
That's fine, but I would like to store these two parameters somewhere so that they don't have to be defined in the playbook.
Is there an official way to do that?
Thanks for your answers, but I need to be clearer with an example.
Here is a task:
tasks:
  - name: Check if all files are present
    Full_case1:
      param1: 'xxxxx'
      param2: 'xxxxxxxx'
      base_url: '{{ my_base_url }}'
      api_key: '{{ my_api_key }}'
I would like to remove base_url and api_key from the parameters used to call the module.
Thanks a lot!
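For what it's worth, one possible approach (an assumption on my part: it requires Ansible 2.7+ and keeps the values in vars or Vault rather than in the tasks) is to set them once with module_defaults so individual tasks no longer repeat them:

- hosts: all
  module_defaults:
    # module name matches the custom module; values come from vars/Vault
    Full_case1:
      base_url: '{{ my_base_url }}'
      api_key: '{{ my_api_key }}'
  tasks:
    - name: Check if all files are present
      Full_case1:
        param1: 'xxxxx'
        param2: 'xxxxxxxx'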
I am using Vercel Deployments with a Next.js app. Deployments run automatically when I push to master, but I don't want to store keys in GitHub. My serverless functions rely on my database. When running locally, I can simply use Google's Application Default Credentials, but this is not possible when deployed to Vercel. As such, I created a Service Account to give the server access.
How do I load the service account credentials without pushing the key itself to GitHub?
I tried adding the key as described in this issue, but that didn't work.
AFAIK, setting an environment variable in Vercel is not helpful because Google's credentials environment variable expects a path to a JSON file (rather than just text).
Rather than using a path to a JSON file, you can create an object and include environment variables as the values for each object key. For example:
admin.initializeApp({
  credential: admin.credential.cert({
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    private_key: process.env.FIREBASE_PRIVATE_KEY,
    project_id: 'my-project'
  }),
  databaseURL: 'https://my-project.firebaseio.com'
});
Then, you can add the environment variables inside your project settings in Vercel.
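One caveat worth noting (this part is an assumption on my side, depending on how the key is pasted into Vercel): the private key often ends up stored with literal \n sequences, so it may need normalizing before being passed to admin.credential.cert:

// Convert escaped "\n" sequences back into real newlines (no-op if the key is already multi-line)
const privateKey = (process.env.FIREBASE_PRIVATE_KEY || '').replace(/\\n/g, '\n');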
Adding to @leerob's answer, I found that putting quotes around the FIREBASE_PRIVATE_KEY environment variable in my .env file fixed an error I kept getting about the PEM file when making a request. I didn't need any quotes around the key for calls to the standard Firebase library, though.
This was the config I used to access the Google Cloud Storage API from my app:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
  projectId: process.env.FIREBASE_PROJECT_ID,
  credentials: {
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    private_key: process.env.FIREBASE_PRIVATE_KEY_WITH_QUOTES
  }
});
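If useful, a small usage sketch with that client (the bucket name here is a placeholder, not from the answer above):

async function listUploads() {
  // list objects in the bucket and print their names
  const [files] = await storage.bucket('my-example-bucket').getFiles();
  files.forEach((file) => console.log(file.name));
}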
I had this problem too, but with google-auth-library. Most of Google's libraries provide a way to pass credentials through an options object when you initialize them. To read data from Google Sheets or Google Forms, for example, you can do this:
const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth({
  credentials: {
    client_id: process.env.GOOGLE_CLIENT_ID,
    client_email: process.env.GOOGLE_CLIENT_EMAIL,
    project_id: process.env.GOOGLE_PROJECT_ID,
    private_key: process.env.GOOGLE_PRIVATE_KEY
  },
  scopes: [
    'https://www.googleapis.com/auth/someScopeHere',
    'https://www.googleapis.com/auth/someOtherScopeHere'
  ]
});
You can just copy the info from your credentials.json file into the corresponding environment variables. Just take care that when you're working on localhost you will need to have the private_key in double quotes, but when you put it into Vercel you should not include the quotes.
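To illustrate how the auth object might then be used, here is a sketch assuming the googleapis package, a Sheets read scope, and a hypothetical SPREADSHEET_ID environment variable (none of which come from the answer above):

const { google } = require('googleapis');

async function readSheet() {
  // build a Sheets client from the GoogleAuth instance created above
  const sheets = google.sheets({ version: 'v4', auth });
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId: process.env.SPREADSHEET_ID,
    range: 'Sheet1!A1:B10'
  });
  return res.data.values;
}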
The Serverless Framework has made it very easy for developers to create an API Gateway connected to a Lambda function, like this:
hello:
  name: hello-handler
  description: blablabla
  handler: /lambda-functions/hello-handler.handler
  role: HelloRole
  package:
    include:
      - lambda-functions/hello-handler.js
  events:
    - http: GET hello
My question is: how can I change the name of the API Gateway that is going to be created?
Based on the docs, this should do the trick:
provider:
  ...
  apiName: custom-api-name # Use a custom name for the API Gateway API
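For context, a fuller provider block might look something like this (the runtime, stage, and name values are placeholders, not from the question):

provider:
  name: aws
  runtime: nodejs12.x          # placeholder runtime
  stage: dev
  apiName: custom-api-name     # custom name for the API Gateway API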
I'm working on a web service in which one Lambda function serves requests from a web browser. This request handling kicks off some slow work that can be completed asynchronously, so I have a separate Lambda function that I want to invoke asynchronously to handle the slow work.
This is being deployed as a Serverless project. The serverless.yml file looks like this:
service: AsyncService
frameworkVersion: '=1.54.0'

provider:
  name: aws
  runtime: go1.x

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  FrontEnd:
    handler: bin/FrontEnd
    events:
      - http:
          path: processData
          method: post
    environment:
      AsyncWorkerARN: ???
  AsyncWorker:
    handler: bin/AsyncWorker
The question is how can I get the ARN of the AsyncWorker Lambda function into an environment variable of the FrontEnd Lambda function without hardcoding it? I need it there to be able to invoke the AsyncWorker Lambda.
I think the best way is to use the serverless-pseudo-parameters plugin and then do something like: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:AsyncService-dev-AsyncWorker
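As a sketch, that could be wired into the serverless.yml from the question roughly like this (assuming the plugin is installed and the default service-stage-function naming, with the dev stage hardcoded):

plugins:
  - serverless-pseudo-parameters

functions:
  FrontEnd:
    handler: bin/FrontEnd
    events:
      - http:
          path: processData
          method: post
    environment:
      # resolves to the AsyncWorker function deployed by this service in the dev stage
      AsyncWorkerARN: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:AsyncService-dev-AsyncWorker
  AsyncWorker:
    handler: bin/AsyncWorker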
I understand that I need to specify a request template for the API gateway in order to gain access to the request headers. The Serverless docs say:
"Serverless ships with the following default request templates you can use out of the box:"
The default templates look like they provide access to what I want (i.e. request headers), but how do you tell Serverless to use them?
The "default request templates you can use out of the box" are referring to a lambda integration, not a "default" integration, where you leave the parameter blank. If no integration is defined, then it is the default integration. So, under http, add "integration: lambda".
However, that being said, you should still have access to the headers when you do not specify the integration.
Lambda Integration
https://serverless.com/framework/docs/providers/aws/events/apigateway/#example-lambda-event-before-customization
functions:
  create:
    handler: posts.create
    events:
      - http:
          path: posts/create
          method: post
          integration: lambda
Default Integration
https://serverless.com/framework/docs/providers/aws/events/apigateway/#example-lambda-proxy-event-default
functions:
  index:
    handler: handler.hello
    events:
      - http: GET hello
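Either way, with the default Lambda proxy integration the incoming headers are available directly on the event object; a minimal Node.js handler sketch (the handler name is made up):

// handler.js - echoes back one request header under the default (proxy) integration
module.exports.hello = async (event) => {
  const userAgent = event.headers['User-Agent'];
  return {
    statusCode: 200,
    body: JSON.stringify({ userAgent })
  };
};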
I am using Fine Uploader to upload files to S3.
Based on my experience, Fine Uploader forces you to hard-code the S3 bucket name in the JavaScript itself, or it may be my misunderstanding! My challenge is that I have a different bucket per environment. Does that mean I have to use a separate JavaScript snippet (like below) per environment, such as local, dev, test, etc.? Is there any option where I can pass the bucket name from the server-side configuration?
Local
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_local",
        accessKey: "AKxxxxxxxxxxBIA"
    }
});
Dev
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_dev",
        accessKey: "AKxxxxxxxxxxBIA"
    }
});
Test
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    autoUpload: false,
    debug: true,
    request: {
        endpoint: "http://s3.amazonaws.com/bucket_test",
        accessKey: "AKxxxxxxxxxxBIA"
    }
});
Based on my experience, Fine Uploader forces you to hard-code the S3 bucket name in the JavaScript itself, or it may be my misunderstanding!
Yes, this is definitely a misunderstanding.
Normally, you would specify your bucket as a URL via the request.endpoint option. You can specify a default value here and then override it at almost any time, for all subsequent files or for one or more specific files, via the setEndpoint API method. You can call this method from, for example, an onSubmit callback handler, and even delegate to your server for the bucket by returning a Promise in your callback handler and resolving the Promise once you have called setEndpoint.
Fine Uploader attempts to determine the bucket name from the request.endpoint URL. If this is not possible (such as if you are using a CDN as an endpoint), you will need to supply the bucket name via the objectProperties.bucket option. This too can be dynamically updated, as the option value may be a function, and you may even return a Promise from this function if you would like to delegate to an async task to determine the bucket name (such as if you need to get the bucket name from a server endpoint using an ajax call). Fine Uploader S3 will call your bucket function before it attempts to upload each file, passing the ID of the file to your function.
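For example, fetching the bucket name from the server per file might look roughly like this (a sketch: /config/s3-bucket is a made-up endpoint, and it assumes your Fine Uploader version accepts a standard Promise from the bucket option):

$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    request: {
        // default endpoint; can also be swapped later via setEndpoint()
        endpoint: 'http://s3.amazonaws.com/bucket_dev'
    },
    objectProperties: {
        // called before each file upload; the file ID is passed in
        bucket: function (fileId) {
            return fetch('/config/s3-bucket').then(function (response) {
                return response.text();
            });
        }
    }
});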
I set s3_url as a hidden value in the HTML, with the value set on the server based on the environment config:
request: {
    endpoint: $('#s3_url').val()
}
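Put together, that might look roughly like this (the hidden field is rendered server-side; the bucket URL shown is just a placeholder):

// The rendered page would contain something like:
//   <input type="hidden" id="s3_url" value="http://s3.amazonaws.com/bucket_dev">
$('#fine-uploader').fineUploaderS3({
    template: 'qq-template',
    request: {
        endpoint: $('#s3_url').val()
    }
});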