How can I add the current service endpoint to the deployed lambda environment - serverless-framework

Reading here, you can reference CloudFormation variables to set the environment, like:
environment:
  BASE_URL: ${cf:${self:service}-${self:provider.stage}.ServiceEndpoint, 'reinstall'}
Unfortunately, the CF variables are not set on the initial install, so without the default value the initial serverless deploy will fail -- and with it, the initial install will not have the correct endpoint value; only a subsequent install will.
Is there a way to configure this to work on the initial install or after a serverless remove, in the serverless.yml? Is there some plugin that allows updating the environment as part of some postinstall step? Alternatively, does the handler have the service endpoint available somewhere at run time?
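For reference, a fuller sketch of the pattern in question (service, function, and handler names are illustrative):

service: my-service

provider:
  name: aws
  stage: dev
  environment:
    # falls back to 'reinstall' when the stack output does not exist yet
    BASE_URL: ${cf:${self:service}-${self:provider.stage}.ServiceEndpoint, 'reinstall'}

functions:
  hello:
    handler: handler.hello
    events:
      - http: GET hello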


Configure allowed_pull_policies on shared GitLab runner

I'm using GitLab.com's managed CI runners, and I'd like to run my CI jobs using the if-not-present pull policy to avoid the extra minutes it takes to pull the image for each job. Trying to set that value in the .gitlab-ci.yml file gives me this error:
pull_policy ([if-not-present]) defined in GitLab pipeline config is not one of the allowed_pull_policies ([always])
This led me to the config.toml settings for restricting Docker pull policies, so I created a config.toml file at the root of my repository and tried that. However, I still get the same error.
Is config.toml only available for manual/self-hosted runners? Is there any other way to get past this?
Context
Image selection in .gitlab-ci.yml:
default:
  image:
    name: registry.gitlab.com/myorg/myrepo/ci/builder:latest
    pull_policy: if-not-present
Contents of config.toml:
[[runners]]
  executor = "docker"
  [runners.docker]
    pull_policy = ["if-not-present"]
    allowed_pull_policies = ["always", "if-not-present"]
First of all, the config.toml file is not meant to be in your repo but on the runner machine (or container).
In any case, the always pull policy should not cause image pulls to last minutes if the layers are already cached locally: it just ensures you have the latest version by checking the metadata. If the pulls take minutes, it means that either the layers are not available locally, or the image was actually updated (or that the connection to your container registry is so incredibly slow that just checking the metadata takes minutes, but that is unlikely).
It is very possible that GitLab's managed runners have no way to cache layers locally, and thus there would be no practical difference between the always and if-not-present policies. For instance, if you use GitLab SaaS:
A dedicated temporary runner VM hosts and runs each CI job.
(see https://docs.gitlab.com/ee/ci/runners/index.html)
Thus the downloaded layers are discarded as soon as the job finishes.

setting NODE_EXTRA_CA_CERTS with dotenv does not work as an export

I feel puzzled by the following behavior. In the very beginning of my main index.js, I am using
require('dotenv').config();
console.log(process.env); // everything seems in order
I know that the rest of my code successfully accesses all the relevant process.env.${VARS}. However, I get SSL exceptions; exceptions that I can easily solve by running:
export NODE_EXTRA_CA_CERTS=/some/absolute/path/to/ca.pem
npm start
Is there something special about NODE_EXTRA_CA_CERTS that would explain why this specific variable does not work when set via require('dotenv').config(), while the others work like a charm?
Does it need to be set before running npm? If so, why is that the case, and is there any workaround so I could keep things simple?
Environment:
dotenv 16.0.0
node v16.13.2
Near-duplicate: How to properly configure node.js to use Self Signed root certificates?
Your problem is not in npm. npm start runs your application, typically (but not necessarily) by running node (or whatever spelling on your platform) to run your js code. When you use node to run js, NODE_EXTRA_CA_CERTS is read and saved in the C-code part of node at startup, before beginning to execute js, and subsequent changes in js variables like process.env do not affect it.
The clean way to do this in js is to pass the desired CA list -- which can consist of the standard list (from tls.rootCertificates) plus any additions (or replacements or deletions) you choose -- in the (relevant) TLS socket creation, or in any https request that implicitly creates a TLS socket; or alternatively to use --use-openssl-ca and select an OpenSSL-format store provided by your system (modified if necessary by system means like update-ca-certificates on Debian/Ubuntu) or one you create.
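A minimal sketch of the first option, using Node's https module; the host and certificate path are placeholders:

const fs = require('fs');
const tls = require('tls');
const https = require('https');

// Bundled root certificates plus the extra CA; the path is illustrative.
const ca = [...tls.rootCertificates, fs.readFileSync('/some/absolute/path/to/ca.pem', 'utf8')];

// The `ca` option defines the trust store for this request only.
https.get('https://internal.example.com/', { ca }, (res) => {
  console.log('status:', res.statusCode);
  res.resume();
}).on('error', console.error);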
Or when using npm as you do, it should be possible to configure your package.json to set the envvar before running the application in node.
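For example, a sketch of such a package.json script (assumes a POSIX shell; the certificate path is a placeholder):

{
  "scripts": {
    "start": "NODE_EXTRA_CA_CERTS=/some/absolute/path/to/ca.pem node index.js"
  }
}

On Windows, a wrapper such as cross-env is commonly used to set the variable portably.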
If you can't do either/any of those, especially where you control the toplevel (and startup) but call libraries you can't [safely] change, see the Q I linked above. For https connections that use the default https.globalAgent you can (documentedly) set that per the A. For all connections, you can monkeypatch tls.createSecureContext to use the undocumented context.addCACert as in the Q, which OP confirmed in the A does actually work if using a correct cert.

In what order is the serverless file evaluated?

I have tried to find out in what order the statements of the serverless file are evaluated (maybe it is more common to say that 'variables are resolved').
I haven't been able to find any information about this and to some extent it makes working with serverless feel like a guessing game for me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml
useDotenv: true
provider:
  stage: ${env:stage}
  region: ${env:region}
.env
region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?
In effect you've created a circular dependency. Stage is special because it is needed to identify which .env file to load: ${env:stage} would be resolved from the stage-specific .env file, but Serverless needs to know what the stage is in order to pick that file in the first place.
This is why it's evaluated first.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used for stage and region (respectively) for that deployment.
I'd suggest removing any variable interpolation for stage and instead setting a default, and overriding via CLI when needed.
Then dotenv will know which environment file to use, and complete the rest of the template.
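A sketch of that arrangement (values are illustrative):

useDotenv: true
provider:
  stage: dev                # default; override with `sls deploy --stage prod`
  region: ${env:region}     # still resolved from the .env file once the stage is known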

What is the use of custom-artifact in Spinnaker? It always gives the error "Custom references are passed on to cloud platforms to handle or process" (500)

I am trying to use the custom-artifact account in Spinnaker.
I have a pipeline where I want to pull an HTTP file (a deployment manifest) as an artifact and use it in a deployment.
I use custom-artifact and put the URL (https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml) in the reference field.
I have tried running this pipeline multiple times, but it always fails with the error ("Internal Server Error", "message": "Custom references are passed on to cloud platforms to handle or process", "status": 500).
I saw some tutorials where they just use a custom artifact and put some HTTP URL to get files for the Deploy stage.
Steps to reproduce:
1. Create a new pipeline --> in the configuration stage --> add artifact --> choose "custom-artifact" --> update reference with (https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml) --> check "use default artifact" and fill in the same details --> add one more stage, Deploy --> use the artifact template from the configuration stage --> run the pipeline
Spinnaker version: 1.16.1
As of Spinnaker 1.17.1 the custom-artifact account is deprecated. If possible, use an embedded artifact instead (see the docs on producing an artifact and using it in another execution).

Can't deploy using serverless framework from Windows 10

Attempt to deploy via serverless framework using Windows 10 fails:
C:\Users\xxxxxx>sls deploy --verbose
Serverless: Packaging service...
Serverless: Excluding development dependencies...

  Error --------------------------------------------------

  EPERM: operation not permitted, scandir 'C:\Users\xxxxxx\AppData\Local\ElevatedDiagnostics'

  For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

  Your Environment Information -----------------------------
  OS:                 win32
  Node Version:       6.11.2
  Serverless Version: 1.19.0
Tried again with command prompt under elevated privileges:
EBUSY: resource busy or locked, scandir 'C:\Users\xxxxxx\AppData\Local\Microsoft\InputPersonalization\TextHarvester\WaitList.dat'
I assumed there was a permissions issue at first, so I retried with the command prompt in full admin mode but just ran into the second error. My research suggested an issue with Windows Search, so I turned it off (along with all background apps). Trying again (and again) I just ran into more similar issues and am unable to deploy anything. Has anyone had similar issues and found a way around them?
I worked it out finally, so in case anyone else encounters this issue here is a summary. There seem to be 2 issues:
Don't create functions in your root folder. Create a specific folder for your serverless function i.e. not in C:\Users\nnnnnn> but within your regular document storage. In Windows 10 it works nicely if you use a OneDrive folder, with the benefit that your function(s) are also then replicated to other dev machines that you might use (and are automatically backed up offsite).
More importantly, the serverless framework seems to have an issue if you attempt to deploy to a region other than the default region set in your aws CLI configuration. I've no idea why this should be since the credentials I use with the AWS CLI are authorised for all regions. I also have no idea why the issue should result in serverless attempting to access a whole series of windows files for which it has no authority but nevertheless...
In my case, I primarily use region ap-southeast-2. By default, sls create generates a serverless.yml using a default US region. If this is left as-is, there is then a mismatch between the deployment region and your AWS CLI region. Not good. To avoid the minor pain of having to specify a deployment region in the sls deploy command, just update the deployment region in the serverless.yml file to match the CLI region.
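For example, a sketch of the relevant serverless.yml change (use whatever region your CLI is configured for):

provider:
  name: aws
  region: ap-southeast-2   # match the default region from `aws configure`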
Now works a treat...