In what order is the serverless file evaluated? - serverless-framework

I have tried to find out in what order the statements of the serverless file are evaluated (maybe it is more common to say that 'variables are resolved').
I haven't been able to find any information about this and to some extent it makes working with serverless feel like a guessing game for me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml
useDotenv: true
provider:
  stage: ${env:stage}
  region: ${env:region}

.env
region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?

In effect you've created a circular dependency. Stage is special because it is needed to identify which .env file to load: ${env:stage} would have to be resolved from the stage-specific dotenv file, but Serverless needs to know what ${stage} is before it can pick that file.
This is why stage is evaluated first.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used for stage and region (respectively) for that deployment.
I'd suggest removing any variable interpolation for stage and instead setting a default, and overriding via CLI when needed.
Then dotenv will know which environment file to use, and the rest of the template can be resolved.
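A minimal sketch of that arrangement (the default stage value, the env fallback, and the .env.dev file name are assumptions for illustration; with useDotenv, Serverless looks for a stage-specific dotenv file once the stage is known):

serverless.yml
useDotenv: true
provider:
  name: aws
  stage: dev                          # plain default; override with: sls deploy --stage prod
  region: ${env:region, 'us-west-1'}  # non-stage values can still come from the dotenv file

.env.dev (loaded once the stage has resolved to "dev")
region=us-west-1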

Related

Change environment variable value of react app using azure devops - Deployment on aws (s3)

I have a React application that reads the backend API address from an environment variable. Below is an example:
this._baseUrl = process.env.API_GATEWAY;
In the local development environment, the development team creates a .env file and sets the environment variable's value there to call the backend API, and everything works fine, like below:
API_GATEWAY=http://localhost:3000
When I create a CI/CD pipeline for the same project, everything works fine and the application is successfully deployed to AWS (an S3 bucket), but I am not able to change the value of the environment variable while building the project with npm, like below:
- script: |
    npm run build
  displayName: 'npm build'
  env:
    API_GATEWAY: $(envAppApi)
API_GATEWAY above is the name of the environment variable used in the code, and $(envAppApi) is a variable defined in a variable group.
But when the application is deployed on AWS, the environment variable's value has not changed, and it shows the error below.
mutation.js:106 ReferenceError: process is not defined
    at new e (http-api.ts:17:42)
    at Function.value (http-api.ts:24:12)
    at Object.mutationFn (Auth.ts:13:26)
    at Object.fn (mutation.js:132:31)
    at c (retryer.js:95:31)
    at new u (retryer.js:156:3)
    at t.executeMutation (mutation.js:126:20)
    at mutation.js:86:20
(http-api.ts:17:42) is the same line where the API_GATEWAY environment variable is read, shown above.
Problem statement:
Is there any way to update the value of the environment variable while creating the CI/CD pipeline, so that the application runs successfully? Thanks.
Note: I don't want to use a .env file in my pipeline for updating environment values in the React application.
Is there any way to update the value of the environment variable while creating the CI/CD pipeline?
Yes. I suggest using the RegEx Match & Replace task from the RegEx Match & Replace extension.
This task will use regular expressions to match fields in the file.
Here is an example:
steps:
- task: RegExMatchReplace@2
  displayName: 'RegEx Match & Replace'
  inputs:
    PathToFile: test.js
    RegEx: 'this._baseUrl = ([a-zA-Z]+(\.[a-zA-Z]+)+)_[a-zA-Z]+;'
    ValueToReplace: 'this._baseUrl = $(envAppApi)'
Then the value will update.
You can use a site like Regex Generator to help build the regular expression.
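For context, a minimal sketch of how such a task might sit in the pipeline, running before the build so the replaced value gets bundled (the source path and the quoting of the replacement are assumptions for illustration):

steps:
- task: RegExMatchReplace@2
  displayName: 'Replace API base URL'
  inputs:
    PathToFile: src/http-api.ts   # assumed location of the file shown in the question
    RegEx: 'this._baseUrl = ([a-zA-Z]+(\.[a-zA-Z]+)+)_[a-zA-Z]+;'
    ValueToReplace: 'this._baseUrl = "$(envAppApi)";'   # quoted so the result is a valid string literal
- script: |
    npm run build
  displayName: 'npm build'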

How can I add the current service endpoint to the deployed lambda environment

Reading the docs, you can reference CloudFormation outputs to set the environment, like:
environment:
  BASE_URL: ${cf:${self:service}-${self:provider.stage}.ServiceEndpoint, 'reinstall'}
Unfortunately, the CF outputs are not set on the initial install, so without the default value the serverless deploy will fail; and with the default, the initial install will not have the correct endpoint value, only a subsequent install will.
Is there a way to configure this to work on the initial install or after a serverless remove, in the serverless.yml? Is there some plugin that allows updating the environment as part of some postinstall step? Alternatively, does the handler have the service endpoint available somewhere at run time?
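For reference, a minimal sketch of the fallback pattern described in the question (the service name and empty default are placeholders; as noted above, the real endpoint only resolves on a subsequent deploy, once the stack's ServiceEndpoint output exists):

service: my-service   # placeholder
provider:
  name: aws
  environment:
    # First deploy: the stack output doesn't exist yet, so the default is used.
    # Subsequent deploys: resolves to the real ServiceEndpoint output.
    BASE_URL: ${cf:${self:service}-${self:provider.stage}.ServiceEndpoint, ''}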

Gitlab-CI: AWS S3 deploy is failing

I am trying to create a deployment pipeline for Gitlab-CI on a react project. The build is working fine and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is regarding my deployment with the command aws s3 sync dist/ s3://bucket-name.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/ s3://bucket-name locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVGs. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names altogether, which was confusing to S3. By removing ® and + from the filenames, the issue was resolved.
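For context, a minimal sketch of what such a pipeline might look like (stage names and the bucket name are placeholders, assuming a job image with the AWS CLI available):

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - yarn install
    - yarn build            # assumes a "build" script in package.json
  artifacts:
    paths:
      - dist/

deploy:
  stage: deploy
  script:
    # exit code 2 from sync can mean some files were skipped,
    # even though the rest transferred successfully
    - aws s3 sync dist/ s3://bucket-name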

Drone.io secrets not populating in yml appropriately and documentation seems inaccurate

I am running version 0.8.4 as a container in my lab. The CLI is also at version 0.8.4.
I am trying to use a secret in a command that one of my containers runs.
Following the documentation, I need to sign a repo to allow the job to consume the secret. However, the drone CLI does not seem to have a drone sign command for me to run, so I created the secret with the --skip-verify=true flag instead. This creates the secret, but when I run the job it errors out. The output in the UI shows a blank space where the secret should be injected.
Here is an excerpt of my .drone.yml where I am trying to inject secrets:
-s production -u ${cf_user} -p ${cf_password} --s
I have tried all the following ways to create a secret:
drone secret add <repo_name> --name <key> --value <value> --skip-verify=true
drone secret add <repo_name> --name <key> --value <value>
GUI Creation
I notice that when I create a secret with an all-capitals name, the UI shows the name in all lowercase while the CLI shows it in capitals.
I also notice that if I include hyphens in the name and try to use that in my drone.yml the job errors out immediately with a bad substitution error.
Any help understanding what I am doing wrong would be much appreciated!
I got lost in the different documentation available; I should have been looking here rather than at the secret-guide.
In case I am not alone: I needed to add a secrets block to my pipeline.
I also needed to access the secrets with $SECRET_KEY rather than ${SECRET_KEY}:
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
Just a little update on this one, I stumbled over it as well because the docs are inconsistent.
In the 0.8.5 version, the only things I had to do were:
- add the secrets via the CLI or UI
- add the secrets array to utilise them
There was no need to pass the variables through environment.
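Putting both answers together, a minimal sketch of a working step (the image and secret names come from the excerpt above; the cf command and API endpoint are assumptions for illustration):

pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
    commands:
      # secrets are exposed as upper-case environment variables;
      # reference them as $VAR, not ${VAR}
      - cf login -a https://api.example.com -u $CF_USER -p $CF_PASSWORD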

Can't deploy using serverless framework from Windows 10

Attempting to deploy via the serverless framework on Windows 10 fails:
C:\Users\xxxxxx>sls deploy --verbose
Serverless: Packaging service...
Serverless: Excluding development dependencies...

Error --------------------------------------------------

EPERM: operation not permitted, scandir 'C:\Users\xxxxxx\AppData\Local\ElevatedDiagnostics'

For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Your Environment Information -----------------------------
OS: win32
Node Version: 6.11.2
Serverless Version: 1.19.0
Tried again with command prompt under elevated privileges:
EBUSY: resource busy or locked, scandir 'C:\Users\xxxxxx\AppData\Local\Microsoft\InputPersonalization\TextHarvester\WaitList.dat'
I assumed there was a permissions issue at first, so I retried with the command prompt in full admin mode but just ran into the second error. My research suggested an issue with Windows Search, so I turned it off (along with all background apps). Trying again (and again), I just ran into more similar issues and am unable to deploy anything. Has anyone had similar issues and found a way around them?
I finally worked it out, so in case anyone else encounters this issue, here is a summary. There seem to be two issues:
Don't create functions in your root folder. Create a specific folder for your serverless function i.e. not in C:\Users\nnnnnn> but within your regular document storage. In Windows 10 it works nicely if you use a OneDrive folder, with the benefit that your function(s) are also then replicated to other dev machines that you might use (and are automatically backed up offsite).
More importantly, the serverless framework seems to have an issue if you attempt to deploy to a region other than the default region set in your AWS CLI configuration. I've no idea why this should be, since the credentials I use with the AWS CLI are authorised for all regions. I also have no idea why the issue should result in serverless attempting to access a whole series of Windows files for which it has no authority, but nevertheless...
In my case, I primarily use the region ap-southeast-2. By default, sls create generates a serverless.yml using a default US region. If this is left as-is, there is then a mismatch between the deployment region and your AWS CLI region. Not good. To avoid the minor pain of having to specify a deployment region in the sls deploy command, just update the deployment region in the serverless.yml file to match the CLI region, as sketched below.
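A minimal sketch of that change (the service name and runtime are placeholders for illustration; the region value comes from the answer above):

service: my-service        # placeholder
provider:
  name: aws
  runtime: nodejs6.10      # placeholder for this era of the framework
  region: ap-southeast-2   # match the default region in your AWS CLI config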
Now works a treat...