How to exclude files from the packaging in serverless.yaml? - serverless-framework

I have the following configuration in my serverless.yaml file:
provider:
  package:
    exclude:
      - ./**
    include:
      - src/**
But anyway, all the folders in my root are still being included in the service .zip file.
What am I missing here?

Move the package block outside of provider:
provider:
  # ... rest of the provider config stays here

package:
  exclude:
    - ./**
  include:
    - src/**
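Note that on newer Framework versions (v3 and later), include/exclude were replaced by package.patterns, where a leading ! negates a glob. A rough equivalent:
package:
  patterns:
    - '!./**'   # exclude everything by default
    - 'src/**'  # then add src back in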
And if you have multiple lambdas in the same service, you can add a package block per function, like so:
functions:
  Function1:
    handler: functions_folder/Function1.handler
    package:
      include:
        - functions_folder/Function1.js
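Per-function package settings like this are only applied when functions are packaged individually; a minimal sketch of the flag, set at the service level (it can also be set per function):
package:
  individually: true  # package each function as its own zip so per-function include/exclude apply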

Related

Error: Rollup failed to resolve import "vue-router" when vite build in bitbucket

bitbucket-pipelines.yml:
image: node:16
pipelines:
  branches:
    master:
      - step:
          name: install
          caches:
            - node
          script:
            - npm i
      - step:
          name: build
          caches:
            - node
          script:
            - npm run build
          artifacts: # defining the artifacts to be passed to each future step
            - dist/**
      - step:
          name: Deploy to Serve
          deployment: Production
          script:
            - pipe: atlassian/sftp-deploy:0.5.5
              variables:
                USER: $FTP_USERNAME
                SERVER: $FTP_HOST
                REMOTE_PATH: $FTP_SITE_ROOT
                PASSWORD: $SFTP_PASSWORD
                LOCAL_PATH: $BITBUCKET_CLONE_DIR/dist/*
                DEBUG: 'true'
Error information:
[vite]: Rollup failed to resolve import "vue-router" from "src/router/index.js".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
build.rollupOptions.external
error during build:
Error: [vite]: Rollup failed to resolve import "vue-router" from "src/router/index.js".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
build.rollupOptions.external
at onRollupWarning (/opt/atlassian/pipelines/agent/build/node_modules/vite/dist/node/chunks/dep-e0fe87f8.js:43253:19)
at onwarn (/opt/atlassian/pipelines/agent/build/node_modules/vite/dist/node/chunks/dep-e0fe87f8.js:43037:13)
at Object.onwarn (/opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:23003:13)
at ModuleLoader.handleResolveId (/opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:22347:26)
at /opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:22319:26

My .zip file size is 45 MB but AWS is showing a large-file-size error for the layer

I am using the Serverless Framework for deployment. It throws the following error while deploying to AWS, even though my zip file size is 45 MB and the unzipped size is 130 MB locally.
Serverless Error ----------------------------------------
An error occurred: SharedLambdaLayer - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 27f9378e-b9ea-42c5-ad73-a3b7cf9d584c).
This is my environment
Operating System: win32
Node Version: 12.19.0
Framework Version: 2.35.0
Plugin Version: 4.5.3
SDK Version: 4.2.2
Components Version: 3.8.2
Following is my .yml file content:
service: rxd-layers
frameworkVersion: '2'
useDotenv: true
unresolvedVariablesNotificationMode: error
configValidationMode: error
plugins:
  - serverless-plugin-git-variables
  - serverless-dotenv-plugin
custom:
  stageVariables:
    gitBranch: ${opt:stage, git:branch}
package:
  include:
    - /nodejs/node_modules/shared # no need to add this yourself, this plugin does it for you
  exclude:
    - /nodejs/node_modules/**
    - /nodejs/shared/**
provider:
  stage: ${opt:stage, git:branch}
  name: aws
  runtime: nodejs12.x
  region: ${env:AWS_REGION_CRED, 'us-east-1'}
  versionFunctions: true
  lambdaHashingVersion: 20201221
layers:
  shared:
    path: shared
    description: This layer is for node packages of all services
resources:
  Outputs:
    SharedLayerExport:
      Value:
        Ref: SharedLambdaLayer
      Export:
        Name: SharedLambdaLayer
This was due to the geo-tz library. It was creating an unzipped size of almost 255 MB for geo-tz alone in my Linux environment on AWS (the 262144000-byte limit in the error is AWS's 250 MB cap on the unzipped size of a function plus all its layers), and that was the main problem. So I just uninstalled this package, and after that my layer deployed correctly.

Serverless python packaging numpy dependency error

I have been running into issues when making function calls to my deployed Python 3.7 Lambda function that, from the error message, are related to numpy. The error states that the package cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success. I am wondering what to test next or how to debug further.
I have tried the following:
- Installed Docker, added the serverless-python-requirements plugin, and configured it in the yml
- Installed the packages in the app directory so they are bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
- Uninstalled setuptools and numpy and reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
"errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
"errorType": "Runtime.ImportModuleError"
}
Provided is my setup:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api
plugins:
  - serverless-python-requirements
  - serverless-offline-python
custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging
def auth(event, context):
...
Use a Lambda layer to pack all your requirements, and make sure you have numpy in the requirements.txt file. Try it once.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with this and give the functions a reference to that layer:
custom:
  pythonRequirements:
    layer: true
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
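As far as I can tell, PythonRequirementsLambdaLayer is the CloudFormation logical ID the plugin assigns to the generated layer when layer: true is set, which is why the Ref resolves to it.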
I checked with zipinfo .requirements.zip and found that macOS dylibs were packaged instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that repackaging will not be triggered if a .requirements.zip already exists in the working dir, so run git clean -xfd before running sls deploy.
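In the question's config, that is a one-line change in the custom block (a sketch; non-linux makes the plugin build in Docker only when you are not already on Linux):
custom:
  pythonRequirements:
    dockerizePip: non-linux  # build Linux-compatible wheels in Docker when developing on macOS/Windows
    slim: true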
Since you are using the serverless-python-requirements plugin, it will package the libraries for you. In other words, you don't need to do pip install -t src/vendor -r requirements.txt --no-cache-dir and all that stuff manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back, and let serverless-python-requirements do the work for you.

How to include a folder but exclude some files inside it when packaging for serverless?

I want to include node_modules, but exclude the .bin dir, and the .cache and .yarn-integrity files since they take up space on the lambda.
exclude:
  - ./**
  - '!node_modules/**'
  - node_modules/.cache
  - node_modules/.bin
  - node_modules/.yarn-integrity
Likewise, I would like to include the server folder, but exclude the tests and eslint config files:
exclude:
  - ./**
  - '!server/**'
  - server/**/*.test.js
  - server/.eslintrc.js
But neither works, and the files are not excluded. What's the correct way to do this?
You can include the node_modules/ dir while excluding the node_modules/.bin dir like this:
package:
  exclude:
    - node_modules/.bin/**
By default only these directories are excluded:
.git/**
.gitignore
.DS_Store
npm-debug.log
.serverless/**
.serverless_plugins/**
So you do not need to specify that node_modules/ and server/ are to be included - they will be by default. Just specify which sub-directories and files inside them you want to exclude.
Source: https://serverless.com/framework/docs/providers/aws/guide/packaging/
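Applied to the files in the question, a hedged sketch (note the /** globs on the directories):
package:
  exclude:
    - node_modules/.bin/**
    - node_modules/.cache/**
    - node_modules/.yarn-integrity
    - server/**/*.test.js
    - server/.eslintrc.js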

Travis pr failed, push passed

The branch was previously functional, then it was merged to master and the builds on master failed. Master was reverted, then master was merged into this branch and some fixes were made. When attempting to merge back to master, the build failed again with the following error. The push passed, but the PR failed.
* What went wrong:
Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not find com.squareup.leakcanary:leakcanary-android:1.5.4.
The .travis.yml file:
sudo: false
language: android
android:
  components:
    - build-tools-27.0.2
    - android-27
    - sys-img-armeabi-v7a-android-27
jdk:
  - oraclejdk8
before_install:
  - yes | sdkmanager "platforms;android-27"
  - chmod +x gradlew
# First the app is built, then unit tests are run
jobs:
  include:
    - stage: build
      async: true
      script: ./gradlew assemble
    - stage: test
      async: true
      script: ./gradlew -w runUnitTests
notifications:
  email:
    recipients:
      - email#me.com
    on_success: always # default: change
    on_failure: always # default: always
There was a Maven repo outage today and I faced the same issue. Hours later, I found that the failed Travis job was working fine again. Do check it on your side.
Also, for any scenario where classpath dependencies are missing, check the build.gradle file rather than the .travis.yml file.
The failure message says that the app:debugCompileClasspath task is failing when looking for com.squareup.leakcanary:leakcanary-android:1.5.4 (jar or AAR). Gradle allows you to define the repositories at the root level:
allprojects {
    repositories {
        jcenter() // points to https://jcenter.bintray.com/
    }
}
So it will look into the following places for the class files or jar file.
$ANDROID_HOME/extras/m2repository/
$ANDROID_HOME/extras/google/m2repository/
$ANDROID_HOME/extras/android/m2repository/
BintrayJCenter: https://jcenter.bintray.com/
If the dependency is not found in any of them, resolution fails with the error mentioned above.