My .zip file is 45 MB but AWS reports a file-size error for my layer - serverless-framework

I am using the Serverless Framework for deployment. It throws the following error when deploying to AWS, even though my zip file is 45 MB and the unzipped size is 130 MB locally.
Serverless Error ----------------------------------------
An error occurred: SharedLambdaLayer - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 27f9378e-b9ea-42c5-ad73-a3b7cf9d584c).
This is my environment:
Operating System: win32
Node Version: 12.19.0
Framework Version: 2.35.0
Plugin Version: 4.5.3
SDK Version: 4.2.2
Components Version: 3.8.2
Following is my .yml file content:
service: rxd-layers
frameworkVersion: '2'
useDotenv: true
unresolvedVariablesNotificationMode: error
configValidationMode: error
plugins:
  - serverless-plugin-git-variables
  - serverless-dotenv-plugin
custom:
  stageVariables:
    gitBranch: ${opt:stage, git:branch}
package:
  include:
    - /nodejs/node_modules/shared # no need to add this yourself, this plugin does it for you
  exclude:
    - /nodejs/node_modules/**
    - /nodejs/shared/**
provider:
  stage: ${opt:stage, git:branch}
  name: aws
  runtime: nodejs12.x
  region: ${env:AWS_REGION_CRED, 'us-east-1'}
  versionFunctions: true
  lambdaHashingVersion: 20201221
layers:
  shared:
    path: shared
    description: This layer is for node packages of all services
resources:
  Outputs:
    SharedLayerExport:
      Value:
        Ref: SharedLambdaLayer
      Export:
        Name: SharedLambdaLayer

This was due to the geo-tz library. On the Linux environment on AWS, geo-tz alone pushed the unzipped size to more than 255 MB, which was the main problem. So I just uninstalled the package, and after that my layer deployed correctly.
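If you hit the same limit, a quick way to find the offending package is to measure the installed size of each module inside the layer directory (a sketch, assuming the shared layer layout from the serverless.yml above):
# List the ten largest packages bundled into the layer
du -sh shared/nodejs/node_modules/* | sort -rh | head -n 10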

Related

Trivy on EKS unable to scan any images

I am trying to scan all images deployed on an EKS cluster that I am setting up for high security (it will be deployed to a classified IL5 environment). Kubernetes v1.23; all worker nodes run Bottlerocket OS.
I expect images to be scanned and available in the VulnerabilityReports CRD.
I was able to successfully install Falco on the cluster (it uses containerd). However, when deploying the official Helm chart (0.6.0-rc3), the scan-vulnerability containers start and then immediately error out. I set this environment variable on the trivy-operator deployment:
- name: CONTAINER_RUNTIME_ENDPOINT
  value: /run/containerd/containerd.sock
Output of run with -debug:
{
  "level": "error",
  "ts": 1668286646.865245,
  "logger": "reconciler.vulnerabilityreport",
  "msg": "Scan job container",
  "job": "trivy-system/scan-vulnerabilityreport-74f54b6cd",
  "container": "discovery",
  "status.reason": "Error",
  "status.message": "2022-11-12T20:57:13.674Z\t\u001b[31mFATAL\u001b[0m\timage scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:\n\t* unable to inspect the image (023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/docker.io/istio/pilot:1.15.2): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\t* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory\n\t* containerd socket not found: /run/containerd/containerd.sock\n\t* GET https://023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/v2/docker.io/istio/pilot/manifests/1.15.2: unexpected status code 401 Unauthorized: Not Authorized\n\n\n\n",
  "stacktrace": "github.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).processFailedScanJob\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:551\ngithub.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).reconcileJobs.func1\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:376\nsigs.k8s.io/controller-runtime/pkg/reconcile.Func.Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/reconcile/reconcile.go:102\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:234"
}
I confirmed that Bottlerocket uses containerd, as /run/containerd/containerd.sock is specified in my Falco deployment. Even when I mount this as a volume onto the pod and set CONTAINER_RUNTIME_ENDPOINT to this path, I get the same error.
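For reference, the mount looked roughly like this in the pod spec (a sketch; the volume and container names are mine):
volumes:
  - name: containerd-sock
    hostPath:
      path: /run/containerd/containerd.sock
      type: Socket
containers:
  - name: trivy-operator   # placeholder container name
    volumeMounts:
      - name: containerd-sock
        mountPath: /run/containerd/containerd.sock
        readOnly: true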
Edit
I added the following security context:
seLinuxOptions:
  user: system_u
  role: system_r
  type: control_t
  level: s0-s0:c0.c1023
Initially I mounted dockershim.sock from the host into the pod, then realized that was not necessary; the error messages were a little misleading, and it was really an ECR authentication issue. Furthermore, the SELinux flags needed to be specified at the pod level, not the container level.
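For reference, a minimal sketch of where the options belong; spec.securityContext at the pod level applies them to every container (the name and image below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: trivy-scan-example          # placeholder
spec:
  securityContext:                  # pod level, not containers[].securityContext
    seLinuxOptions:
      user: system_u
      role: system_r
      type: control_t
      level: s0-s0:c0.c1023
  containers:
    - name: scanner
      image: example/scanner:latest # placeholder image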

Serverless python packaging numpy dependency error

I have been running into issues when making function calls from my deployed Python 3.7 Lambda function that, judging by the error message, are related to numpy. The error states that the package cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success. I am wondering what to test next or how to debug further.
I have tried the following:
Installed Docker, added the serverless-python-requirements plugin, and configured it in the yml
Installed the packages in the app directory so they are bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
Uninstalled setuptools and numpy and reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
  "errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
  "errorType": "Runtime.ImportModuleError"
}
Provided is my setup:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api
plugins:
  - serverless-python-requirements
  - serverless-offline-python
custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging
def auth(event, context):
    ...
Use a Lambda layer to package all your requirements. Make sure you have numpy in the requirements.txt file.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with the following and give the function a reference to use that layer:
custom:
  pythonRequirements:
    layer: true
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
I checked with zipinfo .requirements.zip and found that macOS dylibs were being packaged instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that this will not be triggered if a .requirements.zip already exists in the working directory, so run git clean -xfd before running sls deploy.
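For reference, the relevant plugin setting in serverless.yml (documented by serverless-python-requirements; non-linux builds the requirements inside Docker only when you are not already on Linux):
custom:
  pythonRequirements:
    dockerizePip: non-linux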
Since you are using the serverless-python-requirements plugin, it will package the libraries for you. In other words, you don't need to run pip install -t src/vendor -r requirements.txt --no-cache-dir manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back, and let serverless-python-requirements do the work for you.

How does Dask execute code on multiple VMs in the cloud

I wrote a program with dask and delayed, and now I want to run it on several machines in the cloud. But there's one thing I don't understand: how does Dask run the code on multiple machines in the cloud if those machines don't have all the code's dependencies?
When running on multiple machines, Dask workers must have access to all required dependencies in order to run your code.
You have labelled your question with dask-kubernetes so I'll use that as an example. By default dask-kubernetes uses the daskdev/dask Docker image to run your workers. This image contains Python and the minimal dependencies to run Dask distributed.
If your code requires an external dependency you must ensure this is installed in the image. The Dask docker image supports installing extra packages at runtime by setting either the EXTRA_APT_PACKAGES, EXTRA_CONDA_PACKAGES or EXTRA_PIP_PACKAGES environment variables.
# worker-spec.yml
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
    - image: daskdev/dask:latest
      imagePullPolicy: IfNotPresent
      args: [dask-worker, --nthreads, '2', --no-dashboard, --memory-limit, 6GB, --death-timeout, '60']
      name: dask
      env:
        - name: EXTRA_APT_PACKAGES
          value: packagename # Some package to install with `apt install`
        - name: EXTRA_PIP_PACKAGES
          value: packagename # Some package to install with `pip install`
        - name: EXTRA_CONDA_PACKAGES
          value: packagename # Some package to install with `conda install`
      resources:
        limits:
          cpu: "2"
          memory: 6G
        requests:
          cpu: "2"
          memory: 6G
from dask_kubernetes import KubeCluster
cluster = KubeCluster.from_yaml('worker-spec.yml')
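Continuing from the two lines above, a minimal sketch of scaling the cluster and pointing computations at it (standard dask_kubernetes and dask.distributed APIs):
from dask.distributed import Client

cluster.scale(3)          # launch three workers from the pod spec above
client = Client(cluster)  # subsequent Dask computations run on those workers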
The downside of this is that the packages must be installed every time a worker starts, which can make adaptive scaling slow. Alternatively, you can create your own Docker image with all your dependencies already installed, publish it to Docker Hub, and use that image in your configuration instead (a Dockerfile sketch follows the pod spec below).
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
    - image: me/mycustomimage:latest
      imagePullPolicy: IfNotPresent
      args: [dask-worker, --nthreads, '2', --no-dashboard, --memory-limit, 6GB, --death-timeout, '60']
      name: dask
      resources:
        limits:
          cpu: "2"
          memory: 6G
        requests:
          cpu: "2"
          memory: 6G
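A minimal Dockerfile for such a custom image might look like this (a sketch; packagename is a placeholder, matching the variables above):
FROM daskdev/dask:latest
# Bake dependencies into the image so workers start without installing anything
RUN pip install --no-cache-dir packagename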

Spinnaker: how to bring custom BOMs into the Spinnaker pod to be able to deploy it with hal?

I would like to provide a custom BOM (Bill of Materials) to Spinnaker so that I can configure the repos according to my needs. I am new to k8s, Helm, and Spinnaker, and although I have read the docs, some things are not entirely clear to me...
Problem
I do not have access to the GCS store, nor do I have direct access to external repos, so I need to configure the artifactSources accordingly.
According to the docs, I should provide a custom BOM in a way that it can be read from the filesystem, i.e. inside the spinnaker-spinnaker-halyard-0 container.
My steps
Prepare kubernetes cluster
I do the initial deployment with Helm, something like:
helm install stable/spinnaker --name=spinnaker --namespace=spinnaker -f values.yml
Afterwards I connect to the spinnaker-spinnaker-halyard-0 pod:
$ kubectl.exe exec -it spinnaker-spinnaker-halyard-0 -n spinnaker bash
Deploy Spinnaker
According to the docs here and here, I would do something like:
$ hal config version edit --version local:1.11.6
$ hal deploy apply
This fails, obviously, because there is no local BOM file:
Problems in Global:
! ERROR Unable to retrieve the Spinnaker bill of materials for
version "local:1.11.6": /home/spinnaker/.hal/.boms/bom/1.11.6.yml (No such file
or directory)
- Failed to prep Spinnaker deployment
Question: How to provide custom BOM?
According to the docs, BOMs are expected to be in a specific directory and structure: ${HALCONFIG_DIR}/.boms/bom/${VERSION}.yml
So how do I get my custom BOM there? When I look at the Helm chart, I don't see (or understand) how I could do that, e.g. via additional-config-maps.
I found a way, but it is done manually, working inside the spinnaker-spinnaker-halyard container. I'm sure there is a better way...
I add a custom BOM as an additionalConfigMap to the values.yml file:
...
additionalConfigMaps:
  create: true
  data:
    # https://storage.googleapis.com/halconfig/bom
    bom_1.12.4.yml: |
      version: 1.12.4
      timestamp: '2019-03-01 08:06:24'
      services:
        echo:
          version: 2.3.1-20190214121429
          commit: 5db9d437ca7f2fa374dcada17f77bbbb2965bd67
        clouddriver:
          version: 4.3.4-20190301030607
          commit: b5539c47aad309e24428abb8f8303aff45323b43
        deck:
          version: 2.7.4-20190228030607
          commit: dccdd730886a6beb0388e3622581d8da1ed8edbb
        fiat:
          version: 1.3.2-20190128153726
          commit: daf21b24330a5f22866601559aa0f7ac99590274
        front50:
          version: 0.15.2-20190222161456
          commit: 3105e86b8c084ad6ad78507e3a5e5a427f290b99
        gate:
          version: 1.5.2-20190301030607
          commit: b238ab993ab25381ce907260879548ed74a4953f
        igor:
          version: 1.1.1-20190213190226
          commit: 63d06a5c5d55f07443dd60a81035b35cf96238e7
        kayenta:
          version: 0.6.1-20190221030610
          commit: 81d906bf8307143f40fe88f8554baa318de25ef1
        orca:
          version: 2.3.1-20190220030610
          commit: bad45e78566449117b678a3317552cf53d0dd14e
        rosco:
          version: 0.9.0-20190123170846
          commit: 42f81a2501de6d40676d47661579a6106b5c3e8a
        defaultArtifact: {}
        monitoring-third-party:
          version: 0.11.2-20190222030609
          commit: 232c84a8a87cecbc17f157dd180643a8b2e6067a
        monitoring-daemon:
          version: 0.11.2-20190222030609
          commit: 232c84a8a87cecbc17f157dd180643a8b2e6067a
      dependencies:
        redis:
          version: 2:2.8.4-2
        consul:
          version: 0.7.5
        vault:
          version: 0.7.0
      artifactSources:
        debianRepository: https://nexus.intra/repository/spinnaker-releases/
        dockerRegistry: nexus.intra:5000/spinnaker-marketplace
        googleImageProject: marketplace-spinnaker-release
        gitPrefix: https://scm.intra/scm/SPIN/repos
Then link the custom BOM in the halyard container, configure the version, and run the deployment:
mkdir -p ~/.hal/.boms/bom
ln -s /opt/halyard/additionalConfigMaps/bom_1.12.4.yml ~/.hal/.boms/bom/1.12.4.yml
hal config version edit --version local:1.12.4
hal deploy apply
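Before running hal deploy apply, you can sanity-check that Halyard resolves the local BOM. As far as I know, hal version bom prints the BOM for a given version, though this may vary by Halyard release:
# Confirm the symlink resolves inside the halyard container
ls -l ~/.hal/.boms/bom/1.12.4.yml
# Ask Halyard to print the BOM it resolves for this version
hal version bom 1.12.4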

Travis PR failed, push passed

The branch was previously functional, then it was merged to master and the builds on master failed. Master was reverted, then master was merged into this branch and some fixes were made. When attempting to merge back to master, the build failed again with the following error. The push passed; the PR failed.
* What went wrong:
Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not find com.squareup.leakcanary:leakcanary-android:1.5.4.
The .travis.yml file:
sudo: false
language: android
android:
  components:
    - build-tools-27.0.2
    - android-27
    - sys-img-armeabi-v7a-android-27
jdk:
  - oraclejdk8
before_install:
  - yes | sdkmanager "platforms;android-27"
  - chmod +x gradlew
# First app is built then unit tests are run
jobs:
  include:
    - stage: build
      async: true
      script: ./gradlew assemble
    - stage: test
      async: true
      script: ./gradlew -w runUnitTests
notifications:
  email:
    recipients:
      - email@me.com
    on_success: always # default: change
    on_failure: always # default: always
There was a Maven repo outage today and I faced the same issue. Hours later, I found that the failed Travis job was working fine again. Do check it on your side.
Also, whenever classpath dependencies are missing, you should check the build.gradle file rather than the .travis.yml file.
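One quick way to tell an outage apart from a misconfiguration is to request the artifact directly from the repository, following the standard Maven layout (a sketch; the URL assumes jcenter hosts the artifact):
# Prints the HTTP status line: 200 means the repo serves the artifact,
# while 4xx/5xx points at an outage or a genuinely missing dependency.
curl -sI https://jcenter.bintray.com/com/squareup/leakcanary/leakcanary-android/1.5.4/leakcanary-android-1.5.4.pom | head -n 1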
The failure message says that the app:debugCompileClasspath task fails when looking for com.squareup.leakcanary:leakcanary-android:1.5.4 (JAR or AAR). Gradle allows you to define the repositories at the root level:
allprojects {
    repositories {
        jcenter() // shorthand that points to https://jcenter.bintray.com/
    }
}
So it will look in the following places for the class files or JAR/AAR:
Name: $ANDROID_HOME/extras/m2repository; url: file:/$ANDROID_HOME/extras/m2repository/
Name: $ANDROID_HOME/extras/google/m2repository; url: file:/$ANDROID_HOME/extras/google/m2repository/
Name: $ANDROID_HOME/extras/android/m2repository; url: file:/$ANDROID_HOME/extras/android/m2repository/
Name: BintrayJCenter; url: https://jcenter.bintray.com/
If the dependency is not found in any of these, resolution fails with the error mentioned above.
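If a single repository is flaky, declaring an additional one gives Gradle a fallback to try in order; for example (a sketch):
allprojects {
    repositories {
        jcenter()        // primary
        mavenCentral()   // fallback when jcenter is unavailable
    }
}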