Serverless Python packaging numpy dependency error

I have been running into issues when making function calls from my deployed Python 3.7 Lambda function. The error message indicates that numpy cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success. I am wondering what to test next or how to debug further.
I have tried the following:
Installed Docker, added the serverless-python-requirements plugin, and configured it in serverless.yml
Installed packages into the app directory to be bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
Uninstalled setuptools and numpy, then reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
"errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
"errorType": "Runtime.ImportModuleError"
}
Here is my setup:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api

plugins:
  - serverless-python-requirements
  - serverless-offline-python

custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true

provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}

functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true

package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor')  # Location of packages that follow

import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging

def auth(event, context):
    ...

Use a Lambda layer to package all your requirements, and make sure numpy is in your requirements.txt file. Try it once.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with this and give the function a reference to use that layer:
custom:
  pythonRequirements:
    layer: true

functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }

I checked with zipinfo .requirements.zip and found that macOS dylibs were packaged instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that packaging will not be triggered again if a .requirements.zip already exists in the working directory, so run
git clean -xfd before running sls deploy.
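For reference, a minimal sketch of the relevant custom block with that setting (the rest of serverless.yml stays as in the question):

custom:
  pythonRequirements:
    dockerizePip: non-linux  # build the requirements inside Docker only on non-Linux hosts
    slim: true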

Since you are using the serverless-python-requirements plugin, it packages the libraries for you. In other words, you don't need to run pip install -t src/vendor -r requirements.txt --no-cache-dir manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back and let serverless-python-requirements do the work for you.
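After that change, the top of data.py starts directly with the regular imports, for example:

import json
import pandas as pd
from requests_oauthlib import OAuth2Session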

Related

RAPIDS on Colab

I have always used the following commands to install RAPIDS on Colab (from https://colab.research.google.com/drive/1rY7Ln6rEE1pOlfSHCYOVaqt8OvDO35J0#forceEdit=true&offline=true&sandboxMode=true):
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/env-check.py
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
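# the next line restarts the Colab runtime (the notebook reconnects afterwards)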
os._exit(00)
import condacolab
condacolab.install()
import condacolab
condacolab.check()
# Installing RAPIDS is now 'python rapidsai-csp-utils/colab/install_rapids.py <release> <packages>'
# The <release> options are 'stable' and 'nightly'. Leaving it blank or adding any other words will default to stable.
!python rapidsai-csp-utils/colab/install_rapids.py stable
import os
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ['CONDA_PREFIX'] = '/usr/local'
It always worked, but lately I get:
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
CondaHTTPError: HTTP 403 FORBIDDEN for url <https://conda.anaconda.org/rapidsai/linux-64/ucx-1.11.2 gef2bbcf-cuda11.2_0.tar.bz2>
Elapsed: 00:00.358595
I have retried several times but it doesn't work. How can I solve it?

`conda search PKG --info` shows different dependencies than what conda wants to install?

I'm building a new conda environment using python=3.9 for the osx-arm64 architecture.
conda create -n py39 python=3.9 numpy
conda list
...
numpy 1.21.1 py39h1a24bff_2
...
python 3.9.7 hc70090a_1
So far so good: numpy=1.21.1 is the one I want. Now I want to add scipy, and the first one seems to fit the bill:
conda search scipy --info
scipy 1.7.1 py39h2f0f56f_2
--------------------------
file name : scipy-1.7.1-py39h2f0f56f_2.conda
name : scipy
version : 1.7.1
build : py39h2f0f56f_2
build number: 2
size : 14.8 MB
license : BSD 3-Clause
subdir : osx-arm64
url : https://repo.anaconda.com/pkgs/main/osx-arm64/scipy-1.7.1-py39h2f0f56f_2.conda
md5 : edbd5a5399e973d1d0325147b7118f79
timestamp : 2021-08-25 16:12:39 UTC
dependencies:
- blas * openblas
- libcxx >=12.0.0
- libgfortran 5.*
- libgfortran5 >=11.1.0
- libopenblas >=0.3.17,<1.0a0
- numpy >=1.19.5,<2.0a0
- python >=3.9,<3.10.0a0
In particular, python >=3.9 and numpy >=1.19 seem just right.
But when I try the install:
conda install scipy
...
The following packages will be DOWNGRADED:
numpy 1.21.1-py39h1a24bff_2 --> 1.19.5-py39habd9f23_3
(I have bumped into various constraints with numpy=1.19 (numba, pandas, ...) and am trying to avoid it.)
Why isn't the scipy package happy with the numpy=1.21 version I have?!
The only possible clue is that conda reports a different python version (3.8.11) than the v3.9 I specified for this environment:
conda info
active environment : py39
active env location : .../miniconda3/envs/py39
shell level : 1
user config file : .../.condarc
populated config files : .../.condarc
conda version : 4.11.0
conda-build version : not installed
python version : 3.8.11.final.0 <-------------------
virtual packages : __osx=12.1=0
...
but all the environment's pointers seem to be set correctly:
(py39) % which python
.../miniconda3/envs/py39/bin/python
(py39) % python
Python 3.9.7 (default, Sep 16 2021, 23:53:23)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Thanks, any hints as to what's broken will be greatly appreciated!
I now have things working, but I'm afraid I can't point to a satisfying "answer." Others (e.g. @merv) seem not to be having the same problems, and I can't identify the difference.
The one thing I did find that seemed to create issues in my install was what appears to be some mislabeling of the pandas package: pandas v1.3.5 carries a numpy==1.19.5 requirement, and that is the only way I've been able to push it through. I posted a pandas issue comment.
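One workaround worth trying (my suggestion, not something from the original post) is to pin numpy explicitly in the same install command so the solver cannot downgrade it:

conda install scipy "numpy>=1.21"

If there is a genuine conflict, conda will then report which package is forcing the older numpy instead of silently downgrading.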

Error using Brownie in VSCode while running scripts

I get the following ERROR when I try to run scripts with Brownie, using the following PowerShell command:
brownie run scripts/simple_collectible/deploy_simple
I have looked all over Stack Overflow and other pages for info on this and I can't seem to find much. I would really like to carry on with my project but I am stuck at this point. Any help would be wonderful.
Cheers!
INFO MESSAGE:
PS C:\Users\charl\OneDrive\Desktop\NFT Development\NFT-mix-main> brownie run scripts/simple_collectible/deploy_simple
INFO: Could not find files for the given pattern(s).
Brownie v1.17.2 - Python development framework for Ethereum
NftMixMainProject is the active project.
Launching 'ganache-cli.cmd --port 8545 --gasLimit 12000000 --accounts 10 --hardfork istanbul --mnemonic brownie'...
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli_main_.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli\run.py", line 46, in main
path, _ = _get_path(args[""])
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie\project\scripts.py", line 130, in _get_path
raise FileNotFoundError(f"Cannot find {path_str}")
FileNotFoundError: Cannot find scripts/simple_collectible/deploy_simple
Terminating local RPC client...
I have the following packages installed:
ganache-cli
pip
pipx
Brownie (installed through pipx, and initialized)
I have run the brownie command to make sure the install is good.
I have installed Python venv.
I have tried uninstalling all packages and reinstalling.
I have done the same with my VSCode and vsbuildtools.
I have done the same with Python itself (reinstalled from the website).
The code snippet for the script I am trying to run is here:
#!/usr/bin/python3
import os
from brownie import SimpleCollectible, accounts, config, network

def main():
    dev = accounts.add(config["wallets"]["from_key"])
    print(network.show_active())
    publish_source = True if os.getenv("ETHERSCAN_TOKEN") else False
    SimpleCollectible.deploy({"from": dev}, publish_source=publish_source)
And finally, for your reference, here are my brownie-config.yaml contents:
# exclude SafeMath when calculating test coverage
# https://eth-brownie.readthedocs.io/en/v1.10.3/config.html#exclude_paths
reports:
  exclude_contracts:
    - SafeMath
dependencies:
  - smartcontractkit/chainlink-brownie-contracts@1.1.1
  - OpenZeppelin/openzeppelin-contracts@3.4.0
compiler:
  solc:
    remappings:
      - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'
      - '@openzeppelin=OpenZeppelin/openzeppelin-contracts@3.4.0'
# automatically fetch contract sources from Etherscan
autofetch_sources: True
dotenv: .env
# set a custom mnemonic for the development network
networks:
  default: development
  kovan:
    vrf_coordinator: '0xdD3782915140c8f3b190B5D67eAc6dc5760C46E9'
    link_token: '0xa36085F69e2889c224210F603D836748e7dC0088'
    keyhash: '0x6c3699283bda56ad74f6b855546325b68d482e983852a7a82979cc4807b641f4'
    fee: 100000000000000000
    oracle: '0x2f90A6D021db21e1B2A077c5a37B3C7E75D15b7e'
    jobId: '29fa9aa13bf1468788b7cc4a500a45b8'
    eth_usd_price_feed: '0x9326BFA02ADD2366b30bacB125260Af641031331'
  rinkeby:
    vrf_coordinator: '0xb3dCcb4Cf7a26f6cf6B120Cf5A73875B7BBc655B'
    link_token: '0x01be23585060835e02b77ef475b0cc51aa1e0709'
    keyhash: '0x2ed0feb3e7fd2022120aa84fab1945545a9f2ffc9076fd6156fa96eaff4c1311'
    fee: 100000000000000000
    oracle: '0x7AFe1118Ea78C1eae84ca8feE5C65Bc76CcF879e'
    jobId: '6d1bfe27e7034b1d87b5270556b17277'
    eth_usd_price_feed: '0x8A753747A1Fa494EC906cE90E9f37563A8AF630e'
  mumbai:
    eth_usd_price_feed: '0x0715A7794a1dc8e42615F059dD6e406A6594651A'
  binance:
    # link_token: ??
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  binance-fork:
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  mainnet-fork:
    eth_usd_price_feed: '0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419'
  matic-fork:
    eth_usd_price_feed: '0xF9680D99D6C9589e2a93a78A04A279e509205945'
wallets:
  from_key: ${PRIVATE_KEY}
  from_mnemonic: ${MNEMONIC}
# You'd have to change accounts.add to accounts.from_mnemonic to use from_mnemonic
Change scripts/simple_collectible/deploy_simple to scripts/simple_collectible/deploy_simple.py.
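That is, the command becomes:

brownie run scripts/simple_collectible/deploy_simple.py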

Version mismatch: this is the 'cffi' package - RAPIDS on Colab

# Install RAPIDS
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh stable

import sys, os, shutil

sys.path.append('/usr/local/lib/python3.7/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ["CONDA_PREFIX"] = "/usr/local"

for so in ['cudf', 'rmm', 'nccl', 'cuml', 'cugraph', 'xgboost', 'cuspatial']:
    fn = 'lib' + so + '.so'
    source_fn = '/usr/local/lib/' + fn
    dest_fn = '/usr/lib/' + fn
    if os.path.exists(source_fn):
        print(f'Copying {source_fn} to {dest_fn}')
        shutil.copyfile(source_fn, dest_fn)

# fix for BlazingSQL import issue
# ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /usr/local/lib/python3.7/site-packages/../../libblazingsql-engine.so)
if not os.path.exists('/usr/lib64'):
    os.makedirs('/usr/lib64')
for so_file in os.listdir('/usr/local/lib'):
    if 'libstdc' in so_file:
        shutil.copyfile('/usr/local/lib/' + so_file, '/usr/lib64/' + so_file)
        shutil.copyfile('/usr/local/lib/' + so_file, '/usr/lib/x86_64-linux-gnu/' + so_file)
I'm successfully able to install RAPIDS using the above script, but I simply can't get rid of the following error:
Exception: Version mismatch: this is the 'cffi' package version 1.14.5, located in '/usr/local/lib/python3.7/dist-packages/cffi/api.py'. When we import the top-level '_cffi_backend' extension module, we get version 1.14.3, located in '/usr/local/lib/python3.7/site-packages/_cffi_backend.cpython-37m-x86_64-linux-gnu.so'. The two versions should be equal; check your installation.
I have tried everything from the related answers I could find: upgraded, downgraded, uninstalled, reinstalled, but nothing works. Any help would be greatly appreciated.

What is the correct syntax/call to get the sls file working? - salt

I'm trying to build a reactor sls file which starts running when an event occurs.
The content of the sls file should do the same as the following CLI commands:
sudo salt minion git.add /srv/salt .
sudo salt minion git.commit /srv/salt test
sudo salt minion git.push /srv/salt origin master identity=/home/autogit/.ssh/id_rsa
If I run the code below, triggered by the reactor, I get the following error message:
[DEBUG ] Reactor is populating module client cache
[ERROR ] An un-handled exception from the multiprocessing process 'Reactor-9:1' was caught:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/process.py", line 765, in _run
return self._original_run()
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 271, in run
self.call_reactions(chunks)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 228, in call_reactions
self.wrap.run(chunk)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 330, in run
self.populate_client_cache(low)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 324, in populate_client_cache
self.reaction_class[reaction_type](self.opts['conf_file'])
KeyError: u'module'
[CRITICAL] Engine 'reactor' could not be started!
I've tried different syntax (old style and new style) but couldn't figure out what the problem is; I always get KeyError: u'module' or u'git'.
I also tried it with a runner function to run it locally on the master.
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
    - git.add:
      - cwd: /srv/salt
      - filename: .
    - git.commit:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
    - git.push:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
salt --versions-report
Salt Version:
Salt: 2019.2.0
Dependency Versions:
cffi: Not Installed
cherrypy: unknown
dateutil: 2.6.1
docker-py: Not Installed
gitdb: 2.0.3
gitpython: 2.1.8
ioflo: Not Installed
Jinja2: 2.10
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
python-gnupg: 0.4.1
PyYAML: 3.12
PyZMQ: 16.0.2
RAET: Not Installed
smmap: 2.0.3
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.2.5
System Versions:
dist: Ubuntu 18.04 bionic
locale: UTF-8
machine: x86_64
release: 4.15.0-46-generic
system: Linux
version: Ubuntu 18.04 bionic
Since I'm quite new to Salt, hopefully you can give me a hint as to what I'm doing wrong.
You didn't provide the master config.
About the module.run confusion: add the following to your settings (the minion's, and maybe the master's too, since I don't know your use case):
use_superseded:
  - module.run
That will enable your syntax; more doc about this is here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
In general: you are executing execution modules from a place where only state modules are allowed (the term module is heavily overused in Salt...).
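For contrast, a sketch of the same call in the legacy module.run syntax (what works without use_superseded); this is an illustration built from the question's arguments, not a tested config:

git pull:
  module.run:
    - name: git.pull
    - cwd: /srv/salt
    - remote: git@git.xyz.com:user/sbt.git
    - identity: /home/autogit/.ssh/id_rsa

Note that the legacy form runs a single function per state, so each git step would need its own state block.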
You didn't provide the full master config. The Reactor requires dedicated config to match events to sls files:
https://docs.saltstack.com/en/latest/ref/configuration/master.html#master-reactor-settings
You can also check the doc I've written some time ago about events and reactors:
https://github.com/kiemlicz/util/wiki/Salt-Events-and-Reactor
Assuming you've configured your event-to-sls-file matching in the master config, your provided sls:
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/
...
will not work.
Mind that the reaction happens on the Salt master, so the reaction sls file needs to specify the type of reaction (local, runner, etc.), since it is no longer the 'view of one minion' but possibly of tons of minions!
First create a runner reaction type, which delegates to some orchestration sls file that will contain your logic wrapped with (I think) salt.function.
Help yourself with the aforementioned GitHub link to my attempt at explaining the Reactor.
Refer to the official doc as well: https://docs.saltstack.com/en/latest/topics/reactor/index.html
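To make that concrete, here is a minimal sketch of the whole chain; the event tag, file paths, and minion target are assumptions for illustration, not taken from the question:

# /etc/salt/master.d/reactor.conf - map an event tag to a reactor sls file
reactor:
  - 'myorg/git/sync':
      - /srv/reactor/git_sync.sls

# /srv/reactor/git_sync.sls - runner reaction type, delegating to orchestration
git_sync:
  runner.state.orchestrate:
    - args:
        mods: orch.git_sync

# /srv/salt/orch/git_sync.sls - wrap the execution module calls with salt.function
git_add:
  salt.function:
    - name: git.add
    - tgt: minion
    - arg:
        - /srv/salt
        - .

The git.commit and git.push calls would follow the same salt.function pattern, each as its own state.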