I have the following Celery configuration for my Django project hosted on Heroku (deployed via git):
import os
import ssl

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'App.settings')

app = Celery('App')
app.conf.timezone = 'Europe/London'
app.config_from_object('django.conf:settings')

# Require TLS certificate verification for both the broker and the result backend
app.conf.update(
    BROKER_URL=str(os.getenv('REDIS_URL')),
    CELERY_RESULT_BACKEND=str(os.getenv('REDIS_URL')),
    broker_use_ssl={
        'ssl_cert_reqs': ssl.CERT_REQUIRED
    },
    redis_backend_use_ssl={
        'ssl_cert_reqs': ssl.CERT_REQUIRED
    }
)
However, when I run Celery I get the following error in the log:
[ERROR/MainProcess] consumer: Cannot connect to rediss://****************//: Error 1 connecting to *************. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
I can fix this by setting 'ssl_cert_reqs': ssl.CERT_NONE, however I then receive the following warning -
Setting ssl_cert_reqs=CERT_NONE when connecting to redis means that celery will not validate the identity of the redis broker when connecting. This leaves you vulnerable to man in the middle attacks.
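For reference, that workaround is just the same config with CERT_NONE swapped in (a sketch of what I mean, not something I want to keep):

import os
import ssl

from celery import Celery

app = Celery('App')
app.conf.update(
    BROKER_URL=str(os.getenv('REDIS_URL')),
    CELERY_RESULT_BACKEND=str(os.getenv('REDIS_URL')),
    # Disables certificate verification - this is what triggers the warning above
    broker_use_ssl={'ssl_cert_reqs': ssl.CERT_NONE},
    redis_backend_use_ssl={'ssl_cert_reqs': ssl.CERT_NONE},
)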
Does anyone know how I can solve this so that I can keep using SSL with Celery and protect my project going forward?
Have you tried updating your SSL certificates? For Unix, running something like this script should do it (found on GitHub).
import os
import os.path
import ssl
import stat
import subprocess
import sys

STAT_0o775 = ( stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
             | stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP
             | stat.S_IROTH | stat.S_IXOTH )

def main():
    openssl_dir, openssl_cafile = os.path.split(
        ssl.get_default_verify_paths().openssl_cafile)

    print(" -- pip install --upgrade certifi")
    subprocess.check_call([sys.executable,
        "-E", "-s", "-m", "pip", "install", "--upgrade", "certifi"])

    import certifi

    # change working directory to the default SSL directory
    os.chdir(openssl_dir)
    relpath_to_certifi_cafile = os.path.relpath(certifi.where())

    print(" -- removing any existing file or link")
    try:
        os.remove(openssl_cafile)
    except FileNotFoundError:
        pass

    print(" -- creating symlink to certifi certificate bundle")
    os.symlink(relpath_to_certifi_cafile, openssl_cafile)

    print(" -- setting permissions")
    os.chmod(openssl_cafile, STAT_0o775)

    print(" -- update complete")

if __name__ == '__main__':
    main()
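If updating the system certificate store isn't practical (for example on a Heroku dyno), a related sketch is to keep CERT_REQUIRED but point Celery's Redis SSL options at certifi's CA bundle via ssl_ca_certs, which is one of the keys the Redis transport accepts alongside ssl_cert_reqs. Note this only helps if the broker presents a certificate that chains to a CA in that bundle; a self-signed certificate would still fail verification.

import os
import ssl

import certifi  # pip install certifi
from celery import Celery

app = Celery('App')

# Sketch: keep verification enabled, but verify against certifi's
# up-to-date CA bundle instead of the system store.
ssl_options = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED,
    'ssl_ca_certs': certifi.where(),
}

app.conf.update(
    BROKER_URL=str(os.getenv('REDIS_URL')),
    CELERY_RESULT_BACKEND=str(os.getenv('REDIS_URL')),
    broker_use_ssl=ssl_options,
    redis_backend_use_ssl=ssl_options,
)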
Related
An error occurred when executing the following script using pyusb.
import usb.core
import usb.util
import sys

dev = usb.core.find(idVendor=0x1234, idProduct=0x1234)
if dev is None:
    raise ValueError('Device not found')

print(dev)
dev.set_configuration()
The error content is as follows.
raise USBError(_strerror(ret), ret, _libusb_errno[ret])
usb.core.USBError: [Errno 16] Resource busy
I was able to get and print the correct Device object, but the set_configuration() call on the last line fails.
How can I solve this problem?
[Environment]
python3
pyusb
Kali Linux
libusb-1.0-0:arm64
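For context, "[Errno 16] Resource busy" on Linux usually means a kernel driver (such as usbhid) already has the interface claimed, so set_configuration() cannot take it over. A minimal sketch of the common workaround, assuming the device exposes interface 0 (check print(dev) for the actual interface number):

import usb.core
import usb.util

dev = usb.core.find(idVendor=0x1234, idProduct=0x1234)
if dev is None:
    raise ValueError('Device not found')

# Detach the kernel driver that currently owns interface 0, if any,
# then configure the device and claim the interface for pyusb.
if dev.is_kernel_driver_active(0):
    dev.detach_kernel_driver(0)

dev.set_configuration()
usb.util.claim_interface(dev, 0)

Running this usually requires root privileges (or a matching udev rule) on Kali Linux.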
I have always used the following commands to install RAPIDS on Colab (from https://colab.research.google.com/drive/1rY7Ln6rEE1pOlfSHCYOVaqt8OvDO35J0#forceEdit=true&offline=true&sandboxMode=true):
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/env-check.py
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
os._exit(00)
import condacolab
condacolab.install()
import condacolab
condacolab.check()
# Installing RAPIDS is now 'python rapidsai-csp-utils/colab/install_rapids.py <release> <packages>'
# The <release> options are 'stable' and 'nightly'. Leaving it blank or adding any other words will default to stable.
!python rapidsai-csp-utils/colab/install_rapids.py stable
import os
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ['CONDA_PREFIX'] = '/usr/local'
It always worked, but lately I get:
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
CondaHTTPError: HTTP 403 FORBIDDEN for url <https://conda.anaconda.org/rapidsai/linux-64/ucx-1.11.2 gef2bbcf-cuda11.2_0.tar.bz2>
Elapsed: 00:00.358595
I have retried several times but it doesn't work. How can I solve it?
I'm just starting to learn HLF, and I have an error while following the tutorial from the docs: link
I downloaded fabric-samples using this command (replaced bit.ly link with the destination):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.2.2 1.4.9
I run logspout in one terminal and try to execute peer lifecycle chaincode install basic.tar.gz in another one, and this is the result I get:
Error: failed to retrieve endorser client for install: endorser client
failed to connect to localhost:7051: failed to create new connection:
context deadline exceeded
Log presented by Logspout:
peer0.org1.example.com|2022-03-15 13:03:24.452 UTC [core.comm]
ServerHandshake -> ERRO 04a Server TLS handshake failed in 2.650245ms
with error remote error: tls: bad certificate server=PeerServer
remoteaddress=172.22.0.1:61126
I set the envs in terminal as instructed in the docs, and I checked that CORE_PEER_TLS_ROOTCERT_FILE variable points to an existing file. The content of the file is the same as on the container.
What I tried to do:
downloaded fabric-samples again and redid all the setup, copy-pasting the commands directly from the docs
Do you have any suggestions where I can look for an issue?
I resolved the problem: I was using peer version 2.2.1 from previous experiments; it probably collided with FABRIC_CFG_PATH.
I get the following ERROR when I try to run scripts with Brownie, using the following PowerShell command:
brownie run scripts/simple_collectible/deploy_simple
I have looked all over Stack Overflow and other pages for info on this and I can't seem to find much. I would really like to carry on with my project but I am stuck at this point. Any help would be wonderful.
Cheers!
INFO MESSAGE:
PS C:\Users\charl\OneDrive\Desktop\NFT Development\NFT-mix-main> brownie run scripts/simple_collectible/deploy_simple
INFO: Could not find files for the given pattern(s).
Brownie v1.17.2 - Python development framework for Ethereum
NftMixMainProject is the active project.
Launching 'ganache-cli.cmd --port 8545 --gasLimit 12000000 --accounts 10 --hardfork istanbul --mnemonic brownie'...
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli_main_.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie_cli\run.py", line 46, in main
path, _ = _get_path(args[""])
File "C:\Users\charl.local\pipx\venvs\eth-brownie\lib\site-packages\brownie\project\scripts.py", line 130, in _get_path
raise FileNotFoundError(f"Cannot find {path_str}")
FileNotFoundError: Cannot find scripts/simple_collectible/deploy_simple
Terminating local RPC client...
I have the following packages installed:
ganache-cli
pip
pipx
Brownie (installed through pipx, and initialized)
I have run the brownie command to make sure the install is good.
I have installed Python venv
I have tried uninstalling all packages and reinstalling them
I have done the same with VS Code and VS Build Tools
I have done the same with Python itself (reinstalled from the website)
The code snippet I have for my Script that I am trying to run is here:
#!/usr/bin/python3
import os
from brownie import SimpleCollectible, accounts, config, network
def main():
    dev = accounts.add(config["wallets"]["from_key"])
    print(network.show_active())
    publish_source = True if os.getenv("ETHERSCAN_TOKEN") else False
    SimpleCollectible.deploy({"from": dev}, publish_source=publish_source)
And finally for your reference I have my brownie-config.yaml contents here:
# exclude SafeMath when calculating test coverage
# https://eth-brownie.readthedocs.io/en/v1.10.3/config.html#exclude_paths
reports:
  exclude_contracts:
    - SafeMath
dependencies:
  - smartcontractkit/chainlink-brownie-contracts@1.1.1
  - OpenZeppelin/openzeppelin-contracts@3.4.0
compiler:
  solc:
    remappings:
      - '@chainlink=smartcontractkit/chainlink-brownie-contracts@1.1.1'
      - '@openzeppelin=OpenZeppelin/openzeppelin-contracts@3.4.0'
# automatically fetch contract sources from Etherscan
autofetch_sources: True
dotenv: .env
# set a custom mnemonic for the development network
networks:
  default: development
  kovan:
    vrf_coordinator: '0xdD3782915140c8f3b190B5D67eAc6dc5760C46E9'
    link_token: '0xa36085F69e2889c224210F603D836748e7dC0088'
    keyhash: '0x6c3699283bda56ad74f6b855546325b68d482e983852a7a82979cc4807b641f4'
    fee: 100000000000000000
    oracle: '0x2f90A6D021db21e1B2A077c5a37B3C7E75D15b7e'
    jobId: '29fa9aa13bf1468788b7cc4a500a45b8'
    eth_usd_price_feed: '0x9326BFA02ADD2366b30bacB125260Af641031331'
  rinkeby:
    vrf_coordinator: '0xb3dCcb4Cf7a26f6cf6B120Cf5A73875B7BBc655B'
    link_token: '0x01be23585060835e02b77ef475b0cc51aa1e0709'
    keyhash: '0x2ed0feb3e7fd2022120aa84fab1945545a9f2ffc9076fd6156fa96eaff4c1311'
    fee: 100000000000000000
    oracle: '0x7AFe1118Ea78C1eae84ca8feE5C65Bc76CcF879e'
    jobId: '6d1bfe27e7034b1d87b5270556b17277'
    eth_usd_price_feed: '0x8A753747A1Fa494EC906cE90E9f37563A8AF630e'
  mumbai:
    eth_usd_price_feed: '0x0715A7794a1dc8e42615F059dD6e406A6594651A'
  binance:
    # link_token: ??
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  binance-fork:
    eth_usd_price_feed: '0x9ef1B8c0E4F7dc8bF5719Ea496883DC6401d5b2e'
  mainnet-fork:
    eth_usd_price_feed: '0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419'
  matic-fork:
    eth_usd_price_feed: '0xF9680D99D6C9589e2a93a78A04A279e509205945'
wallets:
  from_key: ${PRIVATE_KEY}
  from_mnemonic: ${MNEMONIC}
  # You'd have to change the accounts.add to accounts.from_mnemonic to use from_mnemonic
Change scripts/simple_collectible/deploy_simple -> scripts/simple_collectible/deploy_simple.py
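That is, include the .py extension when invoking the script:

brownie run scripts/simple_collectible/deploy_simple.py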
I have been running into issues when making function calls from my deployed Python 3.7 Lambda function that, judging by the error message, are related to numpy. The error says the package cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success. I am wondering what to test next or how to debug further.
I have tried the following:
Installed Docker, added the serverless-python-requirements plugin, and configured it in the yml
Installed packages into the app directory so they are bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
Uninstalled setuptools and numpy and reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
"errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
"errorType": "Runtime.ImportModuleError"
}
Here is my setup:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api

plugins:
  - serverless-python-requirements
  - serverless-offline-python

custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true

provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}

functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true

package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging
def auth(event, context):
    ...
Use a Lambda layer to package all your requirements, and make sure you have numpy in your requirements.txt file. Try it once.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with this and give the function a reference to use that layer:
custom:
  pythonRequirements:
    layer: true

functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
I checked with zipinfo .requirements.zip and found that macOS dylibs were packaged instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that this will not be triggered if a .requirements.zip already exists in the working directory, so run git clean -xfd before running sls deploy.
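In the serverless.yml from the question, that corresponds to something like this (only dockerizePip changes; the remaining options are kept as they were):

custom:
  pythonRequirements:
    dockerizePip: non-linux   # build dependencies inside Docker only on non-Linux hosts
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true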
Since you are using the serverless-python-requirements plugin, it will package the libraries for you. In other words, you don't need to do pip install -t src/vendor -r requirements.txt --no-cache-dir manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back, and let serverless-python-requirements do the work for you.
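For reference, after removing those two lines the top of data.py would simply start with the regular imports (a sketch based on the file shown in the question):

import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging

def auth(event, context):
    ...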