Zappa-packaged Lambda error: botocore.exceptions.SSLError: SSL validation failed for <s3 file> [Errno 2] No such file or directory - amazon-s3

I am running an AWS Lambda service packaged with Zappa.io.
The service is running; however, it is not able to reach the S3 file due to an SSL error.
I am getting the error below while trying to access remote_env from an S3 bucket:
[1592935276008] [DEBUG] 2020-06-23T18:01:16.8Z b8374974-f820-484a-bcc3-64a530712769 Exception received when sending HTTP request.
Traceback (most recent call last):
  File "/var/task/urllib3/util/ssl_.py", line 336, in ssl_wrap_socket
    context.load_verify_locations(ca_certs, ca_cert_dir)
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/runtime/botocore/httpsession.py", line 254, in send
    urllib_response = conn.urlopen(
  File "/var/task/urllib3/connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "/var/task/urllib3/util/retry.py", line 376, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/var/task/six.py", line 703, in reraise
    raise value
  File "/var/task/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/var/task/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/var/task/urllib3/connectionpool.py", line 996, in _validate_conn
    conn.connect()
  File "/var/task/urllib3/connection.py", line 352, in connect
    self.sock = ssl_wrap_socket(
  File "/var/task/urllib3/util/ssl_.py", line 338, in ssl_wrap_socket
    raise SSLError(e)
urllib3.exceptions.SSLError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/runtime/botocore/endpoint.py", line 200, in _do_get_response
    http_response = self._send(request)
  File "/var/runtime/botocore/endpoint.py", line 244, in _send
    return self.http_session.send(request)
  File "/var/runtime/botocore/httpsession.py", line 281, in send
    raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for ....... [Errno 2] No such file or directory
My Environment
Zappa version used: 0.51.0
Operating System and Python version: Ubuntu, Python 3.8
Output of pip freeze:
appdirs==1.4.3
argcomplete==1.11.1
boto3==1.14.8
botocore==1.17.8
CacheControl==0.12.6
certifi==2019.11.28
cffi==1.14.0
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
colorama==0.4.3
contextlib2==0.6.0
cryptography==2.9.2
distlib==0.3.0
distro==1.4.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
Flask-Cors==3.0.8
future==0.18.2
h11==0.9.0
hjson==3.0.1
html5lib==1.0.1
httptools==0.1.1
idna==2.8
ipaddr==2.2.0
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
lockfile==0.12.2
mangum==0.9.2
MarkupSafe==1.1.1
msgpack==0.6.2
packaging==20.3
pep517==0.8.2
pip-tools==5.2.1
placebo==0.9.0
progress==1.5
pycparser==2.20
pydantic==1.5.1
PyMySQL==0.9.3
pyOpenSSL==19.1.0
pyparsing==2.4.6
python-dateutil==2.6.1
python-slugify==4.0.0
pytoml==0.1.21
PyYAML==5.3.1
requests==2.22.0
retrying==1.3.3
s3transfer==0.3.3
six==1.14.0
starlette==0.13.4
text-unidecode==1.3
toml==0.10.1
tqdm==4.46.1
troposphere==2.6.1
typing-extensions==3.7.4.2
urllib3==1.25.8
uvloop==0.14.0
webencodings==0.5.1
websockets==8.1
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
My zappa_settings.json:
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "us-west-2",
        "profile_name": "default",
        "project_name": "d3c",
        "runtime": "python3.8",
        "keep_warm": false,
        "cors": true,
        "s3_bucket": "my-lambda-deployables",
        "remote_env": "<my remote s3 file>"
    }
}
I have confirmed that my S3 file is accessible from my local Ubuntu machine; however, it does not work on AWS.

This seems to be related to an open issue on Zappa.
I had the same issue with my Zappa deployment.
I tried all possible options and nothing was working, but after trying different suggestions the following steps worked for me:
I copied python3.8/site-packages/botocore/cacert.pem to my lambda folder.
I set the "REQUESTS_CA_BUNDLE" environment variable to /var/task/cacert.pem
(/var/task is where AWS Lambda extracts your zipped-up code to).
How to set environment variables in Zappa
I updated my Zappa function and everything worked fine.
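For reference, a minimal sketch of what that setting could look like in zappa_settings.json, assuming Zappa's environment_variables key (the stage name matches the question; the path assumes cacert.pem was copied into the package root):

```json
{
    "dev": {
        "environment_variables": {
            "REQUESTS_CA_BUNDLE": "/var/task/cacert.pem"
        }
    }
}
```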

I fixed this by adding the cert path to the environment (Python):
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join('/etc/ssl/certs/', 'ca-certificates.crt')
Edit: sorry, the issue was not really fixed with the above code, but I found a hacky workaround by passing verify=False for all SSL requests (note that this disables certificate verification entirely):
boto3.client('s3', verify=False)
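Rather than disabling verification, a safer sketch is to point boto3 at an explicit CA bundle. The path below is hypothetical and assumes botocore's cacert.pem was copied into the deployment package, which Lambda extracts under /var/task:

```python
import os

# Hypothetical path: assumes cacert.pem was bundled with the deployment
# package (Lambda unzips the package into /var/task).
CA_BUNDLE = os.path.join("/var/task", "cacert.pem")

# Option 1: anything that honours REQUESTS_CA_BUNDLE (requests, botocore)
# will use the bundled certificates.
os.environ["REQUESTS_CA_BUNDLE"] = CA_BUNDLE

# Option 2: pass the bundle to an individual client instead of verify=False,
# which keeps certificate checking enabled:
# import boto3
# s3 = boto3.client("s3", verify=CA_BUNDLE)
```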

Related

telethon.errors.rpcerrorlist.BotMethodInvalidError

The first time I used Telethon, I changed the api_id and api_hash and then ran the program, but the following error was reported:
Traceback (most recent call last):
  File "scraper.py", line 370, in <module>
    client.loop.run_until_complete(main())
  File "/root/.miniconda3/envs/python38/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "scraper.py", line 41, in main
    await init_empty()
  File "scraper.py", line 187, in init_empty
    async for dialog in client.iter_dialogs():
  File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/requestiter.py", line 74, in __anext__
    if await self._load_next_chunk():
  File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/dialogs.py", line 53, in _load_next_chunk
    r = await self.client(self.request)
  File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/users.py", line 30, in __call__
    return await self._call(self._sender, request, ordered=ordered)
  File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/users.py", line 84, in _call
    result = await future
telethon.errors.rpcerrorlist.BotMethodInvalidError: The API access for bot users is restricted. The method you tried to invoke cannot be executed as a bot (caused by GetDialogsRequest)
May I ask what modifications I need to make to run this program (https://github.com/edogab33/telegram-groups-crawler)? Do I need a Telegram group file? Can you provide a simple example? Thank you.
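The error message itself explains the problem: bot accounts may not call GetDialogsRequest, which iter_dialogs() uses, so the script must authenticate as a regular user account rather than with a bot token. A minimal sketch, assuming Telethon's standard login flow (the session name and credentials below are placeholders):

```python
API_ID = 12345              # placeholder: your api_id from my.telegram.org
API_HASH = "0123abcd..."    # placeholder: your api_hash

def build_user_client():
    # Imported lazily so this sketch stays self-contained.
    from telethon import TelegramClient

    # Calling start() WITHOUT bot_token= prompts for a phone number and
    # login code, producing a *user* session. User sessions are allowed
    # to call iter_dialogs(), unlike bot sessions created via
    # client.start(bot_token=...).
    client = TelegramClient("user_session", API_ID, API_HASH)
    return client.start()
```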

Odoo Error to render compiling AST, UndefinedTable

I already had a database A (with Web, Sales, Invoices, ...) on Odoo.com.
Now I want to use that database on a local computer, so I made a backup of database A and restored it on localhost:8069.
The restoration was successful; however, I can't connect to it.
Error to render compiling AST
UndefinedTable: relation "ir_attachment_id_seq" does not exist
LINE 1: ...ore_fname", "type", "website_id") VALUES (nextval('ir_attach...
^
Template: web.frontend_layout
Path: /t/html/head/t[10]
Node: <t t-call-assets="web.assets_common_minimal_js" t-css="false" defer_load="True"/>
And its traceback:
Traceback (most recent call last):
  File "Q:\odoo-13.0\odoo\tools\cache.py", line 85, in lookup
    r = d[key]
  File "Q:\odoo-13.0\odoo\tools\func.py", line 69, in wrapper
    return func(self, *args, **kwargs)
  File "Q:\odoo-13.0\odoo\tools\lru.py", line 44, in __getitem__
    a = self.d[obj].me
KeyError: ('ir.qweb', <function IrQWeb._get_asset_nodes at 0x0586C4F8>, 'web.assets_common_minimal_js', 'vi_VN', False, True, '', False, True, False, (1,))
During handling of the above exception, another exception occurred:
...
Could you please help me? Thanks in advance.
Your database is not loading correctly, so these models are not loaded in the registry; just restore it again!

bq cli tool unable to connect to server

I'm trying to use the bq tool in my Docker container, but I am having difficulty connecting. I have run gcloud init; also, I can query in the web UI and outside of the Docker container with bq.
bq query --format=prettyjson --nouse_legacy_sql 'select count(*) from `bigquery-public-data.samples.shakespeare`'
BigQuery error in query operation: Cannot contact server. Please try again.
Traceback (most recent call last):
  File "/usr/lib/google-cloud-sdk/platform/bq/bigquery_client.py", line 886, in BuildApiClient
    response_metadata, discovery_document = http.request(discovery_url)
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 176, in new_request
    redirections, connection_type)
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 283, in request
    connection_type=connection_type)
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1626, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1368, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1288, in _conn_request
    conn.connect()
  File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1082, in connect
    raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
The output of pip freeze is:
airflow==1.8.0
alembic==0.8.10
asn1crypto==0.24.0
BigQuery-Python==1.14.0
boto==2.49.0
cachetools==2.1.0
certifi==2018.10.15
cffi==1.11.5
chardet==3.0.4
Click==7.0
croniter==0.3.25
cryptography==2.3.1
dill==0.2.8.2
docutils==0.14
enum34==1.1.6
filechunkio==1.8
Flask==0.11.1
Flask-Admin==1.4.1
Flask-Cache==0.13.1
Flask-Login==0.2.11
flask-swagger==0.2.13
Flask-WTF==0.12
funcsigs==1.0.0
future==0.15.2
futures==3.2.0
gitdb2==2.0.5
GitPython==2.1.11
google-api-core==1.5.1
google-api-python-client==1.5.5
google-auth==1.5.1
google-auth-httplib2==0.0.3
google-auth-oauthlib==0.2.0
google-cloud-bigquery==1.6.0
google-cloud-core==0.28.1
google-resumable-media==0.3.1
googleapis-common-protos==1.5.3
gunicorn==19.3.0
httplib2==0.11.3
idna==2.7
ipaddress==1.0.22
itsdangerous==1.1.0
JayDeBeApi==1.1.1
Jinja2==2.8.1
JPype1==0.6.3
lockfile==0.12.2
lxml==3.8.0
Mako==1.0.7
Markdown==2.6.11
MarkupSafe==1.0
MySQL-python==1.2.5
numpy==1.15.3
oauth2client==2.0.2
oauthlib==2.1.0
ordereddict==1.1
pandas==0.18.1
pandas-gbq==0.7.0
protobuf==3.6.1
psutil==4.4.2
psycopg2==2.7.5
pyasn1==0.4.4
pyasn1-modules==0.2.2
pycparser==2.19
Pygments==2.2.0
pyOpenSSL==18.0.0
python-daemon==2.1.2
python-dateutil==2.7.5
python-editor==1.0.3
python-nvd3==0.14.2
python-slugify==1.1.4
pytz==2018.7
PyYAML==3.13
requests==2.20.0
requests-oauthlib==1.0.0
rsa==4.0
setproctitle==1.1.10
six==1.11.0
smmap2==2.0.5
SQLAlchemy==1.2.12
tabulate==0.7.7
thrift==0.9.3
Unidecode==1.0.22
uritemplate==3.0.0
urllib3==1.24
Werkzeug==0.14.1
WTForms==2.2.1
zope.deprecation==4.3.0
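CERTIFICATE_VERIFY_FAILED inside a container often points at a missing or stale CA trust store in the image. A quick diagnostic sketch using only the standard library (the host below is just an example endpoint, not something the question specifies):

```python
import socket
import ssl

def check_tls(host="www.googleapis.com", port=443, timeout=10):
    """Attempt a TLS handshake using the system trust store.

    Returns the negotiated protocol version on success, or a string
    describing the verification failure.
    """
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except ssl.SSLCertVerificationError as exc:
        return "verify failed: %s" % exc
```

If this check also fails inside the container, reinstalling the Debian ca-certificates package in the image is a reasonable first step.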

Error occurred while adding a new consumer

I am getting the following error when I try to add a new consumer (queue) via the current_app imported from celery, by issuing the control command.
The details of the logged error are as follows:
reply = current_app.control.add_consumer(queue_name, destination = WORKER_PROCESSES, reply = True)
  File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 232, in add_consumer
    **kwargs
  File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 307, in broadcast
    limit, callback, channel=channel,
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 300, in _broadcast
    channel=chan)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 336, in _collect
    with consumer:
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 396, in __enter__
    self.consume()
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 445, in consume
    self._basic_consume(T, no_ack=no_ack, nowait=False)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 567, in _basic_consume
    no_ack=no_ack, nowait=nowait)
  File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/entity.py", line 611, in consume
    nowait=nowait)
  File "/opt/msx/python-env/lib/python2.7/site-packages/librabbitmq/__init__.py", line 81, in basic_consume
    no_local, no_ack, exclusive, arguments or {},
ChannelError: basic.consume: server channel error 404, message: NOT_FOUND - no queue '2795c73e-2b6a-34d6-bd1f-13de0d1e5497.reply.celery.pidbox' in vhost '/'
I don't understand the error: I am passing a queue name different from the one mentioned in the logs.
Any help will be appreciated. Thanks.
Note: this issue started occurring after setting the MAX_TASK_PER_CHILD value. Is this related to the error?
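For context, a sketch of the broadcast call from the question (the names mirror the question and are placeholders). Note that the queue in the 404 is not the one being added: reply=True makes kombu consume from a transient, auto-named reply queue ending in .reply.celery.pidbox, which is exactly the queue named in the error.

```python
def add_queue_consumer(queue_name, worker_names):
    # Imported lazily so this sketch stays self-contained.
    from celery import current_app

    # reply=True waits for worker acknowledgements on a transient
    # '<uuid>.reply.celery.pidbox' queue; if that queue disappears
    # before the reply arrives (for example when the worker process is
    # recycled), the broker answers with NOT_FOUND (404), matching the
    # logged error.
    return current_app.control.add_consumer(
        queue_name, destination=worker_names, reply=True)
```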

Mongos + Pymongo 2.5 ==>No suitable hosts found

Our application is using pymongo. I'm trying to connect to mongos. The code fails on the following line:
pymongo.MongoReplicaSetClient('ec2-aa-bbb-124-22.compute-1.amazonaws.com:27017',
                              replicaSet=self.class_settings['mongo_rs'])
Exception:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/.../server_tornado.py --config=conf/development.conf --port=9001
Traceback (most recent call last):
  File "/Users/..../server_tornado.py", line 319, in <module>
    BaseCatalog.db_instance = DBInit(config=settings)
  File "/Users/..../lib/sc/singleton.py", line 20, in __call__
    cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
  File "/Users/..../app/models/db_init.py", line 50, in __init__
    raise Exception("__init__() => " + str(err))
Exception: __init__() => No suitable hosts found
Process finished with exit code 1
Found the solution, in case anyone else faces this issue:
Using MongoClient instead of MongoReplicaSetClient fixes the issue. This is because mongos acts like a single instance of mongodb.
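A minimal sketch of that fix, assuming pymongo is installed (the host is the placeholder from the question):

```python
def connect_to_mongos(host="ec2-aa-bbb-124-22.compute-1.amazonaws.com",
                      port=27017):
    # Imported lazily so this sketch stays self-contained.
    from pymongo import MongoClient

    # mongos presents itself to drivers as a single standalone server,
    # so plain MongoClient (with no replicaSet= argument) is the right
    # client class; MongoReplicaSetClient attempts replica-set member
    # discovery and fails with "No suitable hosts found".
    return MongoClient(host, port)
```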