I am trying to automate the deployment of a RabbitMQ processing chain with Ansible.
The rabbitmq_user setup below works perfectly:
rabbitmq_user:
  user: admin
  password: supersecret
  read_priv: .*
  write_priv: .*
  configure_priv: .*
But the queue setup below crashes:
rabbitmq_queue:
  name: feedfiles
  login_host: 127.0.0.1
  login_user: admin
  login_password: admin
  login_port: 5672
The crash log looks like this:
{"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.
", "module_stdout": "
Traceback (most recent call last):
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 285, in <module>
main()
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 178, in main
r = requests.get(url, auth=(module.params['login_user'], module.params['login_password']))
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 487,in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=15672): Max retries exceeded with url: /api/queues/%2F/feedfiles (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fcfdaf8e7d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
", "msg": "MODULE FAILURE", "rc": 1}
I have deliberately removed the default guest/guest account, which is why I am using the admin credentials.
Any idea where the issue could come from?
EDIT:
Giving the admin user the "administrator" tag doesn't help.
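For reference, the traceback shows the module calling the management HTTP API (http://127.0.0.1:15672/api/queues/...), which is served by the rabbitmq_management plugin, not by the AMQP listener on 5672. A minimal sketch of a task to make sure that plugin is running (an assumption on my part, not a confirmed fix):
# Sketch: enable the management plugin so the HTTP API on 15672 answers;
# uses the stock rabbitmq_plugin module.
rabbitmq_plugin:
  names: rabbitmq_management
  state: enabled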
Running an AWS Lambda service packaged using Zappa.io
The service is running; however, it's not able to reach the S3 file due to an SSL error.
I am getting the below error while trying to access remote_env from an S3 bucket:
[1592935276008] [DEBUG] 2020-06-23T18:01:16.8Z b8374974-f820-484a-bcc3-64a530712769 Exception received when sending HTTP request.
Traceback (most recent call last):
File "/var/task/urllib3/util/ssl_.py", line 336, in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/runtime/botocore/httpsession.py", line 254, in send
urllib_response = conn.urlopen(
File "/var/task/urllib3/connectionpool.py", line 719, in urlopen
retries = retries.increment(
File "/var/task/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/var/task/six.py", line 703, in reraise
raise value
File "/var/task/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/var/task/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/var/task/urllib3/connectionpool.py", line 996, in _validate_conn
conn.connect()
File "/var/task/urllib3/connection.py", line 352, in connect
self.sock = ssl_wrap_socket(
File "/var/task/urllib3/util/ssl_.py", line 338, in ssl_wrap_socket
raise SSLError(e)
urllib3.exceptions.SSLError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/runtime/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/var/runtime/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/var/runtime/botocore/httpsession.py", line 281, in send
raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for ....... [Errno 2] No such file or directory
My Environment
Zappa version used: 0.51.0
Operating System and Python version: Ubuntu, Python 3.8
Output of pip freeze
appdirs==1.4.3
argcomplete==1.11.1
boto3==1.14.8
botocore==1.17.8
CacheControl==0.12.6
certifi==2019.11.28
cffi==1.14.0
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
colorama==0.4.3
contextlib2==0.6.0
cryptography==2.9.2
distlib==0.3.0
distro==1.4.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
Flask-Cors==3.0.8
future==0.18.2
h11==0.9.0
hjson==3.0.1
html5lib==1.0.1
httptools==0.1.1
idna==2.8
ipaddr==2.2.0
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
lockfile==0.12.2
mangum==0.9.2
MarkupSafe==1.1.1
msgpack==0.6.2
packaging==20.3
pep517==0.8.2
pip-tools==5.2.1
placebo==0.9.0
progress==1.5
pycparser==2.20
pydantic==1.5.1
PyMySQL==0.9.3
pyOpenSSL==19.1.0
pyparsing==2.4.6
python-dateutil==2.6.1
python-slugify==4.0.0
pytoml==0.1.21
PyYAML==5.3.1
requests==2.22.0
retrying==1.3.3
s3transfer==0.3.3
six==1.14.0
starlette==0.13.4
text-unidecode==1.3
toml==0.10.1
tqdm==4.46.1
troposphere==2.6.1
typing-extensions==3.7.4.2
urllib3==1.25.8
uvloop==0.14.0
webencodings==0.5.1
websockets==8.1
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
My zappa_settings.json:
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "us-west-2",
        "profile_name": "default",
        "project_name": "d3c",
        "runtime": "python3.8",
        "keep_warm": false,
        "cors": true,
        "s3_bucket": "my-lambda-deployables",
        "remote_env": "<my remote s3 file>"
    }
}
I have confirmed that my S3 file is accessible from my local Ubuntu machine; however, it does not work on AWS.
This seems to be related to an open issue on Zappa.
I had the same issue with my Zappa deployment.
I tried all possible options but nothing worked; after trying different suggestions, the following steps worked for me:
I copied python3.8/site-packages/botocore/cacert.pem to my lambda folder.
I set the REQUESTS_CA_BUNDLE environment variable to /var/task/cacert.pem
(/var/task is where AWS Lambda extracts your zipped-up code to; see How to set environment variables in Zappa).
I updated my Zappa function and everything worked fine.
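For reference, a sketch of what the environment-variable step can look like in zappa_settings.json; this assumes Zappa's aws_environment_variables setting and the "dev" stage name from the question:
{
    "dev": {
        "aws_environment_variables": {
            "REQUESTS_CA_BUNDLE": "/var/task/cacert.pem"
        }
    }
}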
I fixed this by adding the cert path to the environment (Python):
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join('/etc/ssl/certs/','ca-certificates.crt')
Edit: Sorry, the issue was not really fixed by the above code, but I found a hacky workaround by adding verify=False for all SSL requests:
boto3.client('s3', verify=False)
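If disabling verification entirely is too blunt, a middle-ground sketch (my suggestion, not from the original answer) is to point verify at the bundled CA file instead:
import boto3

# Sketch: validate TLS against the cacert.pem copied into the package,
# instead of turning certificate verification off.
s3 = boto3.client('s3', verify='/var/task/cacert.pem')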
I'm trying to connect to AWS ElastiCache (Redis in cluster mode) with TLS enabled; the library versions and Django cache settings are below.
====Dependencies======
redis==3.0.0
redis-py-cluster==2.0.0
django-redis==4.11.0
======settings=======
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                'ssl_cert_reqs': False,
                'ssl': True
            }
        }
    }
}
It doesn't seem to be a problem with the client class (provided by redis-py-cluster), since I'm able to access the cluster directly:
from rediscluster import RedisCluster
startup_nodes = [{"host": "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com", "port": "6379"}]
rc = RedisCluster(startup_nodes=startup_nodes, ssl=True, ssl_cert_reqs=False, decode_responses=True, skip_full_coverage_check=True, password='<password>')
rc.set("foo", "bar")
rc.get('foo')
'bar'
but I'm seeing this error when the Django service tries to access the cache. Is there any configuration detail that I might be missing?
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 32, in _decorator
return method(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 81, in get
client=client)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 194, in get
client = self.get_client(write=False)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 90, in get_client
self._clients[index] = self.connect(index)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 103, in connect
return self.connection_factory.connect(self._server[index])
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 64, in connect
connection = self.get_connection(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 75, in get_connection
pool = self.get_or_create_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 94, in get_or_create_connection_pool
self._pools[key] = self.get_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 107, in get_connection_pool
pool = self.pool_cls.from_url(**cp_params)
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 916, in from_url
return cls(**kwargs)
File "/usr/lib/python3.6/site-packages/rediscluster/connection.py", line 146, in __init__
self.nodes.initialize()
File "/usr/lib/python3.6/site-packages/rediscluster/nodemanager.py", line 172, in initialize
raise RedisClusterException("ERROR sending 'cluster slots' command to redis server: {0}".format(node))
rediscluster.exceptions.RedisClusterException: ERROR sending 'cluster slots' command to redis server: {'host': 'xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com', 'port': '6379'}
I also tried passing "ssl_ca_certs": "/etc/ssl/certs/ca-certificates.crt" in CONNECTION_POOL_KWARGS and setting the LOCATION scheme to rediss://, still no luck.
You need to change ssl_cert_reqs=False to ssl_cert_reqs=None.
Here's the link to the redis-py repo README section that points to this:
https://github.com/andymccurdy/redis-py#ssl-connections
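Applied to the snippets above, a minimal sketch of both call sites with that change:
from rediscluster import RedisCluster

# Direct client: ssl_cert_reqs=None (not False) is what disables
# certificate verification in redis-py.
rc = RedisCluster(
    startup_nodes=[{"host": "xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com", "port": "6379"}],
    ssl=True,
    ssl_cert_reqs=None,
    decode_responses=True,
    skip_full_coverage_check=True,
    password='<password>',
)

# Django settings: the same change inside 'OPTIONS' of the CACHES entry.
CONNECTION_POOL_KWARGS = {
    'skip_full_coverage_check': True,
    'ssl_cert_reqs': None,
    'ssl': True,
}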
I am using the
telepot.Bot(bot_id).sendAudio(chat_id, file_url)
method, which is supposed to send the file, but it returns:
Traceback (most recent call last):
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 1158, in collector
callback(item)
File "bot.py", line 72, in handle
bot.sendAudio(chat_id, url)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 556, in sendAudio
return self._api_request_with_file('sendAudio', _rectify(p), 'audio', audio)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 496, in _api_request_with_file
return self._api_request(method, _rectify(params), **kwargs)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 491, in _api_request
return api.request((self._token, method, params, files), **kwargs)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\api.py", line 155, in request
return _parse(r)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\api.py", line 150, in _parse
raise exception.TelegramError(description, error_code, data)
telepot.exception.TelegramError: ('Bad Request: wrong HTTP URL specified', 400, {'ok': False, 'error_code': 400, 'description': 'Bad Request: wrong HTTP URL specified'})
The same happened with sendPhoto, but there I used Python requests to send the photo instead:
response = requests.post('https://api.telegram.org/bot/sendphoto', files=files)
I want to know either why the sendAudio() and sendPhoto() methods don't work, or the HTTP URL to use for sending audio.
With telepot, bot.sendPhoto, bot.sendVideo and bot.sendAudio all work both with files and with URLs that point to a file.
In your case it seems that the URL you used was incorrect; can you share it?
In my experience it can be because the URL contains &amp; instead of &.
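If that's the cause, a minimal sketch of a fix; this assumes file_url came from an HTML-escaped source, with bot_id, chat_id and file_url as in the question:
import html
import telepot

bot = telepot.Bot(bot_id)      # bot_id, chat_id, file_url as in the question
url = html.unescape(file_url)  # turns '&amp;' back into '&'
bot.sendAudio(chat_id, url)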
I'm trying to use the bq tool in my docker container, but I am having difficulty connecting. I have run gcloud init; also, I can query on the web UI and outside of the docker container with bq.
bq query --format=prettyjson --nouse_legacy_sql 'select count(*) from `bigquery-public-data.samples.shakespeare`'
BigQuery error in query operation: Cannot contact server. Please try again.
Traceback: Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/bq/bigquery_client.py", line 886, in BuildApiClient
response_metadata, discovery_document = http.request(discovery_url)
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 176, in new_request
redirections, connection_type)
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/transport.py", line 283, in request
connection_type=connection_type)
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1288, in _conn_request
conn.connect()
File "/usr/lib/google-cloud-sdk/platform/bq/third_party/httplib2/__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
The pip freeze output is:
airflow==1.8.0
alembic==0.8.10
asn1crypto==0.24.0
BigQuery-Python==1.14.0
boto==2.49.0
cachetools==2.1.0
certifi==2018.10.15
cffi==1.11.5
chardet==3.0.4
Click==7.0
croniter==0.3.25
cryptography==2.3.1
dill==0.2.8.2
docutils==0.14
enum34==1.1.6
filechunkio==1.8
Flask==0.11.1
Flask-Admin==1.4.1
Flask-Cache==0.13.1
Flask-Login==0.2.11
flask-swagger==0.2.13
Flask-WTF==0.12
funcsigs==1.0.0
future==0.15.2
futures==3.2.0
gitdb2==2.0.5
GitPython==2.1.11
google-api-core==1.5.1
google-api-python-client==1.5.5
google-auth==1.5.1
google-auth-httplib2==0.0.3
google-auth-oauthlib==0.2.0
google-cloud-bigquery==1.6.0
google-cloud-core==0.28.1
google-resumable-media==0.3.1
googleapis-common-protos==1.5.3
gunicorn==19.3.0
httplib2==0.11.3
idna==2.7
ipaddress==1.0.22
itsdangerous==1.1.0
JayDeBeApi==1.1.1
Jinja2==2.8.1
JPype1==0.6.3
lockfile==0.12.2
lxml==3.8.0
Mako==1.0.7
Markdown==2.6.11
MarkupSafe==1.0
MySQL-python==1.2.5
numpy==1.15.3
oauth2client==2.0.2
oauthlib==2.1.0
ordereddict==1.1
pandas==0.18.1
pandas-gbq==0.7.0
protobuf==3.6.1
psutil==4.4.2
psycopg2==2.7.5
pyasn1==0.4.4
pyasn1-modules==0.2.2
pycparser==2.19
Pygments==2.2.0
pyOpenSSL==18.0.0
python-daemon==2.1.2
python-dateutil==2.7.5
python-editor==1.0.3
python-nvd3==0.14.2
python-slugify==1.1.4
pytz==2018.7
PyYAML==3.13
requests==2.20.0
requests-oauthlib==1.0.0
rsa==4.0
setproctitle==1.1.10
six==1.11.0
smmap2==2.0.5
SQLAlchemy==1.2.12
tabulate==0.7.7
thrift==0.9.3
Unidecode==1.0.22
uritemplate==3.0.0
urllib3==1.24
Werkzeug==0.14.1
WTForms==2.2.1
zope.deprecation==4.3.0
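Since CERTIFICATE_VERIFY_FAILED usually points at a missing or stale CA bundle inside the container (an assumption on my part, not a confirmed diagnosis), a quick sketch to narrow it down from inside the container:
import requests

# If this also fails with CERTIFICATE_VERIFY_FAILED, the container's CA
# bundle is the problem (on Debian stretch: apt-get install ca-certificates).
print(requests.get('https://www.googleapis.com/discovery/v1/apis').status_code)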
I am getting the following error when I try to add a new consumer (queue) by issuing the control command on current_app imported from celery.
The details of the logged error are as follows:
reply = current_app.control.add_consumer(queue_name, destination = WORKER_PROCESSES, reply = True)
File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 232, in add_consumer
**kwargs
File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 307, in broadcast
limit, callback, channel=channel,
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 300, in _broadcast
channel=chan)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 336, in _collect
with consumer:
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 396, in __enter__
self.consume()
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 445, in consume
self._basic_consume(T, no_ack=no_ack, nowait=False)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 567, in _basic_consume
no_ack=no_ack, nowait=nowait)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/entity.py", line 611, in consume
nowait=nowait)
File "/opt/msx/python-env/lib/python2.7/site-packages/librabbitmq/__init__.py", line 81, in basic_consume
no_local, no_ack, exclusive, arguments or {},
ChannelError: basic.consume: server channel error 404, message: NOT_FOUND - no queue '2795c73e-2b6a-34d6-bd1f-13de0d1e5497.reply.celery.pidbox' in vhost '/'
I don't understand the error: the queue name I am passing is different from the one mentioned in the logs.
Any help will be appreciated. Thanks.
Note: this issue started occurring after setting the MAX_TASK_PER_CHILD value. Is that related to the error?
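For what it's worth, the 404 is not about the queue being added: the '<uuid>.reply.celery.pidbox' queue in the message is the temporary queue kombu declares to collect broadcast replies when reply=True. A sketch of the same call without reply collection (a workaround to test, not a confirmed fix):
# Fire-and-forget: no reply queue is declared, so the NOT_FOUND on
# '<uuid>.reply.celery.pidbox' cannot occur; you lose the confirmation, though.
current_app.control.add_consumer(queue_name, destination=WORKER_PROCESSES, reply=False)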