Celery error "Received 0x00 while expecting 0xce" - rabbitmq

I am trying to use Celery. I installed RabbitMQ with the command from the Celery tutorial:
sudo apt-get install rabbitmq-server
Everything worked well while my code lived in one file and I ran it to test functionality. But when I moved the code into my Django views and then made concurrent requests to those views, I got this kind of exception:
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/connection.py", line 464, in drain_events
return self.blocking_read(timeout)
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/connection.py", line 468, in blocking_read
frame = self.transport.read_frame()
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/transport.py", line 251, in read_frame
'Received {0:#04x} while expecting 0xce'.format(ch))
amqp.exceptions.UnexpectedFrame: Received 0x00 while expecting 0xce
I think the problem may be in the concurrency of the requests, and that I should somehow make the queue concurrency-safe.
I use Python 3.5, Celery 4.0.0, RabbitMQ 3.5.7
Update: the problem is actually in amqplib; see the answer below.

For anyone who has the same problem, I will list the possible solutions I have managed to find. If you know a better solution, please add your own answer or comment on mine.
If you are using Python 2.x, see this issue: https://github.com/celery/celery/issues/922
The problem is actually in amqplib; if you switch to librabbitmq, everything should work, and it's quite easy to do. See:
Framing Errors in Celery 3.0.1
But if you are using Python 3.x, you can't solve the problem that way, because there is no Python 3-compatible librabbitmq available; see this issue: https://github.com/celery/celery/issues/2066. In that case you can change your result backend to Redis, for example:
1) Install the Redis server:
$ sudo aptitude install redis-server
2) Change your app configuration:
app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')
Some useful links about installing Redis: Setting up an asynchronous task queue for django using celery redis and Celery-redis quick guide
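Putting the pieces together, a minimal sketch of such a setup might look like this (the module and task names here are hypothetical, not from the original question):
# tasks.py - minimal sketch: RabbitMQ as the broker, Redis as the result backend
from celery import Celery

app = Celery('tasks',
             broker='pyamqp://localhost//',   # RabbitMQ
             backend='redis://localhost')     # Redis stores task results

@app.task
def add(x, y):
    return x + y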
Also, for Python 3 you can try running the Celery worker under Python 2.7 while your app runs on Python 3; in that case, don't forget to install librabbitmq instead of amqplib. (This approach seems inconvenient.)

Related

Segmentation Error: Local Machine Fails (16gb) but AWS EC2 works (1gb)

I understand this is a little vague, but I'm not sure where else to go or what to debug. My Python script was running fine yesterday. I made minor changes today and now it only runs successfully on my Amazon Lightsail (EC2) machine. Everything I read about segmentation errors says there is not enough memory, yet my local machine has 16 GB of RAM while the cloud machine only has 1 GB. Plus, I am not working with big files: the files being imported/manipulated are typically under 2 MB, and there are only 7-10 of them.
I feel it may be something related to my terminal/zsh rather than my code.
Below is the error I cannot seem to get around.
I've done enough research to find the Python faulthandler module (import faulthandler; faulthandler.enable()), which gives the debugging output below:
Fatal Python error: Segmentation fault
Current thread 0x000000010c58edc0 (most recent call first):
File "/Users/garrett/opt/anaconda3/lib/python3.7/site-packages/pandas/core/groupby/generic.py", line 1795 in <genexpr>
File "/Users/garrett/opt/anaconda3/lib/python3.7/site-packages/pandas/core/groupby/generic.py", line 1797 in <listcomp>
File "/Users/garrett/opt/anaconda3/lib/python3.7/site-packages/pandas/core/groupby/generic.py", line 1797 in count
File "GmailDownloader.py", line 215 in <module>
zsh: segmentation fault python *.py
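For reference, enabling faulthandler just means putting these two lines at the very top of the script, before anything that might crash (a minimal sketch):
import faulthandler
faulthandler.enable()  # on a segfault, print the Python-level traceback

# ... rest of the script, e.g. the pandas groupby work that crashes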
The code seems to regularly break on line 215 while trying to compute a groupby in pandas, but it is very similar to other groupbys in the code that succeeded before it.
I am on macOS Catalina using the pre-baked zsh for my terminal handling, but even when I switch to good ol' bash using chsh -s /bin/bash in my terminal and then run the code, I still get a zsh segmentation error.
I have recently tried out PyCharm today and it asked for permissions to store something in a bin folder to which I just said yes. I'm not sure if that is correlated at all or not.
The full code repository: https://github.com/GarrettMarkScott/AutomotiveCRMPuller
Ongoing list of other things I have tried:
Trashing the Terminal preferences (~/Library/Preferences/com.apple.Terminal.plist)
I almost threw in the towel, but tried reinstalling pandas since it was mentioned in the error, and what do you know, it worked after running pip install --upgrade pandas
It would have been impossible without faulthandler! Hopefully this helps someone out there!

Celery worker receives unregistered task from celerybeat run by systemd

On my staging server I had my Celery worker (4.3.0) and celery beat up and running as daemons via systemd, with RabbitMQ as the broker. Everything was fine for a few weeks, up until one moment 4 days ago when there was some sort of connection error between Celery and amqp through kombu: [Errno 104] Connection reset by peer after started
I wasn't paying much attention to the server logs, since the project is in the WIP stage; however, when I tried to deploy the newest version of the code, I realized that something was wrong with the worker.
I googled the issue and this is what popped up:
https://github.com/celery/celery/issues/4867
The easy solution was to downgrade Celery to 4.1.1 and wait for a fix in a future stable release.
I removed celery, amqp, billiard and kombu from my venv, and installed Celery 4.1.1, which installed the above packages in the appropriate versions.
At the moment the celery and celerybeat services are active, and celerybeat sends the tasks to the celery worker; however, the celery log shows me an error message (please see the error output after the downgrade below). It is weird, because I haven't changed anything in the task declarations or my settings (which may be the issue here).
The weirdest thing is that if I shut down the systemd services and run them with the command:
celery -A celery_cfg:app worker -B --loglevel=DEBUG
All the current tasks are processed, just like the past ones were. So the celery and celerybeat configs, as they are, seem to be working.
A few specific approaches I tried:
1) Made sure to import all modules without relative imports.
2) In the past I encountered an issue with missing packages in the venv; they are up to date
3) Rebooted celery/celerybeat/gunicorn/systemd/rabbitmq and the server itself
4) Double-checked the paths in the systemd services (however, maybe I have been debugging this too long and I just can't see the typo or something)
5) Tried the development version 4.4.0rc2 (the celery worker won't start)
6) INSTALLED_APPS contains all required apps
Error message after downgrade of celery version
[2019-06-16 19:35:00,092: ERROR/MainProcess] Received unregistered task of type 'apps.mailing.tasks.execute_sending_system_mail'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
http://docs.celeryq.org/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b)
Traceback (most recent call last):
File "/home/user/apps/venv/loans/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 557, in on_task_received
strategy = strategies[type_]
KeyError: 'apps.mailing.tasks.execute_sending_system_mail'
Celery Service Systemd Code
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=<user>
Group=<user>
EnvironmentFile=/etc/default/celery
WorkingDirectory=/home/<user>/apps/loans
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Celery Beat Service Systemd Code
[Unit]
Description=Celery Beat Service
After=network.target
[Service]
Type=simple
User=user
Group=user
EnvironmentFile=/etc/default/celery
WorkingDirectory=/home/user/apps/loans
ExecStart=/bin/sh -c '${CELERY_BIN} beat \
-A ${CELERY_APP} --pidfile=${CELERYBEAT_PID_FILE} \
--logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'
[Install]
WantedBy=multi-user.target
Conf file for variables
CELERYD_NODES="w1"
CELERY_BIN="/home/user/apps/venv/loans/bin/celery"
CELERY_APP="celery_cfg:app"
CELERYD_MULTI="multi"
CELERYD_OPTS=""
CELERYD_PID_FILE="/home/user/apps/pids/celery/%n.pid"
CELERYD_LOG_FILE="/home/user/apps/logs/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
CELERYBEAT_PID_FILE="/home/user/apps/pids/celery/beat.pid"
CELERYBEAT_LOG_FILE="/home/user/apps/logs/celery/beat.log"
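As a side note, after editing unit files or the environment file, systemd needs a reload before the changes take effect; assuming the units are named celery and celerybeat as above, something like:
sudo systemctl daemon-reload
sudo systemctl restart celery celerybeat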
celery_cfg file
from celery import Celery
from celery.schedules import crontab
from django.conf import settings

app = Celery('loans_apps')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app.set_default()
# <====CELERY BEAT PERIODIC TASKS ====>
app.conf.beat_schedule = {
'execute_sending_system_mail': {
'task': 'apps.mailing.tasks.execute_sending_system_mail',
'schedule': crontab(minute='*/5'),
'args': (),
},
}
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
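For completeness, the task itself would be declared in apps/mailing/tasks.py roughly like this (a sketch; the task body is hypothetical):
# apps/mailing/tasks.py
from celery import shared_task

@shared_task
def execute_sending_system_mail():
    # hypothetical body: send whatever system mail is pending
    pass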
minor cut of settings containing celery cfg variables
BROKER_URL = 'amqp://localhost//'
CELERY_ENABLE_UTC = True
I know I could try setting up celery and celerybeat without systemd, but I treat that as a last-resort solution. I'd like to keep the conf as it was, even though I have no clue what's wrong up there.
EDIT
By mistake, and guided by my friend, I just found out that both the celery and celerybeat services seem to work fine as the root user, which is obviously not the solution, but it narrows down the number of possible flaws
It would be rude to leave the question unanswered, so even though the answer comes from me, here it is:
If someone ever encounters such an issue: after following the steps I pointed out above, check the permissions of the directories that celery and celerybeat use. You might have created them with root permissions, which may end up causing the issue mentioned. Good luck to everyone in the future!
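For example, with the pid and log paths from the conf file above, handing the directories over to the service user would look something like this (a sketch; substitute your actual user and group):
sudo chown -R <user>:<user> /home/user/apps/pids/celery
sudo chown -R <user>:<user> /home/user/apps/logs/celery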

Setting up S3 logging in Airflow

This is driving me nuts.
I'm setting up airflow in a cloud environment. I have one server running the scheduler and the webserver and one server as a celery worker, and I'm using airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test','test',bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading the log following the expected conventions, and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss as to what to do; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I'm luckier.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
Webserver won't even start now with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing it and just loading the S3 handler without checking first and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
upgraded to 1.9
ran the steps described in this comment
added
[core]
remote_logging = True
to airflow.cfg
ran
pip install --upgrade airflow[log]
Everything's working fine now.
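For reference, the relevant airflow.cfg pieces end up looking roughly like this (values as above; the connection id is the one created in the UI):
[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn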

Stopping systemd service with salt-stack

I am using salt-stack to manage my production machine.
The minions run Raspbian, and I have configured a systemd service. The service's config file is located at /lib/systemd/system/my_service.service
When I run the following command:
sudo salt my_minion service.stop my_service
The following error is returned:
ERROR: Unable to run command ['/etc/init.d/my_service', 'stop'] with the context {'with_communicate': True, 'shell': False, 'env': {'LANG': 'en_GB.UTF-8', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'LC_ALL': 'C'}, 'stdout': -1, 'close_fds': True, 'stdin': None, 'stderr': -2, 'cwd': '/root'}, reason: [Errno 2] No such file or directory
I understand that salt tries to use sysvinit instead of systemd.
Is there any way to tell salt to use systemd?
EDIT:
Tried adding
providers:
  service: systemd
to /etc/salt/minion as suggested by Eric. Still getting the same error
EDIT 2
The issue was fixed by using Eric's suggestion plus upgrading salt-minion from 2015.8.3 to 2015.8.8
This is almost certainly because newer Raspbian is based on Debian 8, and Salt's systemd execution module does not properly detect newer Raspbian as needing systemd. Can the OP please reply to this message with the output from sudo salt my_minion grains.items? Please redact any grains which you feel have personally-identifiable information; I'm mainly interested in the grains that deal with OS name and version.
EDIT: One more thing. Please confirm that /run/systemd/system exists on the Raspbian box. What I think is happening here is that two modules are both claiming to be the ones to provide the service module.
https://github.com/saltstack/salt/pull/32421 should fix this, but you can work around this immediately (without waiting for a new Salt release) by adding the following to /etc/salt/minion on your Raspbian minions:
providers:
  service: systemd
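After adding this, the minion presumably needs a restart for the provider override to take effect; something like:
sudo systemctl restart salt-minion
sudo salt my_minion service.stop my_service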

How can I install matplotlib for my AWS Elastic Beanstalk application?

I'm having a hell of a time deploying matplotlib on AWS Elastic Beanstalk. I gather that my issue comes from some dependencies and the way that EB deploys packages installed with PIP, and have attempted to follow the instructions here on SO for resolving the issue.
I first tried incrementally deploying, as suggested in the linked answer, by adding pieces of the matplotlib package stack to my requirements.txt file in stages. But this takes forever (for each stage) and is prone to failure and timing out (which seems to leave build directories behind that stall subsequent package installations).
So the simple solution mentioned off-handedly at the end of the answer appeals to me: just eb ssh, activate the virtualenv with
source /opt/python/run/venv/bin/activate
and pip install packages manually. But I can't get this to work either. First, I'm often confronted with left-behind build directories (as mentioned above):
pip can't proceed with requirement 'xxxx' due to a pre-existing build directory.
location: /opt/python/run/venv/build/xxxx
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.
But even after removing these, I consistently get
Exception:
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
do_download,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
self.session,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/download.py", line 582, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 625, in unpack_file
untar_file(filename, location)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 533, in untar_file
os.makedirs(location)
File "/opt/python/run/venv/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/opt/python/run/venv/build/xxxx'
in response to pip install xxxx (and sudo pip fails with sudo: pip: command not found).
What can I do to get this working on AWS-EB? In particular, what do I need to do to get the simple SSH+PIP approach working; or is there some other better — simpler! — approach I should try.
FWIW, I have a .ebextensions/software.config with
packages:
  yum:
    gcc-c++: []
    gcc-gfortran: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
and a requirements.txt that ends with
pytz==2014.10
pyparsing==2.0.3
python-dateutil==2.4.0
nose==1.3.4
six>=1.8.0
mock==1.0.1
numpy==1.9.1
matplotlib==1.4.2
After about 4 hours, I've gotten as far as numpy (as reported by pip list in the EB virtualenv).
And (in case it matters) the user who is SSHing is part of a group with the policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "cloudwatch:*",
        "s3:*",
        "sns:*",
        "cloudformation:*",
        "rds:*",
        "sqs:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
I have used many approaches to build and deploy numpy/scipy/matplotlib, on Windows as well as Linux systems. I have used system-provided package managers (aptitude, rpm), 3rd-party package managers (pypm), Python package managers (easy_install, pip), source releases, used different build environments/tools (GCC, but also Intel MKL, OpenMP). While doing so, I have run into many many quite annoying situations, but have also learned a lot about the pros and cons of each approach.
I have no experience with Elastic Beanstalk (EB), but I have experience with EC2. I see that you can SSH into an instance and poke around. So, what I suggest further below is based on
above-stated experiences and on
the more or less obvious boundary conditions regarding Beanstalk and on
your application scenario, described in another question here on SO and on
the fact that you just want to get things running, quickly
My suggestion: start off with not building these things yourself. Do not use pip. If possible, try to use the package manager of the Linux distribution in place and let it handle the installation of everything required for you, with a single command (e.g. sudo apt-get install python-matplotlib).
Disadvantages:
possibly old package versions, depending on the Linux distro in use
non-optimized builds (e.g. not built against Intel MKL, not leveraging OpenMP features, or not using special instruction sets)
Advantages:
it quickly downloads, because packages are most likely cached near your machine
it quickly installs (these packages are pre-built, no compilation involved)
it just works
So, I hope you can just use aptitude or rpm or whatever on these machines and inherit the great work that the distribution package maintainers do for you, behind the scenes.
Once you are confident in your application and identified some bottleneck or issue, you might have reason to use a newer version of numpy/matplotlib/... or you might have reason to have a faster version of these, by creating an optimized build.
Edit: EB-related details of outlined approach
In the meantime we have learned that EB by default runs Amazon Linux which is based on Red Hat Enterprise Linux. Likewise, it uses yum as package manager and packages are in RPM format.
Amazon provides documentation about available packages. In Amazon Linux 2014.09, these packages are available: http://aws.amazon.com/de/amazon-linux-ami/2014.09-packages/
In this list we find
numpy-1.7.2
python-matplotlib-0.99.1.2
This version of matplotlib is very old, according to the changelog it is from September 2009: "2009-09-21 Tagged for release 0.99.1".
I did not anticipate it to be so old, but still, it might be sufficient for your needs. So we proceed with our plan (but I'd understand if that's a blocker).
Now, we have learned that system Python and EB Python are isolated from each other. That does not mean that EB Python cannot access system Python site packages; we just need to tell it so. A simple and clean method is to set up a proper directory structure with the packages that should be accessible to EB Python, and to communicate this directory to EB Python via sys.path.
Clearly, we need to customize the bootstrapping phase of EB containers. The available tools are documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Obviously, we want to make use of the packages approach, and tell EB to install the numpy and python-matplotlib packages via yum. So the corresponding config file section should contain:
packages:
  yum:
    numpy: []
    python-matplotlib: []
Explicitly mentioning numpy might not be necessary, it likely is a dependency of python-matplotlib.
Also, we need to make use of the commands section:
You can use the commands key to execute commands on the EC2 instance.
The commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
The following three commands create the above-mentioned directory and set up symbolic links to the numpy/mpl installation paths (these paths hopefully are available at the moment these commands are executed):
commands:
  00-create-dir:
    command: "mkdir -p /opt/py26-selected-site-packages"
  01-link-numpy:
    command: "ln -s /usr/lib64/python2.6/site-packages/numpy /opt/py26-selected-site-packages/numpy"
  02-link-mpl:
    command: "ln -s /usr/lib64/python2.6/site-packages/matplotlib /opt/py26-selected-site-packages/matplotlib"
Two uncertainties: first, the AWS docs do not clarify that packages are processed before commands are executed. You have to try; if it does not work, use container_commands. Secondly, it is just an educated guess that /usr/lib64/python2.6/site-packages/matplotlib is available after installing python-matplotlib. It should be installed to this place, but it may end up somewhere else. This needs to be tested. Numpy should end up where specified, as inferred from this article.
[UPDATE FROM SEB]
AWS documentation says "The cfn-init helper script processes these configuration sections in the following order: packages, groups, users, sources, files, commands, and then services."
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
So, your approach is safe
[/UPDATE]
The crucial step, as pointed out in the comments to this answer, is to tell your Python app where to look for packages. Direct modification of sys.path before attempting to import is a reliable method to take control of this. The following code adds our special directory to the selection of directories in which Python looks out for packages, and then attempts to import matplotlib:
import sys
sys.path.append("/opt/py26-selected-site-packages")
from matplotlib import pyplot
The order in sys.path defines priorities, so in case there is any other matplotlib or numpy package available in one of the other directories, it might be a better idea to
sys.path.insert(0, "/opt/py26-selected-site-packages")
However, this should not be necessary if our whole approach was well thought-through.
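As a quick smoke test of the whole approach, something like the following should run on the instance; note that a headless EC2 instance has no display, so matplotlib has to be pointed at a non-interactive backend such as Agg (a sketch, assuming the symlinked directory from above):
import sys
sys.path.insert(0, "/opt/py26-selected-site-packages")

import matplotlib
matplotlib.use("Agg")  # non-interactive backend: no display on the instance
from matplotlib import pyplot

pyplot.plot([0, 1], [0, 1])
pyplot.savefig("/tmp/smoke_test.png")  # should complete without errors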
To add to Jan-Philip's answer:
AWS Elastic Beanstalk uses the Amazon Linux distribution (except for .NET environments). Amazon Linux uses the yum package manager, and matplotlib is available in Amazon's software repository.
[ec2-user@ip-1-1-1-174 ~]$ yum list | grep matplot
python-matplotlib.x86_64 0.99.1.2-1.6.amzn1 amzn-main
If this version is the one you need for your application, I would try to simply modify your .ebextensions/software.config file and to add the package to the yum section of it:
packages:
  yum:
    python-matplotlib: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
A last note about AWS Elastic Beanstalk and SSH.
While Amazon gives you the possibility to SSH into your Elastic Beanstalk instances, you should use this possibility only for debugging purposes, to understand why your app failed or is not installing as expected.
Other than that, your deployment must be 100% automatic. When Elastic Beanstalk (Auto Scaling, to be precise) scales out your infrastructure (adds more instances) or scales it in (terminates instances) depending on your application workload, all your manual configuration will be lost.
Best practice is not to install SSH keys on your production environment; it further reduces the attack surface.
I might be a bit late to this question, but since AWS and a lot of cloud service providers are moving to Docker, and taking into consideration that you haven't specified the platform, I have a fast solution to your question:
Use the generic docker platform.
I created some images with Python, Numpy, Scipy and Matplotlib preinstalled, so you can directly pull and start using them with one line of code.
Python 2.7 (this one also has the versions that you were specifying for numpy and matplotlib):
sudo docker pull chuseuiti/pynuscimat2.7
Python 3.4
sudo docker pull chuseuiti/pynusci
However, you can create your own image or modify existing images.
In case you want to automate your instances, you can pass a Dockerfile to AWS with the definition of your image, as sketched below.
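A minimal Dockerfile based on one of these images might look like this (a sketch; the application file name is hypothetical):
FROM chuseuiti/pynuscimat2.7
# copy the application code into the image and set it as the entry point
COPY . /app
WORKDIR /app
CMD ["python", "my_app.py"]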
Tip, in case you don't know about Docker:
you need to log in before being able to pull:
sudo docker login
After pulling the image, you can create and work in a container created from the image with the following command:
sudo docker run -i -t chuseuiti/pynuscimat2.7 bash
PS: at least on the free tier, AWS always complains about running out of time with scipy and matplotlib; it takes too much time to install them, which is why I use this option.