I am connecting a remote worker to my Celery (Django) server for the first time. On the server, I created a new user with a password and set its permissions:
# rabbitmqctl add_user adcelery pwd
# rabbitmqctl set_permissions adcelery "^adcelery-.*" ".*" ".*"
# rabbitmqctl list_users
Listing users ...
guest [administrator]
adcelery []
...done.
# /etc/init.d/rabbitmq-server restart
# /etc/init.d/celeryd restart
My remote worker's URL:
BROKER_URL = "amqp://adcelery:pwd@mydomain.com/"
I am getting the following error on startup of my remote worker. When I set "guest:guest" as my login in the BROKER_URL above, it connects perfectly fine. I'm sure I'm missing a step or two; any suggestions?
[2014-01-12 11:31:26,188: INFO/MainProcess] Connected to amqp://adcelery@awaaz.de:5672//
[2014-01-12 11:31:26,391: ERROR/MainProcess] Unrecoverable error: AccessRefused(403, u"ACCESS_REFUSED - access to exchange 'celeryev' in vhost '/' refused for user 'adcelery'", (40, 10), 'Exchange.declare')
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 206, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python2.7/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/dist-packages/celery/bootsteps.py", line 373, in start
    return self.obj.start()
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 270, in start
    blueprint.start(self)
  File "/usr/local/lib/python2.7/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 498, in start
    enabled=self.send_events, groups=self.groups,
  File "/usr/local/lib/python2.7/dist-packages/celery/events/__init__.py", line 150, in __init__
    self.enable()
  File "/usr/local/lib/python2.7/dist-packages/celery/events/__init__.py", line 169, in enable
    serializer=self.serializer)
  File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in __init__
    self.revive(self._channel)
  File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in revive
    self.declare()
  File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 102, in declare
    self.exchange.declare()
  File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
    nowait=nowait, passive=passive,
  File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 612, in exchange_declare
    (40, 11), # Channel.exchange_declare_ok
  File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 75, in wait
    return self.dispatch_method(method_sig, args, content)
  File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 93, in dispatch_method
    return amqp_method(self, args)
  File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 232, in _close
    reply_code, reply_text, (class_id, method_id), ChannelError,
AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'celeryev' in vhost '/' refused for user 'adcelery'
Just found the answer in the docs: the user needs permissions on the vhost. The original set_permissions call only granted configure rights on resources matching ^adcelery-.*, so the worker was not allowed to declare the celeryev exchange in vhost /. This fixes it:
rabbitmqctl set_permissions -p / adcelery ".*" ".*" ".*"
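To verify, list the permissions on the vhost (standard rabbitmqctl usage):
rabbitmqctl list_permissions -p /
The adcelery entry should now show ".*" for configure, write and read.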
Related
I just installed Odoo 15, but when I try to start it I get an "Internal Error" message, and the following message in the log file:
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/http.py", line 1464, in dispatch
    explicit_session = self.setup_session(httprequest)
  File "/opt/odoo/odoo/http.py", line 1345, in setup_session
    session_gc(self.session_store)
  File "/opt/odoo/odoo/tools/func.py", line 26, in __get__
    value = self.fget(obj)
  File "/opt/odoo/odoo/http.py", line 1291, in session_store
    path = odoo.tools.config.session_dir
  File "/opt/odoo/odoo/tools/config.py", line 714, in session_dir
    assert os.access(d, os.W_OK), \
AssertionError: /var/lib/odoo/sessions: directory is not writable
How can I fix this issue? Thanks.
Make sure you start Odoo with a user that is able to write to this directory.
My guess is that /var/lib/odoo/sessions is only writable by root.
Did you create a dedicated user for Odoo?
Try this (with odoo:odoo being the user and group of your Odoo system user):
sudo chown -R odoo:odoo /var/lib/odoo
Otherwise, also make sure it is writable:
sudo chmod +w /var/lib/odoo/sessions
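To pin down the mismatch, you can compare the user the Odoo process runs as with the directory's owner and mode (a quick check; the process name may differ on your install):
ps aux | grep odoo
ls -ld /var/lib/odoo/sessions
sudo -u odoo test -w /var/lib/odoo/sessions && echo writable || echo not writable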
I'm trying to transfer files from S3 to GCS.
I don't own the S3 bucket and was provided with keys.
I edited my .boto config and entered the access key ID and secret key, but my gsutil cp command returned an access denied error. I can browse/download these files with the various free S3 browser utilities out there.
Might the owner need to adjust something on their end?
gsutil cp -r s3://origin gs://destination
Copying s3://origin/17/_SUCCESS [Content-Type=binary/octet-stream]...
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/daisy_chain_wrapper.py", line 196, in PerformDownload
    decryption_tuple=self.decryption_tuple)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/cloud_api_delegator.py", line 276, in GetObjectMedia
    decryption_tuple=decryption_tuple)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 513, in GetObjectMedia
    generation=generation)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 1476, in _TranslateExceptionAndRaise
    raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
<RequestId>DD4EA91291B40907</RequestId>
In your SSH session, check which account is active on the instance:
$ gcloud auth list
Then try running gsutil with the top-level -D (or -DD) debug flag to see why your command is failing:
1) To copy from S3 to local disk
gsutil -D cp -r s3://secret-bucket/some_key/ /local/directory/my-s3-files/
2) To copy from local disk to GCS bucket
gsutil -D cp -r /local/directory/my-s3-files/ gs://secret-elsewhere/destination/
You can also check the Storage Transfer Service article on how to transfer data into Cloud Storage.
You need to work out whether the permission error is on the GCP side or the S3 side. Have a look at these articles for more information:
https://github.com/GoogleCloudPlatform/gsutil/issues/487
https://cloud.google.com/storage/docs/gsutil/commands/cp
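For reference, the S3 keys in ~/.boto go in the [Credentials] section; a sketch with placeholder values:
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
If the keys are entered correctly and access is still denied, the bucket owner may need to grant your IAM user s3:ListBucket on the bucket and s3:GetObject on its objects, since gsutil cp -r both lists and reads.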
I have scrapy and scrapy-splash set up on an AWS Ubuntu server. It works fine for a while, but after a few hours I'll start getting error messages like this:
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.5/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
I'll find that the splash process in docker has either terminated or is unresponsive.
I've been running the splash process with:
sudo docker run -p 8050:8050 scrapinghub/splash
as per the scrapy-splash instructions.
I tried starting the process in a tmux shell to make sure the ssh connection is not interfering with the splash process, but no luck.
Thoughts?
You should run the container with the --restart and -d options; see the Splash documentation on how to run it in production.
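A minimal sketch, reusing the image from the question (the restart policy is the important part; tune ports and limits to your instance):
sudo docker run -d --restart=always -p 8050:8050 scrapinghub/splash
-d detaches the container from your SSH/tmux session, and --restart=always makes Docker bring Splash back up automatically if it crashes.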
I downloaded the zip file of GlassFish 4.1.1 and, after extracting it, used the Terminal to start the server with the asadmin start-domain command. It gives me this error:
Traceback (most recent call last):
  File "/usr/local/bin/asadmin", line 260, in <module>
    autoscale = boto.connect_autoscale()
  File "/Library/Python/2.7/site-packages/boto/__init__.py", line 208, in connect_autoscale
    **kwargs)
  File "/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py", line 115, in __init__
    profile_name=profile_name)
  File "/Library/Python/2.7/site-packages/boto/connection.py", line 1100, in __init__
    provider=provider)
  File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/Library/Python/2.7/site-packages/boto/auth.py", line 997, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
I'm using macOS Sierra 10.12.2; does anyone know how to fix this error?
The problem here is that you have the boto Python AWS command line utilities installed. One of those utilities is called asadmin and your shell thinks you mean to call the asadmin (AWS autoscaling admin) command, rather than the GlassFish asadmin file.
After you extract GlassFish, you need to reference the asadmin file that comes with GlassFish, so start the domain as follows:
glassfish4/bin/asadmin start-domain
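To confirm which asadmin your shell resolves, the standard lookups work:
which asadmin
type -a asadmin
If /usr/local/bin/asadmin (the boto utility) comes first, either keep calling the GlassFish script by its path as above, or put glassfish4/bin earlier in your PATH.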
I'm trying to connect to a remote rabbitmq host using the cli rabbitmqadmin.
The command I'm trying to execute is:
rabbitmqadmin --host=$RABBITMQ_HOST --port=443 --ssl --vhost=$RABBITMQ_VHOST --username=$RABBITMQ_USERNAME --password=$RABBITMQ_PASSWORD list queues
Before you ask: the environment variables RABBITMQ_HOST, RABBITMQ_VHOST and so on are set... I double and triple checked this already.
The error I get back is:
Traceback (most recent call last):
  File "/usr/local/sbin/rabbitmqadmin", line 1007, in <module>
    main()
  File "/usr/local/sbin/rabbitmqadmin", line 413, in main
    method()
  File "/usr/local/sbin/rabbitmqadmin", line 588, in invoke_list
    format_list(self.get(uri), cols, obj_info, self.options)
  File "/usr/local/sbin/rabbitmqadmin", line 436, in get
    return self.http("GET", "%s/api%s" % (self.options.path_prefix, path), "")
  File "/usr/local/sbin/rabbitmqadmin", line 475, in http
    self.options.port)
  File "/usr/local/sbin/rabbitmqadmin", line 451, in __initialize_https_connection
    context = self.__initialize_tls_context())
  File "/usr/local/sbin/rabbitmqadmin", line 467, in __initialize_tls_context
    self.options.ssl_key_file)
TypeError: coercing to Unicode: need string or buffer, NoneType found
From the last line I assume it's a Python-related problem; my current Python version is 2.7.12. If I connect to the local instance of RabbitMQ with
rabbitmqadmin list queues
everything works fine. Any help is greatly appreciated, thanks :)
Shouldn't those env vars have a $ in front of them, and the params be passed without =?
rabbitmqadmin --host $RABBITMQ_HOST --port 443 --ssl --vhost $RABBITMQ_VHOST --username $RABBITMQ_USERNAME --password $RABBITMQ_PASSWORD list queues
Maybe the = doesn't matter, but I'm pretty sure you need $ in front of the env vars.
Validate that you are using the same rabbitmqadmin version as your remote hosted broker. A mismatched rabbitmqadmin version will produce exactly that error (for example, rabbitmqadmin 3.6.4 querying a 3.5.7 server).
Browse to http://server-name:15672/cli/ and download the correct tool from there.
https://github.com/rabbitmq/rabbitmq-management/issues/299
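A sketch of fetching the matching tool straight from the broker (server-name as above; the /cli/ download is served by the management plugin itself, so it always matches the server version):
curl -o rabbitmqadmin http://server-name:15672/cli/rabbitmqadmin
chmod +x rabbitmqadmin
./rabbitmqadmin --version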