I just installed Odoo 15, but when I tried to start it I got an internal server error, and the log file shows the following message:
return self.app(environ, start_response)
File "/opt/odoo/odoo/http.py", line 1464, in dispatch
explicit_session = self.setup_session(httprequest)
File "/opt/odoo/odoo/http.py", line 1345, in setup_session
session_gc(self.session_store)
File "/opt/odoo/odoo/tools/func.py", line 26, in __get__
value = self.fget(obj)
File "/opt/odoo/odoo/http.py", line 1291, in session_store
path = odoo.tools.config.session_dir
File "/opt/odoo/odoo/tools/config.py", line 714, in session_dir
assert os.access(d, os.W_OK), \
AssertionError: /var/lib/odoo/sessions: directory is not writable
How can I fix this issue?
Thanks!
Make sure you start Odoo as a user that is able to write to this directory.
My guess is that /var/lib/odoo/sessions is only writable by root.
Did you create a dedicated user for Odoo?
Try this (where odoo:odoo is the user and group of your Odoo system user):
sudo chown -R odoo:odoo /var/lib/odoo
Otherwise, also make sure the directory is writable:
sudo chmod u+w /var/lib/odoo/sessions
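If you want to see exactly what Odoo's assertion is checking, here is a minimal sketch using a scratch directory as a stand-in for /var/lib/odoo/sessions (the user odoo:odoo and the path are assumptions about a default install; on the real server you would run the chown above instead):

```shell
# Scratch directory standing in for /var/lib/odoo/sessions.
sessions="$(mktemp -d)/sessions"
mkdir -p "$sessions"
# The chown -R odoo:odoo fix makes the odoo user the owner; the owner
# then needs the write bit so the session store can create files:
chmod u+w "$sessions"
touch "$sessions/test.session"
ls "$sessions"
```

The assertion in config.py fails precisely when this kind of file creation would be denied for the user running the Odoo process.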
Related
I am trying to use the Kaggle command line tool, but I am running into problems using it inside my own VM. I downloaded the API token from the site and placed it in ~/.kaggle/kaggle.json on my Windows host. My VM has Ubuntu installed, and in the Vagrantfile I have the following:
config.vm.synced_folder ENV['HOME'] + "/.kaggle", "/home/ubuntu/.kaggle", mount_options: ['dmode=700,fmode=700']
config.vm.provision "shell", inline: <<-SHELL
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle/kaggle.json'" >> /etc/profile.d/myvar.sh
SHELL
When I run the env command in the VM, I see it is set:
KAGGLE_CONFIG_DIR=/home/ubuntu/.kaggle/kaggle.json
However, when I try to use the kaggle command, for example kaggle -h, I get the following:
(main) vagrant@dev:/home/ubuntu/.kaggle$ ls
kaggle.json
(main) vagrant@dev:/home/ubuntu/.kaggle$ kaggle -h
Traceback (most recent call last):
File "/user/home/venvs/main/bin/kaggle", line 5, in <module>
from kaggle.cli import main
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/api/kaggle_api_extended.py", line 149, in authenticate
self.config_file, self.config_dir))
OSError: Could not find kaggle.json. Make sure it's located in /home/ubuntu/.kaggle/kaggle.json. Or use the environment method.
The paths all look correct and the file is right where it is looking for it. Does anyone know what the issue could be? Is it because the folder is mounted?
Alright, I misread the instructions: "You can define a shell environment variable KAGGLE_CONFIG_DIR to change this location to $KAGGLE_CONFIG_DIR/kaggle.json"
So the env variable should be /home/ubuntu/.kaggle/ instead of /home/ubuntu/.kaggle/kaggle.json.
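In the Vagrantfile provisioning above, the exported value should therefore drop the trailing /kaggle.json. A minimal sketch of the corrected variable:

```shell
# KAGGLE_CONFIG_DIR must name the directory that contains kaggle.json,
# not the kaggle.json file itself.
export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle'
# The kaggle CLI appends the file name on its own:
echo "$KAGGLE_CONFIG_DIR/kaggle.json"
```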
I'm trying to transfer files from S3 to GCS.
I don't own the S3 bucket and was provided with keys.
I edited my .boto file and entered the access key ID and secret key, but my gsutil cp command returned an access denied error. I can browse and download these files with the various free S3 browser utilities out there.
Might the owner need to adjust something on their end?
gsutil cp -r s3://origin gs://destination
Copying s3://origin/17/_SUCCESS [Content-Type=binary/octet-stream]...
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/daisy_chain_wrapper.py", line 196, in PerformDownload
decryption_tuple=self.decryption_tuple)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/cloud_api_delegator.py", line 276, in GetObjectMedia
decryption_tuple=decryption_tuple)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 513, in GetObjectMedia
generation=generation)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 1476, in _TranslateExceptionAndRaise
raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
<RequestId>DD4EA91291B40907</RequestId>
In your SSH session, check what account is activated on the instance.
$ gcloud auth list
Then try running gsutil with the top-level -D (or -DD) flag to see exactly why the command fails:
1) To copy from S3 to local disk
gsutil -D cp -r s3://secret-bucket/some_key/ /local/directory/my-s3-files/
2) To copy from local disk to GCS bucket
gsutil -D cp -r /local/directory/my-s3-files/ gs://secret-elsewhere/destination/
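For completeness, gsutil picks up the S3 credentials from the [Credentials] section of your .boto file. A sketch with placeholder values (written to a scratch file here rather than the real ~/.boto):

```shell
# Write a sample of the .boto section gsutil reads for S3 access.
# The two values are placeholders, not real keys.
cat > /tmp/boto.sample <<'EOF'
[Credentials]
aws_access_key_id = YOUR_S3_ACCESS_KEY_ID
aws_secret_access_key = YOUR_S3_SECRET_ACCESS_KEY
EOF
grep -c '^aws_' /tmp/boto.sample   # both credential lines present
```

If the copy still returns 403 with valid keys, the bucket owner may need to grant your key read access (for example s3:GetObject on the objects and s3:ListBucket on the bucket).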
You can also look at the Storage Transfer Service documentation for another way to transfer data into Cloud Storage.
First determine whether the permission error comes from the GCP side or the S3 side. These articles have more information:
https://github.com/GoogleCloudPlatform/gsutil/issues/487
https://cloud.google.com/storage/docs/gsutil/commands/cp
I'm trying to connect to a remote rabbitmq host using the cli rabbitmqadmin.
The command I'm trying to execute is:
rabbitmqadmin --host=$RABBITMQ_HOST --port=443 --ssl --vhost=$RABBITMQ_VHOST --username=$RABBITMQ_USERNAME --password=$RABBITMQ_PASSWORD list queues
Before you ask: the environment variables RABBITMQ_HOST, RABBITMQ_VHOST and so on are set... I double- and triple-checked this already.
The error I get back is:
Traceback (most recent call last):
File "/usr/local/sbin/rabbitmqadmin", line 1007, in <module>
main()
File "/usr/local/sbin/rabbitmqadmin", line 413, in main
method()
File "/usr/local/sbin/rabbitmqadmin", line 588, in invoke_list
format_list(self.get(uri), cols, obj_info, self.options)
File "/usr/local/sbin/rabbitmqadmin", line 436, in get
return self.http("GET", "%s/api%s" % (self.options.path_prefix, path), "")
File "/usr/local/sbin/rabbitmqadmin", line 475, in http
self.options.port)
File "/usr/local/sbin/rabbitmqadmin", line 451, in __initialize_https_connection
context = self.__initialize_tls_context())
File "/usr/local/sbin/rabbitmqadmin", line 467, in __initialize_tls_context
self.options.ssl_key_file)
TypeError: coercing to Unicode: need string or buffer, NoneType found
From the last line I assume it's a Python-related problem; my current Python version is 2.7.12. If I connect to the local instance of RabbitMQ with
rabbitmqadmin list queues
everything works fine. Any help is greatly appreciated, thanks :)
Shouldn't the parameters be passed with a space instead of =?
rabbitmqadmin --host $RABBITMQ_HOST --port 443 --ssl --vhost $RABBITMQ_VHOST --username $RABBITMQ_USERNAME --password $RABBITMQ_PASSWORD list queues
Maybe the = doesn't matter, but it's worth ruling out.
Validate that you are using the same rabbitmqadmin version as the version of your remote hosted broker. Using a mismatching rabbitmqadmin version will result in that error (for example rabbitmqadmin 3.6.4 querying a 3.5.7 server).
Browse to http://server-name:15672/cli/ and download the correct tool from there.
https://github.com/rabbitmq/rabbitmq-management/issues/299
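A small sketch of that check, using the example versions from above (in practice, rabbitmqadmin prints its own version with --version, and the management UI shows the broker's version):

```shell
# Example versions taken from the answer above; substitute your own.
local_version="3.6.4"      # version of the local rabbitmqadmin script
server_version="3.5.7"     # version of the remote broker
if [ "$local_version" != "$server_version" ]; then
  echo "mismatch: download rabbitmqadmin from http://server-name:15672/cli/"
fi
```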
My goal is to have an optical LINC switch running and to use Ryu-oe to control it. Ryu-oe is just the Ryu controller with some optical extensions.
I receive the following error when I try to run Ryu-oe following the instructions from this link.
File "/usr/local/bin/ryu-manager", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: msgpack-python>=0.4.0
Does anyone know how I can solve this error?
OK, it seems the problem is solved, although to be honest I don't know which command fixed it. Here are some of the commands I ran (make sure you are in the ryu-oe directory):
sudo -H ./run_tests.sh
sudo ./run_tests.sh
sudo -H python ./setup.py install
and then I ran sudo ryu-manager ~/ryu-oe/ryu/app/ofctl_rest.py.
Let me know which one worked for you so that we can come up with a better answer.
This command worked for me:
$ sudo pip install --upgrade msgpack-python
I am new to OpenERP and I just installed OpenERP 7.0 on Ubuntu 12.04 using the All-In-One ".deb" file. But when I tried to open it, it gave me this error message:
Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I checked the "openerp-server.log" file and it gave me this:
self.gen.next()
File "/usr/share/pyshared/openerp/addons/web/http.py", line 422, in session_context
session_store.save(request.session)
File "/usr/share/pyshared/werkzeug/contrib/sessions.py", line 237, in save
dir=self.path)
File "/usr/lib/python2.7/tempfile.py", line 300, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags)
File "/usr/lib/python2.7/tempfile.py", line 235, in _mkstemp_inner
fd = _os.open(file, flags, 0600)
OSError: [Errno 13] Permission denied: '/tmp/oe-sessions-openerp/tmpNUQsbf.__wz_sess'
What is going wrong and how can I fix it?
Thanks!
It looks like a permission issue. Check the permissions of your server, addons, and web directories and grant read/write/create/delete access like this:
chmod 777 DIRPATH_OF_SERVER -R
chmod 777 DIRPATH_OF_ADDONS -R
chmod 777 DIRPATH_OF_WEB -R
After assigning all permissions, can you re-check it?
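Note that 777 grants read/write/execute to every local user, which is broader than strictly needed. A small sketch of what those chmod commands do, demonstrated on a scratch directory rather than the real server tree (assumes a Linux box with GNU stat):

```shell
# Demonstrate the effect of chmod 777 on a throwaway directory.
d="$(mktemp -d)"        # mktemp creates it as 700 (owner-only)
chmod 777 "$d"          # now owner, group, and others all get rwx
stat -c '%a' "$d"       # prints the octal mode: 777
```

A narrower fix, given the traceback above, would be to change ownership of just /tmp/oe-sessions-openerp to the user the OpenERP server runs as.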