I am trying to install the "Gantt chart" module for my projects in Odoo and I get this error. Does anyone have a solution?
Traceback (most recent call last):
File "/home/odooadmin/src/odoo/odoo/http.py", line 639, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/home/odooadmin/src/odoo/odoo/http.py", line 315, in _handle_exception
raise exception.with_traceback(None) from new_cause
TypeError: unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
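For context, a minimal standalone sketch (not the Gantt module's own code) of what that last line complains about: one operand is a plain date and the other a datetime, and the usual fix is to normalize both to datetime before subtracting.
from datetime import date, datetime

d = date(2022, 12, 6)
dt = datetime(2022, 12, 1, 9, 30)

# d - dt raises: TypeError: unsupported operand type(s) for -: 'datetime.date' and 'datetime.datetime'
# Converting the plain date to a datetime makes the subtraction valid:
delta = datetime.combine(d, datetime.min.time()) - dt
print(delta)  # 4 days, 14:30:00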
Related
In a brand new installation of Odoo 14 I have installed the "web-responsive" module (from the OCA "web" repository).
On the menu screen, a couple of icons are missing (as shown in the attached picture), and I am seeing these messages in the log:
2022-12-06 10:06:39,035 4436 INFO sperim odoo.addons.base.models.ir_attachment: _read_file reading /home/me/deployment_sc/data_dir/filestore/sperim/c5/c54d3d5e2b1320083bf5378b7c195b0985fa04c1
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/api.py", line 789, in get
field_cache = field_cache[record.env.cache_key(field)]
KeyError: (None,)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/fields.py", line 970, in __get__
value = env.cache.get(record, self)
File "/home/me/odoo/odoo/odoo/api.py", line 793, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: 'res.users(2,).image_128'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/api.py", line 789, in get
field_cache = field_cache[record.env.cache_key(field)]
KeyError: (None,)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/fields.py", line 970, in __get__
value = env.cache.get(record, self)
File "/home/me/odoo/odoo/odoo/api.py", line 793, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: 'res.partner(3,).image_128'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/api.py", line 789, in get
field_cache = field_cache[record.env.cache_key(field)]
KeyError: (None, None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/fields.py", line 970, in __get__
value = env.cache.get(record, self)
File "/home/me/odoo/odoo/odoo/api.py", line 793, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: 'ir.attachment(10,).datas'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/api.py", line 789, in get
field_cache = field_cache[record.env.cache_key(field)]
KeyError: (None,)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/fields.py", line 970, in __get__
value = env.cache.get(record, self)
File "/home/me/odoo/odoo/odoo/api.py", line 793, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: 'ir.attachment(10,).raw'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/odoo/odoo/odoo/addons/base/models/ir_attachment.py", line 105, in _file_read
with open(full_path, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/me/deployment_sc/data_dir/filestore/sperim/c5/c54d3d5e2b1320083bf5378b7c195b0985fa04c1'
So I understand that a file is being looked up and not found, but I don't understand why. As far as I can tell, these files are supposed to be static resources of the web-responsive module or of some other module distributed with Odoo. Why aren't they found?
Try upgrading all the modules with -u all from the terminal, or try upgrading your 'base' module.
If your database was restored from a backup file, import its filestore as well.
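A rough sketch of those suggestions, assuming a source install with odoo-bin available and the database name sperim taken from the log above; adjust paths and names to your own setup:
./odoo-bin -d sperim -u all --stop-after-init
# or only the base module:
./odoo-bin -d sperim -u base --stop-after-init
# If the database was restored from another server, also copy that server's
# data_dir/filestore/sperim directory into this instance's data_dir.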
I am testing Yolo-v3 (https://github.com/experiencor/keras-yolo3) with tensorflow-gpu 1.15 and keras 2.3.1. The training process is started by:
runfile("train.py",'-c config.json')
Here are the printed out messages:
Using TensorFlow backend.
WARNING:tensorflow:From train.py:40: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.
valid_annot_folder not exists. Spliting the trainining set.
Seen labels: {'kangaroo': 266}
Given labels: ['kangaroo']
Training on: ['kangaroo']
WARNING:tensorflow:From C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
.....
Loading pretrained weights.
C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\callbacks\callbacks.py:998: UserWarning: `epsilon` argument is deprecated and will be removed, use `min_delta` instead.
warnings.warn('`epsilon` argument is deprecated and '
Traceback (most recent call last):
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1348, in _run_fn
self._extend_graph()
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1388, in _extend_graph
tf_session.ExtendSession(self._session)
InvalidArgumentError: Cannot assign a device for operation replica_0/lambda_1/Shape: {{node replica_0/lambda_1/Shape}} was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
[[replica_0/lambda_1/Shape]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 305, in <module>
_main_(args)
File "train.py", line 282, in _main_
max_queue_size = 8
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\engine\training.py", line 1732, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\engine\training_generator.py", line 42, in fit_generator
model._make_train_function()
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\engine\training.py", line 333, in _make_train_function
**self._function_kwargs)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\backend\tensorflow_backend.py", line 3006, in function
v1_variable_initialization()
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\backend\tensorflow_backend.py", line 420, in v1_variable_initialization
session = get_session()
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\keras\backend\tensorflow_backend.py", line 385, in get_session
return tf_keras_backend.get_session()
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\keras\backend.py", line 486, in get_session
_initialize_variables(session)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\keras\backend.py", line 903, in _initialize_variables
[variables_module.is_variable_initialized(v) for v in candidate_vars])
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
InvalidArgumentError: Cannot assign a device for operation replica_0/lambda_1/Shape: node replica_0/lambda_1/Shape (defined at C:\Users\Dy\Anaconda3\envs\tf1x\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
[[replica_0/lambda_1/Shape]]
I don't understand what caused the InvalidArgumentError. Is my tensorflow-gpu not installed correctly, or is there some conflict in deploying the GPU?
Try changing the "gpus" value to "0" if it is set to anything else. It should work if you are executing on a GPU.
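If the config change alone does not help, a quick check (a sketch, not part of the repository's code) of whether this TensorFlow 1.15 environment can see a GPU at all, plus the standard TF 1.x workaround of letting misplaced ops fall back to the CPU:
import tensorflow as tf
from tensorflow.python.client import device_lib

# If no "/device:GPU:0" entry appears here, tensorflow-gpu/CUDA is not set up
# correctly and any op explicitly pinned to GPU:0 fails exactly as above.
print(device_lib.list_local_devices())

# allow_soft_placement lets ops assigned to an unavailable device run on the CPU.
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config))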
I'm trying to convert a frozen graph to a JSON file. I use this command:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names="SemanticPredictions" --saved_model_tags=serve frozen_inference_graph.pb mymodal
But it gives this error:
Traceback (most recent call last):
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\ProgramData\Anaconda3\envs\tensorflow0\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 645, in pip_main
main([' '.join(sys.argv[1:])])
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 649, in main
convert(argv[0].split(' '))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\converter.py", line 632, in convert
strip_debug_ops=args.strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 379, in convert_tf_frozen_model
strip_debug_ops=strip_debug_ops)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 133, in optimize_graph
graph.add_to_collection('train_op', graph.get_operation_by_name(name))
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3633, in get_operation_by_name
return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3505, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "d:\programdata\anaconda3\envs\tensorflow0\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3565, in _as_graph_element_locked
"graph." % repr(name))
KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph."
I don't know why it gives the KeyError: "The name 'SemanticPredictions' refers to an Operation not in the graph." error.
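The message means the converter cannot find an operation called SemanticPredictions in that frozen graph, so the output node may simply be named differently in this export. A small sketch (assuming TensorFlow is installed in the same environment) that prints the node names actually present in frozen_inference_graph.pb, so the right value can be passed to --output_node_names:
import tensorflow as tf

# Load the frozen GraphDef and list its node names.
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name)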
Odoo Server Error:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 648, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 685, in dispatch
...............
return api.model(lambda model: field.convert_to_write(value(model)))
File "/usr/lib/python2.7/dist-packages/openerp/fields.py", line 1719, in convert_to_write
return value.id
AttributeError: 'NoneType' object has no attribute 'id'
I get this error every time I install the system app. Why am I encountering it, and how can I avoid it?
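For what it's worth, the last frames show the default-value machinery calling convert_to_write() on a value that is None, i.e. some field default returned None and convert_to_write() then tried value.id. A hypothetical illustration of that pattern (model, field, and helper names are invented; this is not the code of the app being installed):
# Hypothetical Many2one default; every name here is invented for illustration.
def _default_partner(self):
    partner = self.env['res.partner'].search([('name', '=', 'Main Partner')], limit=1)
    # Returning None from a default ends up in convert_to_write(), which does
    # value.id and raises exactly this AttributeError; return a recordset or False.
    return partner or False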
I'm trying to get the Google Assistant SDK sample running on a Pi, but after receiving the authorization URL by using
google-oauthlib-tool --client-secrets /home/pi/client_secret_<my-id>.json --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless
I enter the code from the provided URL and get this dump:
Traceback (most recent call last):
File "/home/pi/env/lib/python3.4/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/home/pi/env/lib/python3.4/site-packages/urllib3/connectionpool.py", line 345, in _make_request
self._validate_conn(conn)
File "/home/pi/env/lib/python3.4/site-packages/urllib3/connectionpool.py", line 844, in _validate_conn
conn.connect()
File "/home/pi/env/lib/python3.4/site-packages/urllib3/connection.py", line 326, in connect
ssl_context=context)
File "/home/pi/env/lib/python3.4/site-packages/urllib3/util/ssl_.py", line 325, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib/python3.4/ssl.py", line 364, in wrap_socket
_context=self)
File "/usr/lib/python3.4/ssl.py", line 577, in __init__
self.do_handshake()
File "/usr/lib/python3.4/ssl.py", line 804, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/env/lib/python3.4/site-packages/requests/adapters.py", line 440, in send
timeout=timeout
File "/home/pi/env/lib/python3.4/site-packages/urllib3/connectionpool.py", line 630, in urlopen
raise SSLError(e)
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/env/bin/google-oauthlib-tool", line 11, in <module>
sys.exit(main())
File "/home/pi/env/lib/python3.4/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/pi/env/lib/python3.4/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/pi/env/lib/python3.4/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/pi/env/lib/python3.4/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/pi/env/lib/python3.4/site-packages/google_auth_oauthlib/tool/__main__.py", line 106, in main
creds = flow.run_console()
File "/home/pi/env/lib/python3.4/site-packages/google_auth_oauthlib/flow.py", line 358, in run_console
self.fetch_token(code=code)
File "/home/pi/env/lib/python3.4/site-packages/google_auth_oauthlib/flow.py", line 235, in fetch_token
**kwargs)
File "/home/pi/env/lib/python3.4/site-packages/requests_oauthlib/oauth2_session.py", line 221, in fetch_token
verify=verify, proxies=proxies)
File "/home/pi/env/lib/python3.4/site-packages/requests/sessions.py", line 560, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/pi/env/lib/python3.4/site-packages/requests_oauthlib/oauth2_session.py", line 360, in request
headers=headers, data=data, **kwargs)
File "/home/pi/env/lib/python3.4/site-packages/requests/sessions.py", line 513, in request
resp = self.send(prep, **send_kwargs)
File "/home/pi/env/lib/python3.4/site-packages/requests/sessions.py", line 623, in send
r = adapter.send(request, **kwargs)
File "/home/pi/env/lib/python3.4/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
I checked this post, which had a similar issue, and updated my clock accordingly, but that didn't seem to be the problem. I have redownloaded all the libraries and tools and ensured Python is up to date, but I can't get this to work. Is there any solution to this issue?
It is also worth noting that this same trace occurs regardless of what I put in as the authorization code; even just putting in "x" produces the same output.
I was following the instructions at https://developers.google.com/assistant/sdk/develop/python/run-sample, but I think those instructions are missing a step:
sudo apt-get install portaudio19-dev libffi-dev libssl-dev
Doing that and then re-running google-oauthlib-tool worked.
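A quick way to confirm it was the SSL stack rather than the OAuth flow itself (an assumption on my part; the URL below is just an arbitrary Google API host, not part of the SDK): a plain HTTPS request from the same virtualenv would likely raise the same CERTIFICATE_VERIFY_FAILED error before the fix and return an HTTP status code afterwards.
import requests

# Raises requests.exceptions.SSLError (CERTIFICATE_VERIFY_FAILED) while the SSL
# stack is broken; prints a status code (e.g. 404) once it works again.
print(requests.get('https://www.googleapis.com/').status_code)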