redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first - redis

I've recently started to use Redis and RQ to run background processes. I built a Dash app which works fine on Heroku and used to work locally as well. Recently, I tried to test the same app locally again and I keep getting the following error - although I'm using exactly the same code hosted on Heroku:
redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first.
In my requirements.txt and virtualenv on Ubuntu 18.04 I have redis v3.0.1 and rq 0.13.0.
When I run redis-server in my terminal I see that Redis 4.0.9 is used (that's also confusing to me).
I've googled for two days looking for a solution, to no avail.
Does anyone have an idea of what might have happened and how to solve this error?
Here is the full relevant traceback:
File "/home/tom/dashenv/pb101_models/pages/cumulative_culture.py", line 1026, in stop_or_start_update
job = q.fetch_job(job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/rq/queue.py", line 142, in fetch_job
self.remove(job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/rq/queue.py", line 186, in remove
return self.connection.lrem(self.key, 1, job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/client.py", line 1580, in lrem
return self.execute_command('LREM', name, count, value)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/client.py", line 754, in execute_command
connection.send_command(*args)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 619, in send_command
self.send_packed_command(self.pack_command(*args))
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 659, in pack_command
for arg in imap(self.encoder.encode, args):
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 124, in encode
"byte, string or number first." % typename)
redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first.
Thanks in advance for any suggestion/hint.
All best,
Tom

Check this link: redis 3.0
It says that redis-py 3.0 no longer accepts NoneType values.

Try json.dumps(None); that worked for me.
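A minimal sketch of the same idea as a guard, assuming (as in the traceback above) that the None is a job_id handed to q.fetch_job before any job exists:

# Guard against a missing job id before it reaches redis-py,
# which raises DataError on None arguments as of 3.0.
def safe_fetch(q, job_id):
    if job_id is None:
        return None
    return q.fetch_job(job_id)

json.dumps(None) works for the same reason: it serializes None to the string "null", which redis-py can encode.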

Related

What is the recommended architecture for using amazon neptune in a scalable way?

I am building an application backed by a Neptune database. Because I want the application to be scalable, I am using AWS Lambda + API gateway to build a REST API to interact with the database. This seems to be a reasonable idea based on the fact that this use case is documented in the Neptune docs.
The Neptune docs recommend reusing the websocket connection to the database across the entire execution context of the function, which is what I am doing at the moment. The docs also recommend resetting the connection and retrying upon errors (see here), which I am also using. However, I am seeing exceptions every now and then (perhaps every 20 requests on average). One of the exceptions I get is
ConnectionResetError: Cannot write to closing transport
which seems to be the same as this issue.
The other one is:
Traceback (most recent call last):
File "/var/task/chalice/app.py", line 1685, in _get_view_function_response
response = view_function(**function_args)
File "/var/task/app.py", line 57, in resource
return Resource(app.current_request, g).process()
File "/var/task/backoff/_sync.py", line 94, in retry
ret = target(*args, **kwargs)
File "/var/task/chalicelib/handlers/resource.py", line 106, in get
values = resources.valueMap().with_(WithOptions.tokens).toList()
File "/var/task/gremlin_python/process/traversal.py", line 57, in toList
return list(iter(self))
File "/var/task/gremlin_python/process/traversal.py", line 47, in __next__
self.traversal_strategies.apply_strategies(self)
File "/var/task/gremlin_python/process/traversal.py", line 548, in apply_strategies
traversal_strategy.apply(traversal)
File "/var/task/gremlin_python/driver/remote_connection.py", line 63, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/var/task/gremlin_python/driver/driver_remote_connection.py", line 60, in submit
results = result_set.all().result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/var/task/gremlin_python/driver/resultset.py", line 90, in cb
f.result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/var/lang/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/var/task/gremlin_python/driver/connection.py", line 82, in _receive
data = self._transport.read()
File "/var/task/gremlin_python/driver/aiohttp/transport.py", line 104, in read
raise RuntimeError("Connection was already closed.")
RuntimeError: Connection was already closed.
In case it is relevant, I am using gremlinpython==3.5.1
It seems to me that these issues are all ultimately a consequence of using AWS Lambda, namely the mismatch between the longevity of websocket connections and the ephemeral nature of Lambda execution contexts. The question then is: am I doing the wrong thing by trying to use AWS Lambda for my API? Would it be more appropriate to set up an EC2 instance and deal with scalability in some other way?
P.S. I previously created and closed a connection in every function execution (as the Neptune docs used to recommend), which worked fine but was naturally slow.
The latest version of Neptune only supports Gremlin 3.4.11 (https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases-1.0.5.1.html). I would start by using gremlin-python 3.4.11 and see if that resolves your issue. Gremlin-python 3.5 replaced Tornado with AIOHTTP (ref) for websocket connections, and I suspect that change may be causing a slight change in behavior that a future release supporting Gremlin 3.5 will address.
I wonder whether the 'Connection was already closed' error message is not being treated as a retriable error by the retry logic?
What happens if you add this error message to the list of retriable_error_msgs in the Python example in the docs?
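A rough sketch of that suggestion combined with the connection reuse described in the question: treat both messages as retriable, reuse a module-level connection across warm invocations, and rebuild it before retrying. Here create_remote_connection() is a hypothetical stand-in for your own setup code, and the message list follows the retriable_error_msgs pattern from the docs:

import backoff

RETRIABLE_MSGS = [
    'Connection was already closed',
    'Cannot write to closing transport',
]

conn = None  # module level, so warm Lambda invocations reuse it

def is_retriable(exc):
    return any(msg in str(exc) for msg in RETRIABLE_MSGS)

@backoff.on_exception(backoff.expo, Exception,
                      max_tries=3,
                      giveup=lambda e: not is_retriable(e))
def run_query(build):
    global conn
    if conn is None:
        conn = create_remote_connection()  # hypothetical helper
    try:
        return build(conn)
    except Exception:
        try:
            conn.close()
        except Exception:
            pass
        conn = None  # force a fresh connection on the retry
        raise

A handler would then wrap each traversal, e.g. run_query(lambda c: traversal().withRemote(c).V().limit(1).toList()).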

How to upgrade odoo 8 to odoo 9 database?

I am trying to upgrade an odoo installation from 8.0 to 9.0. What I've done so far is the following:
Backup the odoo database from the production system
Installed the backup DB as test in my current system
Copied the odoo folder into a folder on my system
Checked if everything works. It works!
Updated to the latest v8.0 version, still works
Did a git checkout 9.0 followed by a git pull.
Started odoo 9.0 with the command ./openerp-server -d testDB -u all
This commands breaks with the following error and does not update my database:
LINE 1: select model, transient from ir_model where state='manual'
                      ^
, in query select model, transient from ir_model where state=%s
2015-10-26 00:37:29,823 4501 CRITICAL testDB openerp.service.server:
Failed to initialize database `testDB`.
Traceback (most recent call last):
File "/opt/odoo/openerp/service/server.py", line 885, in preload_registries
registry = RegistryManager.new(dbname, update_module=update_module)
File "/opt/odoo/openerp/modules/registry.py", line 385, in new
openerp.modules.load_modules(registry._db, force_demo, status, update_module)
File "/opt/odoo/openerp/modules/loading.py", line 279, in load_modules
loaded_modules, processed_modules = load_module_graph(cr, graph, status, perform_checks=update_module, report=report)
File "/opt/odoo/openerp/modules/loading.py", line 136, in load_module_graph
registry.setup_models(cr, partial=True)
File "/opt/odoo/openerp/modules/registry.py", line 185, in setup_models
cr.execute('select model, transient from ir_model where state=%s', ('manual',))
File "/opt/odoo/openerp/sql_db.py", line 139, in wrapper
return f(self, *args, **kwargs)
File "/opt/odoo/openerp/sql_db.py", line 215, in execute
res = self._obj.execute(query, params)
ProgrammingError: column "transient" does not exist
LINE 1: select model, transient from ir_model where state='manual'
Are there any steps I have to follow to upgrade the database, or does everything have to be done by hand? And if so, what should I do? Obviously it failed because that column doesn't exist in my database, but is there an update script? I fear that if I change this by hand, the next error will just be waiting for me.
Thanks in advance.
You can ask the Odoo company to do that task for you by going to this link, but they will charge money for it. If you want to do it yourself, here is the documentation on how to do that:
https://doc.therp.nl/openupgrade/intro.html
Option 2: use pgAdmin (a PostgreSQL GUI tool). Select your database, open the SQL window, and issue a query that selects all the data from the table you want to retrieve, then export the result. The exported file contains all the data with column headings; you may have to rearrange the columns to match the Odoo 9 schema. Once that is done, select the Odoo 9 database, right-click the table you want to import into, and choose the import option. It may take a while, and it should finish with a "data imported successfully" message.
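If you prefer a script to the GUI, a minimal sketch of the same export using psycopg2 (database, table, and file names are examples):

import psycopg2

# Export one table from the old v8 database to CSV, headers included.
src = psycopg2.connect(dbname='testDB', user='odoo')
with src.cursor() as cr, open('res_partner.csv', 'w') as f:
    cr.copy_expert("COPY res_partner TO STDOUT WITH CSV HEADER", f)
src.close()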
I found the answer on Github.
The trick is to add a Boolean column called transient, with a default value of false, to the ir_model table.
As I expected, this is not the complete solution, as there are other problems with the database needing adjustments.
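A sketch of that fix as a script, assuming direct access to the test database (connection details are placeholders); keep the caveat above in mind, since more adjustments will follow:

import psycopg2

conn = psycopg2.connect(dbname='testDB', user='odoo')
with conn.cursor() as cr:
    # The column the Odoo 9 code base expects but an Odoo 8 database lacks.
    cr.execute("ALTER TABLE ir_model ADD COLUMN transient boolean DEFAULT false")
conn.commit()
conn.close()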
You are trying to run an Odoo 8.0 database on Odoo 9.0.
The 'transient' column is in the 9.0 code base but not in the 8.0 code base, so the error means an 8.0 database is being run on the 9.0 code base; the database has not been upgraded properly.
As stated in the previous answer, you can either get Odoo to do the upgrade or do it yourself.

plone.app.blob RuntimeError while migrating ATFile

I'm doing a blob migration on a 3.2.1 site and I'm getting "RuntimeError: maximum recursion depth exceeded while calling a Python object" on some files during @@blob-file-migration.
I found this http://svn.eionet.europa.eu/projects/Zope/ticket/4190 and it looks like they solved this problem for images by creating a custom migrator.
Any clue? Traceback below.
File "/home/simahawk/dev/plone/plone3/projx/src/plone.app.blob/src/plone/app/blob/content.py", line 113, in setFile
mutator = self.getField('file').getMutator(self)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 241, in getField
return self.Schema().get(key)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 828, in Schema
schema = ISchema(self)
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/archetypes.schemaextender-2.1.1-py2.4.egg/archetypes/schemaextender/extender.py", line 143, in cachingInstanceSchemaFactory
key = IUUID(context, str(id(context)))
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
RuntimeError: maximum recursion depth exceeded in cmp
2013-03-06 10:16:49 INFO ATCT.migration Rolling back to last safe point
When migrating from Plone 3.x to Plone 4.x using Products.contentmigration I got the same error. It seemed there was a bug in the plone.app.blob migration. We made this custom migration to bypass the recursion error: http://svn.eionet.europa.eu/projects/Zope/browser/trunk/Products.EEAPloneAdmin/trunk/Products/EEAPloneAdmin/Extensions/ImageFS2Image.py?rev=29656
The problem is the archetypes.schemaextender version (2.1.1). Down-pinning to 1.6.0 solved the issue. This also solved a random KeyError on a 3.3.5 site. I think this is related to #12051 and #11396. These look like common problems with newer versions of archetypes.schemaextender, but the package's README has no info for Plone 3.x.
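If the site is built with buildout, the down-pin is a one-line version pin (a sketch using the standard [versions] section):

[versions]
archetypes.schemaextender = 1.6.0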

POSKeyError during migration

Migration from Plone 3.3.2 to Plone 4.2.1 fails with a POSKeyError. I've tried the recipes from this article: http://plonechix.blogspot.com/2009/12/definitive-guide-to-poskeyerror.html.
I've run the error_finder snippet, but it didn't give me any exceptions. I've also tried to get the object in the debugger using app.mysite._p_jar[p64(oid)], with no success; it fails with the same error.
How can I delete the broken object, or at least get more info about it (e.g. its class name or location)?
Full traceback:
POSKeyError('\x00\x00\x00\x00\x00\x0ey=',)
(Also, the following error occurred while attempting to render the standard error message, please see the event log for full details:
An operation previously failed, with traceback:
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZServer/PubCore/ZServerPublisher.py", line 31, in __init__
response=b)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 443, in publish_module
environ, debug, request, response)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 237, in publish_module_standard
response = publish(request, module_name, after_list, debug=debug)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 134, in publish
transactions_manager.commit()
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/Zope2/App/startup.py", line 301, in commit
transaction.commit()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_manager.py", line 89, in commit
return self.get().commit()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 336, in commit
t, v, tb = self._saveAndGetCommitishError()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 329, in commit
self._commitResources()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 443, in _commitResources
rm.commit(self)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/Connection.py", line 572, in commit
oid, serial, transaction)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/BaseStorage.py", line 416, in checkCurrentSerialInTransaction
committed_tid = self.getTid(oid)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/FileStorage/FileStorage.py", line 770, in getTid
with self._lock:
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/FileStorage/FileStorage.py", line 403, in _lookup_pos
raise POSKeyError(oid)
POSKeyError: 0x0e793d
You can use fsrefs.py to find the bad object.
A very short article on using it is: http://nathanvangheem.com/news/fixing-broken-zodb-object-references
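In the same spirit, a small sketch that scans the FileStorage directly for records referencing the bad oid from the traceback, which tells you which object holds the broken reference (the Data.fs path is a placeholder):

from ZODB.FileStorage import FileStorage
from ZODB.serialize import referencesf
from ZODB.utils import p64, oid_repr

bad = p64(0x0e793d)  # the oid from the POSKeyError above
fs = FileStorage('var/filestorage/Data.fs', read_only=True)
for txn in fs.iterator():
    for rec in txn:
        if rec.data and bad in referencesf(rec.data):
            print('referenced from oid', oid_repr(rec.oid))
fs.close()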
I believe this is the same issue I just ran into, which happens if a savepoint that included adding an object to the catalog is rolled back. I think this is a bug in the ZODB, but you can work around it by addressing whatever is rolling back a savepoint; in this case, that's the migration of files and images to blobs. If you fix whatever is keeping those files or images from successfully migrating to blobs (or just delete them), the migration should succeed.

How do I set a backend for django-celery. I set CELERY_RESULT_BACKEND, but it is not recognized

I set CELERY_RESULT_BACKEND = "amqp" in celeryconfig.py
but I get:
>>> from tasks import add
>>> result = add.delay(3,5)
>>> result.ready()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 105, in ready
return self.state in self.backend.READY_STATES
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 184, in state
return self.backend.get_status(self.task_id)
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 414, in _is_disabled
raise NotImplementedError("No result backend configured. "
NotImplementedError: No result backend configured. Please see the documentation for more information.
I just went through this, so I can shed some light on it. One might think that, with all the great documentation, some of this would have been a bit more obvious.
I'll assume you have RabbitMQ up and functioning (it needs to be running), and that you have django-celery installed.
Once you have that, all you need to do is include this single line in your settings.py file:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
Then you need to run syncdb and start things up with:
python manage.py celeryd -E -B --loglevel=info
The -E flag says you want events captured and -B says you want celerybeat running. The former lets you actually see something in the admin window, and the latter lets you schedule tasks. Finally, to actually capture the events and task status, run this in another terminal:
./manage.py celerycam
Then you're finally able to see the working example provided in the docs, again assuming you created the tasks.py it tells you to.
>>> result = add.delay(4, 4)
>>> result.ready() # returns True if the task has finished processing.
False
>>> result.result # task is not ready, so no return value yet.
None
>>> result.get() # Waits until the task is done and returns the retval.
8
>>> result.result # direct access to result, doesn't re-raise errors.
8
>>> result.successful() # returns True if the task didn't end in failure.
True
You can then view your task status in the admin panel.
I hope this helps! One more thing that helped me: watching the RabbitMQ log file was key, as it let me confirm that django-celery was actually talking to RabbitMQ.
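Pulling that together, a minimal sketch of the relevant settings.py pieces. With django-celery the configuration lives in Django's settings.py rather than a standalone celeryconfig.py, which may be why your CELERY_RESULT_BACKEND was ignored; the setup_loader() call is standard django-celery usage:

# settings.py (django-celery style)
import djcelery
djcelery.setup_loader()

BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_RESULT_BACKEND = "amqp"

INSTALLED_APPS = (
    # ... your other apps ...
    'djcelery',
)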
Are you running django-celery?
If so, you need to start a Python shell in the context of Django (or whatever the technical term is).
Type:
python manage.py shell
And try your commands from that shell
Hi, I tried everything to get celery v3.1.25 working with Django 1.8 and nothing worked.
Finally the line below helped me, feeling happy:
app = Celery('documents', backend="celery.backends.amqp:AMQPBackend")
Setting backend="celery.backends.amqp:AMQPBackend" fixed my error.