I'm migrating from the SQLite backend to an Oracle backend. The Oracle database already exists and is maintained by other people; its version is Oracle9i Enterprise Edition Release 9.2.0.1.0.
I have a simple model:
class AliasType(models.Model):
    id = models.AutoField(primary_key=True, db_column="F_ALIAS_ID")
    name = models.CharField(u"Type name", max_length=255, unique=True, db_column="F_ALIAS_NAME")

    class Meta:
        db_table = "ALIAS"
./manage.py syncdb does not return any errors. But when I try to create a new instance and save it to the database, I get the following error:
>>> AliasType.objects.create(name="test")
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/manager.py", line 138, in create
return self.get_query_set().create(**kwargs)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/query.py", line 360, in create
obj.save(force_insert=True, using=self.db)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/base.py", line 460, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/base.py", line 553, in save_base
result = manager._insert(values, return_id=update_pk, using=using)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/manager.py", line 195, in _insert
return insert_query(self.model, values, **kwargs)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/query.py", line 1435, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/sql/compiler.py", line 791, in execute_sql
cursor = super(SQLInsertCompiler, self).execute_sql(None)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/models/sql/compiler.py", line 735, in execute_sql
cursor.execute(sql, params)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/backends/util.py", line 18, in execute
return self.cursor.execute(sql, params)
File "/mnt/Data/private/projects/envs/termary-oracle/src/django/django/db/backends/oracle/base.py", line 630, in execute
return self.cursor.execute(query, self._param_generator(params))
IntegrityError: ORA-01400: cannot insert NULL into ("SINCE"."ALIAS"."F_ALIAS_ID")
If I specify the id explicitly, e.g. AliasType.objects.create(id=5, name="test"), it works. I thought Django would be able to retrieve the id value automatically. I've learnt that Oracle does not support autoincrement and that I should use triggers and sequences instead. I was told there is an existing sequence in the database that returns ids for all new rows, and I know its name, say SEQ_GET_NEW_ID.
So the question is how to implement that in the most elegant way, i.e. how to tell Django to get id values for all new objects from the sequence named SEQ_GET_NEW_ID without hacking it too much (e.g. overriding save() methods for all models)?
There is an open ticket (#1946) to allow exactly that: overriding the default sequence name. But as it's not closed yet, I don't think there is a way to do this without hacking.
I haven't used Oracle before, but a quick search suggests that it is possible to create aliases/synonyms for sequences. manage.py sqlall <app> should show you the sequence name Django is expecting. So you probably could just make this an alias for SEQ_GET_NEW_ID.
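A minimal sketch of that approach, assuming Django's usual <table>_SQ sequence naming (so "ALIAS_SQ" here is a guess; confirm the exact name with manage.py sqlall) and that your Oracle user has CREATE SYNONYM privileges:

from django.db import connection

# Point the sequence name Django expects at the existing sequence,
# so Django's INSERT picks up ids from SEQ_GET_NEW_ID transparently.
cursor = connection.cursor()
cursor.execute('CREATE SYNONYM "ALIAS_SQ" FOR "SEQ_GET_NEW_ID"')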
I am building an application backed by a Neptune database. Because I want the application to be scalable, I am using AWS Lambda + API Gateway to build a REST API to interact with the database. This seems reasonable, given that this use case is documented in the Neptune docs.
The Neptune docs recommend reusing the websocket connection to the database across the entire execution context of the function, which is what I am doing at the moment. The docs also recommend resetting the connection and retrying upon errors (see here), which I am also doing. However, I am seeing exceptions every now and then (perhaps every 20 requests on average). One of the exceptions I get is
ConnectionResetError: Cannot write to closing transport
which seems to be the same as this issue.
The other one is:
Traceback (most recent call last):
File "/var/task/chalice/app.py", line 1685, in _get_view_function_response
response = view_function(**function_args)
File "/var/task/app.py", line 57, in resource
return Resource(app.current_request, g).process()
File "/var/task/backoff/_sync.py", line 94, in retry
ret = target(*args, **kwargs)
File "/var/task/chalicelib/handlers/resource.py", line 106, in get
values = resources.valueMap().with_(WithOptions.tokens).toList()
File "/var/task/gremlin_python/process/traversal.py", line 57, in toList
return list(iter(self))
File "/var/task/gremlin_python/process/traversal.py", line 47, in __next__
self.traversal_strategies.apply_strategies(self)
File "/var/task/gremlin_python/process/traversal.py", line 548, in apply_strategies
traversal_strategy.apply(traversal)
File "/var/task/gremlin_python/driver/remote_connection.py", line 63, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/var/task/gremlin_python/driver/driver_remote_connection.py", line 60, in submit
results = result_set.all().result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/var/task/gremlin_python/driver/resultset.py", line 90, in cb
f.result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/var/lang/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/var/lang/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/var/task/gremlin_python/driver/connection.py", line 82, in _receive
data = self._transport.read()
File "/var/task/gremlin_python/driver/aiohttp/transport.py", line 104, in read
raise RuntimeError("Connection was already closed.")
RuntimeError: Connection was already closed.
In case it is relevant, I am using gremlinpython==3.5.1.
It seems to me that these issues are all ultimately a consequence of using AWS Lambda, namely due to the mismatch between the longevity of websocket connections and the ephemeral nature of lambda execution contexts. The question then is: Am I doing the wrong thing by trying to use AWS lambda for my API? Would it be more appropriate to setup an EC2 instance and deal with the scalability in some other way?
P.S. Previously I created and closed a connection in every function execution (as the Neptune docs used to recommend), which worked fine but was naturally slow.
The latest version of Neptune only supports Gremlin 3.4.11 (https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases-1.0.5.1.html). I would start by using gremlin-python 3.4.11 and see if that resolves your issue. Gremlin-python 3.5 replaced Tornado with aiohttp (ref) for websocket connections, and I suspect that change may be causing a slight change in behavior that a future release supporting Gremlin 3.5 will address.
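For example (assuming pip; note that the package name on PyPI is gremlinpython):

pip install gremlinpython==3.4.11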
I wonder whether the 'Connection was already closed' error message is not being treated as a retriable error by the retry logic?
What happens if you add this error message to the list of retriable_error_msgs in the Python example in the docs?
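A minimal sketch of what I mean (illustrative only; the entries already present in the docs' example are elided here, not reproduced):

retriable_error_msgs = [
    # ... the messages already listed in the Neptune docs example ...
    'Connection was already closed.',      # the RuntimeError in the traceback above
    'Cannot write to closing transport',   # the ConnectionResetError above
]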
My streaming job is now failing with the error below. It worked fine for almost two months, and it is a completely stateless transformation that just needs to append new rows to the destination Delta table. Before streaming, I manually provide the schema for the CSV files, and I have verified that the streaming job's schema and the downstream table's schema match exactly, including data types.
I am not sure why I am getting this error even in a stateless transformation. Any help would be appreciated.
File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 2442, in _call_proxy
return_value = getattr(self.pool[obj_id], method)(*params)
File "/databricks/spark/python/pyspark/sql/utils.py", line 195, in call
raise e
File "/databricks/spark/python/pyspark/sql/utils.py", line 192, in call
self.func(DataFrame(jdf, self.sql_ctx), batch_id)
File "<command-422857213447422>", line 2, in write_to_managed_table
print(f"inside foreachBatch for batch_id:{batchId}, rows in passed dataframe: {micro_batch_df.count()}")
File "/databricks/spark/python/pyspark/sql/dataframe.py", line 670, in count
return int(self._jdf.count())
File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/sql/utils.py", line 110, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o433.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 28 in stage 13792.0
failed 4 times, most recent failure: Lost task 28.3 in stage 13792.0 (TID 752198)
(10.139.64.13 executor 45):
org.apache.spark.sql.execution.streaming.state.StateSchemaNotCompatible: Provided schema
doesn't match to the schema for existing state! Please note that Spark allow difference of
field name: check count of fields and data type of each field.
There might be a problem with the CSV file; it could be corrupted.
You can ignore corrupted records in this CSV file by setting the "mode" option to "PERMISSIVE" or "DROPMALFORMED".
mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing.
PERMISSIVE : sets other fields to null when it meets a corrupted record. When a schema is set by user, it sets null for extra fields.
DROPMALFORMED : ignores the whole corrupted records.
FAILFAST : throws an exception when it meets corrupted records.
https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/streaming/DataStreamReader.html#csv(path:String):org.apache.spark.sql.DataFrame
df = (spark.read.format("csv")
      .option("header", "true")
      .option("mode", "DROPMALFORMED")
      .schema(csvSchema)
      .load("your.csv"))
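Since the question involves a streaming job, the same option can be set on the stream reader as well; a minimal sketch, assuming csvSchema and an input directory path:

df = (spark.readStream.format("csv")
      .option("header", "true")
      .option("mode", "DROPMALFORMED")  # drop malformed records instead of failing
      .schema(csvSchema)                # streaming CSV reads require an explicit schema
      .load("/path/to/input/dir"))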
I am trying to upgrade an odoo installation from 8.0 to 9.0. What I've done so far is the following:
Backup the odoo database from the production system
Installed the backup DB as test in my current system
Copied the odoo folder in a folder on my system
Checked if everything works. It works!
Updated to the latest v8.0 version, still works
Did a git checkout 9.0 followed by a git pull.
Started odoo 9.0 with the command ./openerp-server -d testDB -u all
This command breaks with the following error and does not update my database:
LINE 1: select model, transient from ir_model where state='manual'
^
, in query select model, transient from ir_model where state=%s
2015-10-26 00:37:29,823 4501 CRITICAL testDB openerp.service.server:
Failed to initialize database `testDB`.
Traceback (most recent call last):
File "/opt/odoo/openerp/service/server.py", line 885, in preload_registries
registry = RegistryManager.new(dbname, update_module=update_module)
File "/opt/odoo/openerp/modules/registry.py", line 385, in new
openerp.modules.load_modules(registry._db, force_demo, status, update_module)
File "/opt/odoo/openerp/modules/loading.py", line 279, in load_modules
loaded_modules, processed_modules = load_module_graph(cr, graph, status, perform_checks=update_module, report=report)
File "/opt/odoo/openerp/modules/loading.py", line 136, in load_module_graph
registry.setup_models(cr, partial=True)
File "/opt/odoo/openerp/modules/registry.py", line 185, in setup_models
cr.execute('select model, transient from ir_model where state=%s', ('manual',))
File "/opt/odoo/openerp/sql_db.py", line 139, in wrapper
return f(self, *args, **kwargs)
File "/opt/odoo/openerp/sql_db.py", line 215, in execute
res = self._obj.execute(query, params)
ProgrammingError: column "transient" does not exist
LINE 1: select model, transient from ir_model where state='manual'
Are there any steps I have to follow to upgrade the database, or does everything have to be done by hand? If so, what should I do? Obviously it failed because that specific column does not exist in my database. But is there an update script? I fear that if I change this by hand, the next error will be waiting for me.
Thanks in advance.
You can ask the Odoo company to do that task for you by going to this link, but they will charge money for it. If you want to do it yourself, here is the documentation on how to do that:
https://doc.therp.nl/openupgrade/intro.html
Option 2: Use pgAdmin (a PostgreSQL GUI tool). Select your database, open the SQL window, and issue a query to display all the data (you must know the name of the table you want to retrieve), then export the result. The exported file contains all the data with column headings; you may have to rearrange the columns to match the Odoo 9 schema. Once that is done, select the Odoo 9 database, right-click the table you want to import the data into, and choose the import option. It may take a while, and it should finish with the message "data imported successfully".
I found the answer on GitHub.
The trick is to create a field called transient, a Boolean with the default value false, in the table ir_model.
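A minimal sketch of that fix, run directly against PostgreSQL (psycopg2 and the connection parameters are assumptions on my part; "testDB" is the database name from the question):

import psycopg2

# Add the column that the Odoo 9.0 code base expects to the 8.0 database.
conn = psycopg2.connect(dbname="testDB")
cur = conn.cursor()
cur.execute("ALTER TABLE ir_model ADD COLUMN transient boolean DEFAULT false")
conn.commit()
conn.close()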
As I expected, this is not the complete solution, as there are other problems with the database that need adjustments.
You are trying to run an Odoo 8.0 database on Odoo 9.0.
The column 'transient' is in the 9.0 code base but not in the 8.0 code base, so an 8.0 database is being run on the 9.0 code base; the database has not been upgraded properly.
As stated in the previous answer, you can either get Odoo to do it or do it yourself.
I'm trying to use Twisted to handle data generated by a binary (which indefinitely dumps lines onto stdout). Since my data is inherently line-delimited, I was trying to use the LineReceiver instead of parsing the data myself. The following is the relevant bit of the code, which seems to be causing trouble:
class ProtocolBareQDAL41xB(ProcessProtocol, LineReceiver):
    ...
    def outReceived(self, data):
        print "Got Data:" + repr(data)
        self.dataReceived(data)

    def lineReceived(self, line):
        print "Got Line: " + line
        self._process_line(line)
    ...
This 'works' for the first of two lines in the output. I don't know yet if it works for only one line, or if it works for all but the last line. The resulting output looks something like:
$ python BareQDAL41xB.py
Made Connection
<Process pid=16486 status=-1>
Got Data:'No device found!\nMultiple devices found! Please connect only one.\n'
Got Line: No device found!
Got Serial Number : found!
Unhandled Error
Traceback (most recent call last):
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/process.py", line 274, in doRead
return fdesc.readFromFD(self.fd, self.dataReceived)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/fdesc.py", line 94, in readFromFD
callback(output)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/process.py", line 277, in dataReceived
self.proc.childDataReceived(self.name, data)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/process.py", line 931, in childDataReceived
self.proto.childDataReceived(name, data)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/internet/protocol.py", line 604, in childDataReceived
self.outReceived(data)
File "BareQDAL41xB.py", line 104, in outReceived
self.dataReceived(data)
File "/media/ldata/code/virtualenvs/tendril/local/lib/python2.7/site-packages/twisted/protocols/basic.py", line 573, in dataReceived
self.transport.disconnecting):
exceptions.AttributeError: 'Process' object has no attribute 'disconnecting'
processExited, status 0
processEnded, status 0
LineReceiver seems to be expecting the transport to implement disconnecting.
Is it possible to use Twisted's LineReceiver with Twisted's ProcessProtocol, or should I implement the line parser in my protocol instead?
LineReceiver is already a Protocol, which implements different interfaces than IProcessProtocol.
Luckily, recent versions of Twisted already contain an adapter that does what you want - which is to treat a subprocess as a stream of bytes. Rather than calling spawnProcess directly, use ProcessEndpoint, and you can pass a regular ProtocolFactory, no ProcessProtocol involved.
However, as a commenter has already pointed out, there's a bug here, where the disconnecting attribute is not formally part of ITransport, but LineReceiver (and LineOnlyReceiver) depend on it anyway, and since it's not part of the interface, ProcessEndpoint doesn't implement it. That should definitely be fixed, but in the meanwhile, we'll need to work around it.
As a happy accident, Twisted's built-in support for wrapping protocols, WrappingFactory, already has support for the disconnecting attribute, specifically because of this ugly disparity between the theory of the interface specifications and the reality of the most popular ITransport implementations. So even a do-nothing wrapper will work around the problem. You can implement this like so:
from zope.interface import implementer
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.protocols.policies import WrappingFactory

@implementer(IStreamClientEndpoint)
class DisconnectingWorkaroundEndpoint(object):
    def __init__(self, endpoint):
        self._endpoint = endpoint

    def connect(self, protocolFactory):
        return self._endpoint.connect(WrappingFactory(protocolFactory))
and then when you construct your ProcessEndpoint, do:
endpoint = DisconnectingWorkaroundEndpoint(ProcessEndpoint(...))
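For completeness, a hypothetical end-to-end sketch (the binary path and the protocol are my assumptions, not part of the original answer):

from twisted.internet import reactor
from twisted.internet.endpoints import ProcessEndpoint
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class LinePrinter(LineReceiver):
    delimiter = '\n'  # the binary emits plain newlines, not \r\n

    def lineReceived(self, line):
        print "Got Line: " + line

# Wrap ProcessEndpoint so the transport grows the 'disconnecting' attribute.
endpoint = DisconnectingWorkaroundEndpoint(
    ProcessEndpoint(reactor, '/path/to/binary', args=['/path/to/binary']))
endpoint.connect(Factory.forProtocol(LinePrinter))
reactor.run()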
Sorry for the delay on answering; while you've probably worked out your own workaround, I hope this will be useful to others with the same question!
Migration from Plone 3.3.2 to Plone 4.2.1 fails with a POSKeyError. I've tried the recipes from this article: http://plonechix.blogspot.com/2009/12/definitive-guide-to-poskeyerror.html.
I've run the error_finder snippet, but it didn't give me any exceptions. I've also tried to fetch the object in the debugger using app.mysite._p_jar[p64(oid)], also with no success; it fails with the same error.
How can I delete the broken object or at least get more info about object (e.g. its class name or location)?
Full traceback:
POSKeyError('\x00\x00\x00\x00\x00\x0ey=',)
(Also, the following error occurred while attempting to render the standard error message, please see the event log for full details:
An operation previously failed, with traceback:
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZServer/PubCore/ZServerPublisher.py", line 31, in __init__
response=b)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 443, in publish_module
environ, debug, request, response)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 237, in publish_module_standard
response = publish(request, module_name, after_list, debug=debug)
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/ZPublisher/Publish.py", line 134, in publish
transactions_manager.commit()
File "/Users/makmak/Plone/buildout-cache/eggs/Zope2-2.13.16-py2.7.egg/Zope2/App/startup.py", line 301, in commit
transaction.commit()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_manager.py", line 89, in commit
return self.get().commit()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 336, in commit
t, v, tb = self._saveAndGetCommitishError()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 329, in commit
self._commitResources()
File "/Users/makmak/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 443, in _commitResources
rm.commit(self)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/Connection.py", line 572, in commit
oid, serial, transaction)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/BaseStorage.py", line 416, in checkCurrentSerialInTransaction
committed_tid = self.getTid(oid)
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/FileStorage/FileStorage.py", line 770, in getTid
with self._lock:
File "/Users/makmak/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-macosx-10.4-x86_64.egg/ZODB/FileStorage/FileStorage.py", line 403, in _lookup_pos
raise POSKeyError(oid)
POSKeyError: 0x0e793d
You can use fsrefs.py to find the bad object.
A very short article on using it is: http://nathanvangheem.com/news/fixing-broken-zodb-object-references
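A typical invocation looks like this (the Data.fs path is an assumption; adjust it to your buildout; fsrefs.py ships with ZODB in ZODB/scripts/):

python fsrefs.py /Users/makmak/Plone/var/filestorage/Data.fs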
I believe this is the same issue I just ran into, which happens if a savepoint that included adding an object to the catalog is rolled back. I think this is a bug in the ZODB, but you can work around it by addressing whatever is rolling back a savepoint; in this case, that's the migration of files and images to blobs. So if you fix whatever is keeping those files or images from successfully migrating to blobs (or just delete them), it should succeed.