OpenERP 7 migration to Odoo 11 - migration

I am trying to migrate a database from OpenERP 7 to Odoo 11.
I have done this before, but this database has custom fields and models that were created through the UI rather than in custom modules, and that is now causing a problem.
cona#cona-cons:~$ tailf /var/tmp/openupgrade/migration.log
    result = method(recs, *args, **kwargs)
  File "/var/tmp/openupgrade/8.0/server/openerp/models.py", line 3052, in _setup_fields
    field.setup(self.env)
  File "/var/tmp/openupgrade/8.0/server/openerp/fields.py", line 470, in setup
    self._setup_regular(env)
  File "/var/tmp/openupgrade/8.0/server/openerp/fields.py", line 1792, in _setup_regular
    invf = comodel._fields[self.inverse_name]
KeyError: u'x_payment_line_id'
Is there any way to remove the custom models and fields (as they are not needed) and get a clean DB for migration?
Thanks
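One possible cleanup approach, offered only as a hedged sketch: UI-created fields and models are stored in ir.model.fields and ir.model with state set to 'manual', so on a copy of the database they can be looked up and removed before running the migration. How you obtain the env object (for example from an Odoo shell against the 8.0 OpenUpgrade server) is assumed here:

# Hedged sketch: run against a COPY of the database; `env` is assumed to be
# available (e.g. from an Odoo shell or a small server-side script).
manual_fields = env['ir.model.fields'].search([('state', '=', 'manual')])
print(manual_fields.mapped('name'))     # e.g. x_payment_line_id
manual_fields.unlink()                  # removes the UI-created field definitions

manual_models = env['ir.model'].search([('state', '=', 'manual')])
print(manual_models.mapped('model'))    # UI-created x_* models
manual_models.unlink()

env.cr.commit()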


Odoo 13 TypeError: Cannot create a consistent method resolution (adding Chatter to an existing model)

I have run into the problem below, can anybody help me? Thanks.
TypeError: Cannot create a consistent method resolution
from odoo import models

class StockQuant(models.Model):
    _name = 'stock.quant'
    _inherit = ['stock.quant', 'mail.thread']
__manifest__.py
    'category': 'Report',
    'version': '0.1',
    # any module necessary for this one to work correctly
    'depends': ['base', 'sale', 'web', 'mail', 'product', 'stock'],
Error
File "odoo13\source\odoo\modules\registry.py", line 221, in load
model = cls._build_model(self, cr)
File "odoo13\source\odoo\models.py", line 504, in _build_model
ModelClass.__bases__ = tuple(bases)
TypeError: Cannot create a consistent method resolution
order (MRO) for bases Model, mail.thread, stock.quant, base
You have redundant dependencies here: sale depends on sales_team, which already depends on base and mail, and stock already depends on product.
You need to remove base, mail, and product from your depends list.
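A hedged sketch of what the trimmed manifest might look like; the module name and other keys below are illustrative, only the shortened depends list matters:

# __manifest__.py - hedged sketch; 'sale' and 'stock' already pull in
# base, mail and product transitively, so they are dropped from 'depends'.
{
    'name': 'Stock Quant Chatter',      # hypothetical module name
    'category': 'Report',
    'version': '0.1',
    'depends': ['sale', 'web', 'stock'],
}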

How do you specify Project ID in the AWS Glue to BigQuery connector?

I'm trying to use the AWS Glue connector to BigQuery, following the tutorial at https://aws.amazon.com/blogs/big-data/migrating-data-from-google-bigquery-to-amazon-s3-using-aws-glue-custom-connectors/, but after following all the steps I get:
: java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
The Python exception shows:
Traceback (most recent call last):
  File "/tmp/ETHBlockchainExport.py", line 20, in <module>
    DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "marketplace.spark", connection_options =
        {
            "parentProject": "MYGOOGLE_PROJECT_ID",
            "connectionName": "BigQuery",
            "table": "MYPROJECT.DATASET.TABLE"
        }
So everything seems to be provided, but it still complains about the project ID. How can I provide that info to the connector?
You specify it in the data source connector options when you create the Glue job, as key-value pairs.
From your log it seems that you included the project ID in the table field as well; it should be dataset.table.
Another possibility is that you didn't specify the values of the project ID, table, etc. in the environment variables (this seems more likely based on the error shown).
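A hedged sketch of how those options might be passed, assuming the marketplace connector forwards them to the underlying Spark BigQuery connector; the project, dataset and table values below are placeholders:

from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)

# Hedged sketch: 'parentProject' is the project billed for the BigQuery job,
# 'project' the project that owns the dataset, and 'table' contains only
# dataset.table (no project prefix). All values are placeholders.
DataSource0 = glueContext.create_dynamic_frame.from_options(
    connection_type="marketplace.spark",
    connection_options={
        "parentProject": "MYGOOGLE_PROJECT_ID",
        "project": "MYGOOGLE_PROJECT_ID",
        "connectionName": "BigQuery",
        "table": "DATASET.TABLE",
    },
    transformation_ctx="DataSource0",
)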

Programmatically rename file in ocrmypdf

I'm a new programmer and I'm making a first attempt at a larger data science project. To do this I have made a class that is supposed to open PDFs with ocrmypdf, and then a while statement that walks through all the documents in a folder.
import ocrmypdf

class DocumentReader:
    # This class is used to open and read a
    # document using OCR and then
    # create the document in its place
    def __init__(self, file):
        self.file = file

    def convert(self):
        ocrmypdf.ocr(self.file, new_doc, deskew=True)
and here is the while statement:
count = 0
while count < final:
    for file in os.listdir('PayStubs'):
        if file.endswith(".pdf"):
            index = str(file).find('.pdf')
            new_doc = file[:index] + '_new' + file[index:]
            d1 = DocumentReader(file)
            d1.convert()
I can make each piece work if I run it individually, but it is handling the '.pdf' extension when I try to run them programmatically that is tripping me up. Does anyone know how to create a new file name programmatically for the second argument of the ocrmypdf command?
I have tried several different ways of making this work but I keep getting errors. The most common errors that my attempts have yielded are:
InputFileError: File not found - 20070928ch6495.pdf.pdf
and
IsADirectoryError: [Errno 21] Is a directory: '_new/'
I'm at the point where I'm running in circles. Any help would be greatly appreciated. Thanks!
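A hedged sketch of one way to build that second argument, assuming the PDFs live in a PayStubs folder and that ocrmypdf.ocr(input, output, ...) is called as in the question; note that os.listdir() returns bare file names, so full paths are used below:

# Hedged sketch: derive the output name with pathlib and hand both full paths
# to ocrmypdf.ocr(). 'PayStubs' is the folder assumed from the question.
from pathlib import Path

import ocrmypdf

folder = Path('PayStubs')
for pdf in folder.glob('*.pdf'):
    new_doc = pdf.with_name(pdf.stem + '_new' + pdf.suffix)   # e.g. foo_new.pdf
    ocrmypdf.ocr(pdf, new_doc, deskew=True)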

NotImplementedError: operator not implemented in Python 3 FDB module

I'm currently rewriting my simple program that needs to work with Firebird databases.
Sadly, not all .fdb files are from the same version; some are even from versions below 2.0.
In my database wrapper, when executing the connection, I get this error:
File "C:\Python34\lib\site-packages\fdb\fbcore.py", line 715, in connect
dsn = b(dsn,_FS_ENCODING)
File "C:\Python34\lib\site-packages\fdb\ibase.py", line 47, in b
if st == None:
NotImplementedError: operator not implemented.
The method that throws this exception looks like this:
def run(self):
    self.conn = fdb.connect(dsn=self.kwargs["db_path"],
                            user=self.kwargs["username"],
                            password='masterkey',
                            role='RDB$ADMIN')
    self.cur = self.conn.cursor()
I'm using the RDB$ADMIN role so I can add a few extra accounts, so that each thread has its own account to work with.
The previous iteration of this program used only one account and it worked perfectly.
This is my first time seeing a NotImplementedError, and honestly I have no idea how to make it work.
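As a purely diagnostic, hedged sketch (not a fix): the b() helper in the traceback appears to expect a plain string or None, so printing the types of the values taken from self.kwargs right before the connect call may narrow down what is being passed in. The helper function and its name below are illustrative only:

import fdb

def debug_connect(kwargs):
    # Hedged diagnostic: show exactly what is being handed to fdb.connect().
    for key in ("db_path", "username"):
        print(key, type(kwargs.get(key)), repr(kwargs.get(key)))
    return fdb.connect(dsn=str(kwargs["db_path"]),
                       user=str(kwargs["username"]),
                       password='masterkey',
                       role='RDB$ADMIN')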

scrapyd multiple spiders writing items to same file

I have a scrapyd server with several spiders running at the same time; I start the spiders one by one using the schedule.json endpoint. All spiders write their content to a common file using a pipeline:
import json
import os

class JsonWriterPipeline(object):

    def __init__(self, json_filename):
        # self.json_filepath = json_filepath
        self.json_filename = json_filename
        self.file = open(self.json_filename, 'wb')

    @classmethod
    def from_crawler(cls, crawler):
        save_path = '/tmp/'
        json_filename = crawler.settings.get('json_filename', 'FM_raw_export.json')
        completeName = os.path.join(save_path, json_filename)
        return cls(completeName)

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
After the spiders are running I can see that they are collecting data correctly: items are stored in files XXXX.jl and the spiders work correctly. However, the crawled contents are not reflected in the common file. The spiders seem to work well, but the pipeline is not doing its job and is not collecting data into the common file.
I also noticed that only one spider writes to the file at a time.
I don't see any good reason to do what you do :) You can change the json_filename setting by setting arguments on your scrapyd schedule.json request. Then you can make each spider generate slightly different files that you merge with post-processing or at query time. You can also write JSON files similar to what you have by just setting the FEED_URI value (example). If you write to a single file simultaneously from multiple processes (especially when you open with 'wb' mode) you're asking for corrupt data.
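For example, a hedged sketch of such a schedule.json call, assuming a scrapyd instance on localhost; the project, spider and file names are placeholders:

import requests

# Hedged sketch: schedule.json accepts 'setting' key-value pairs, so each job
# can get its own output file by overriding the setting the pipeline reads
# (or FEED_URI, if the default feed exporter is used instead).
resp = requests.post(
    "http://localhost:6800/schedule.json",
    data={
        "project": "myproject",                       # placeholder project name
        "spider": "spider_one",                       # placeholder spider name
        "setting": "json_filename=spider_one.json",   # per-job override
    },
)
print(resp.json())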
Edit:
After understanding a bit better what you need: in this case scrapyd is starting multiple crawls running different spiders, where each one crawls a different website, and the consumer process is monitoring a single file continuously.
There are several solutions, including:
named pipes
Relatively easy to implement and OK for very small Items only (see here)
RabbitMQ or some other queueing mechanism
Great solution, but might be a bit of overkill
A database, e.g. an SQLite-based solution
Nice and simple, but likely requires some coding (a custom consumer)
A nice inotifywait-based or other filesystem monitoring solution
Nice and likely easy to implement
The last one seems like the most attractive option to me. When the scrapy crawl finishes (spider_closed signal), move, copy or create a soft link for the FEED_URI file to a directory that you monitor with a script like this. mv or ln is an atomic Unix operation, so you should be fine. Adapt the script to append the new file to the tmp file that you feed once to your consumer program.
By using this approach, you use the default feed exporters to write your files. The end solution is so simple that you don't need a pipeline; a simple Extension should fit the bill.
In an extensions.py in the same directory as settings.py:
from scrapy import signals
from scrapy.exceptions import NotConfigured

class MoveFileOnCloseExtension(object):

    def __init__(self, feed_uri):
        self.feed_uri = feed_uri

    @classmethod
    def from_crawler(cls, crawler):
        # instantiate the extension object
        feed_uri = crawler.settings.get('FEED_URI')
        ext = cls(feed_uri)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        # return the extension object
        return ext

    def spider_closed(self, spider):
        # Move the file to the proper location
        # os.rename(self.feed_uri, ... destination path...)
        pass
On your settings.py:
EXTENSIONS = {
    'myproject.extensions.MoveFileOnCloseExtension': 500,
}
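And, as a hedged sketch of the consumer side described above, a dependency-free polling watcher (instead of inotifywait) could append each finished feed file onto the single file the consumer reads; all paths below are placeholders:

import os
import shutil
import time

WATCH_DIR = '/tmp/finished_feeds'      # where spider_closed moves the files (assumed to exist)
COMBINED = '/tmp/FM_raw_export.json'   # single file the consumer reads
seen = set()

while True:
    for name in sorted(os.listdir(WATCH_DIR)):
        path = os.path.join(WATCH_DIR, name)
        if name in seen or not os.path.isfile(path):
            continue
        with open(path, 'rb') as src, open(COMBINED, 'ab') as dst:
            shutil.copyfileobj(src, dst)   # append the whole finished feed
        seen.add(name)
    time.sleep(5)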