When to use @api.one and @api.multi in Odoo / OpenERP?

Recently Odoo (formerly OpenERP) V8 was released. In the new API, method decorators were introduced: methods in models.py need to be decorated with @api.one or @api.multi.
Referring to the Odoo documentation, I cannot determine their exact use. Can anybody explain in detail?
Thanks.

Generally, both decorators are used to decorate a record-style method where 'self' contains a recordset. Let me explain briefly when to use @api.one and @api.multi:
1. @api.one:
Decorate a record-style method where 'self' is expected to be a singleton instance.
The decorated method automatically loops over the records (i.e. for each record in the recordset it calls the method) and makes a list of the results.
In case the method is also decorated with @returns, it concatenates the resulting instances. Such a method:
@api.one
def method(self, args):
    return self.name
may be called in both record and traditional styles, like:
# recs = model.browse(cr, uid, ids, context)
names = recs.method(args)
names = model.method(cr, uid, ids, args, context=context)
Each time, 'self' is redefined as the current record.
2. @api.multi:
Decorate a record-style method where 'self' is a recordset. The method typically defines an operation on records. Such a method:
@api.multi
def method(self, args):
    ...
may be called in both record and traditional styles, like:
# recs = model.browse(cr, uid, ids, context)
recs.method(args)
model.method(cr, uid, ids, args, context=context)
When to use:
If you are using @api.one, the returned value is wrapped in a list.
This is not always supported by the web client, e.g. on button action methods.
In that case, you should use @api.multi to decorate your method, and probably call self.ensure_one() in the method definition.
It is always better to use @api.multi with self.ensure_one() instead of @api.one, to avoid the side effect on return values.
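To illustrate that advice, here is a minimal sketch of a button handler following the @api.multi + ensure_one() pattern (the method name and the value written to the note field are made up for the example):
from openerp import api, models

class SaleOrderButton(models.Model):
    _inherit = 'sale.order'

    @api.multi
    def action_mark_note(self):
        # Button handlers are best decorated with @api.multi; ensure_one()
        # guards against being called on more than one record at a time.
        self.ensure_one()
        self.write({'note': 'Marked from a button'})
        # Return a plain value (not a list), which the web client handles fine.
        return True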

@api.one:
This decorator loops automatically over the records of a recordset for you. 'self' is redefined as the current record:
@api.one
def func(self):
    self.name = 'xyz'
@api.multi:
'self' will be the current recordset, without iteration. It is the default behaviour:
@api.multi
def func(self):
    len(self)
For a detailed description of the whole API you can refer to this link.
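To see the difference in what gets returned, here is a small sketch (the model, field and method names are made up):
from openerp import api, fields, models

class DemoTag(models.Model):
    _name = 'demo.tag'

    name = fields.Char()

    @api.one
    def get_name_one(self):
        # Called once per record; self is a single record here.
        return self.name

    @api.multi
    def get_names_multi(self):
        # Called once for the whole recordset; iterate explicitly.
        return [rec.name for rec in self]

# Assuming recs holds two records named 'a' and 'b':
# recs.get_name_one()    -> ['a', 'b']  (each result wrapped into a list)
# recs.get_names_multi() -> ['a', 'b']  (whatever the method itself returns)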

@api.model  # When the record data / self is not relevant. Sometimes also used with old API calls.
def model_text(self):
    return "this text does not rely on self"

@api.multi  # Normally followed by a loop on self, because self may contain multiple records.
def set_field(self):
    for r in self:
        r.abc = r.a + r.b

@api.one  # The API will do a loop and call the method for each record. Not preferred, because of potential problems with returns to web clients.
def set_field(self):
    self.abc = self.a + self.b

Related

In Odoo, how do I override _file_write in the ir_attachment class?

In the Odoo 9 source code the class ir_attachment has the following comment:
The 'data' function field (_data_get,data_set) is implemented using
_file_read, _file_write and _file_delete which can be overridden to
implement other storage engines, such methods should check for other
location pseudo uri (example: hdfs://hadoppserver)
It tells me I can override the read, write and delete methods, but I have not been able to find any documentation on how to do so.
I tried overriding like I would in other Odoo modules, by creating a module with this code:
class Attachments(osv.osv):
    _inherit = 'ir.attachment'

    def _file_read(self, cr, uid, fname, bin_size=False):
        r = super(Attachments, self)._file_read(cr, uid, fname, bin_size)
        return r

    def _file_write(self, cr, uid, value, checksum):
        fname = super(Attachments, self)._file_write(cr, uid, value, checksum)
        return fname
However, I set several breakpoints and it appears Odoo is not registering these function overrides. Is there a different way to override these methods at runtime?
See this github project for a complete and working example: https://github.com/tvanesse/odoo-s3
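For reference, the pattern the source comment describes looks roughly like this (a sketch only, using the same old-API signatures as above; _read_from_hdfs is a hypothetical helper, not part of Odoo):
from openerp.osv import osv

class Attachments(osv.osv):
    _inherit = 'ir.attachment'

    def _file_read(self, cr, uid, fname, bin_size=False):
        if fname.startswith('hdfs://'):
            # Hypothetical helper that fetches the payload from the
            # external store instead of the local filestore.
            return self._read_from_hdfs(cr, uid, fname, bin_size)
        return super(Attachments, self)._file_read(cr, uid, fname, bin_size)

    def _file_write(self, cr, uid, value, checksum):
        # Push `value` to the external store here and return the pseudo URI
        # that _file_read will later recognise; otherwise fall back to the
        # default filestore behaviour.
        return super(Attachments, self)._file_write(cr, uid, value, checksum)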

Odoo 8 method resolution order

I have problems understanding inheritance in Odoo.
Consider the following code in module 1:
class pos_order(models.Model):
    _inherit = 'pos.order'

    def create_from_ui(self, cr, uid, orders, context=None):
        super(models.Model, self).create_from_ui(cr, uid, orders, context=context)
        print "1"
and the same in module 2, only it prints "2". Module 1 is installed first, then module 2. As you can see, in both modules pos_order is extended with a custom create_from_ui function. If create_from_ui is called now, module 2's override is called, which in turn calls module 1's, which in turn calls the original. How could I call only the original now (let's say I don't want "1" printed under certain circumstances)?
Cheers and big thanks for all the help
Odoo sets up the hierarchy, but then the normal Python rules apply.
If you want to call the original method from module 2, you can import that specific class from the original module, being careful to pass self to it, as you're calling the method from the class, not an instance:
from openerp.addons.point_of_sale.point_of_sale import pos_order as original_pos_order

class pos_order(models.Model):
    _inherit = 'pos.order'

    def create_from_ui(self, cr, uid, orders, context=None):
        original_pos_order.create_from_ui(self, cr, uid, orders, context=context)
        print "1"

Where should I write a new Python method: in the existing .py files, or do I need to create a new one?

I am going to write a method in Point of Sale, which already contains existing .py files. Should I create a new Python file, or write the new method in the existing .py files?
If you need to add a new method to a particular model (e.g. sale.order), then inherit that particular model and add your method in a separate module, i.e. a custom module.
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    @api.multi
    def custom_test_method(self, ...):
Note:
This is so that you can migrate to a new version or update your code from GitHub cleanly. Generally, any modification to a model should be done in a custom module only.
Never change the code in the base modules or in modules not written by you, because when you update to the latest code to get new functionality, or migrate to another version, there is a good chance of code loss, which results in weird behaviour.
Use a custom module for a new method or for overriding an existing method.
E.g., to add a new method to the pos module's model "pos.order":
class pos_order(orm.Model):
    _inherit = "pos.order"

    def your_new_method(self, cr, uid, ids, args, context=None):
        ## your code
        return
For an existing method:
class pos_order(orm.Model):
    _inherit = "pos.order"

    def your_existing_method(self, cr, uid, ids, args, context=None):
        res = super(pos_order, self).your_existing_method(cr, uid, ids, args, context=context)
        ## your code to change the existing method result
        return res
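In the new (v8) record-style API, the same kind of override would look roughly like this (a sketch; the argument list depends on the method you are overriding):
from openerp import api, models

class PosOrder(models.Model):
    _inherit = "pos.order"

    @api.multi
    def your_existing_method(self, args):
        # Call the next implementation in the chain, then adjust its result.
        res = super(PosOrder, self).your_existing_method(args)
        ## your code to change the existing method result
        return res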

Scrapy pipeline architecture - need to return variables

I need some advice on how to proceed with my item pipeline. I need to POST an item to an API (working well), get the ID of the created entity from the response object (have this working too), and then use it to populate another entity. Ideally, the item pipeline could return that entity ID. Basically, I am in a situation where I have a one-to-many relationship that I need to encode in a NoSQL database. What would be the best way to proceed?
The best way for you to proceed is to use MongoDB, a NoSQL database which works well with Scrapy. The pipeline for MongoDB can be found here, and the process is explained in this tutorial.
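If you go that route, a bare-bones MongoDB pipeline with pymongo looks roughly like this (the connection settings and the parent_id field are assumptions; adapt them to your project):
import pymongo

class MongoDBPipeline(object):
    def __init__(self):
        # Assumed connection settings; normally these come from settings.py.
        client = pymongo.MongoClient('mongodb://localhost:27017')
        self.collection = client['scrapy_db']['items']

    def process_item(self, item, spider):
        # Store the scraped item and keep the generated ObjectId around,
        # e.g. to link child documents to it later (assumes the item
        # declares a parent_id field).
        result = self.collection.insert_one(dict(item))
        item['parent_id'] = str(result.inserted_id)
        return item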
Now, as explained in the solution from Pablo Hoffman, updating different items from different pipelines into one can be achieved with the following decorator on the process_item method of a pipeline object, so that it checks the pipeline attribute of your spider to decide whether or not it should be executed. (I have not tested the code, but I hope it helps.)
def check_spider_pipeline(process_item_method):
    @functools.wraps(process_item_method)
    def wrapper(self, item, spider):
        # message template for debugging
        msg = '%%s %s pipeline step' % (self.__class__.__name__,)
        # if class is in the spider's pipeline, then use the
        # process_item method normally.
        if self.__class__ in spider.pipeline:
            spider.log(msg % 'executing', level=log.DEBUG)
            return process_item_method(self, item, spider)
        # otherwise, just return the untouched item (skip this step in
        # the pipeline)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return item
    return wrapper
And the decorator is used something like this:
class MySpider(BaseSpider):
    pipeline = set([
        pipelines.Save,
        pipelines.Validate,
    ])

    def parse(self, response):
        # insert scrapy goodness here
        return item

class Save(BasePipeline):
    @check_spider_pipeline
    def process_item(self, item, spider):
        # more scrapy goodness here
        return item
Finally, you can also take help from this question.
Perhaps I don't understand your question, but it sounds like you just need to call your submission code in the close_spider(self, spider) method. Have you tried that?
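For example, a sketch of that idea (the API_URL, the endpoints and the response format are assumptions, not part of Scrapy):
import requests

API_URL = 'http://localhost:8000/api'  # placeholder endpoint

class RelatedEntityPipeline(object):
    def open_spider(self, spider):
        self.created = []

    def process_item(self, item, spider):
        # POST the parent entity; assume the API returns the new ID as 'id'.
        response = requests.post(API_URL + '/parents', json=dict(item))
        parent_id = response.json()['id']
        self.created.append((parent_id, dict(item)))
        return item

    def close_spider(self, spider):
        # Once the crawl is done, create the dependent (child) entities.
        for parent_id, data in self.created:
            requests.post(API_URL + '/children',
                          json={'parent_id': parent_id, 'data': data})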

Scrapy: Default values for items & fields. What is the best implementation?

As far as I could find out from the documentation and various discussions on the net, the ability to add default values to fields in a scrapy item has been removed.
This doesn't work
category = Field(default='null')
So my question is: what is a good way to initialize fields with a default value?
I already tried to implement it as an item pipeline as suggested here, without any success.
https://groups.google.com/forum/?fromgroups=#!topic/scrapy-users/-v1p5W41VDQ
I figured out what the problem was. The pipeline is working (code follows for other people's reference). My problem was that I am appending values to a field, and I wanted the default to apply to one of those list values... I chose a different approach and it works. I am now implementing it with a custom setDefault processor method.
class DefaultItemPipeline(object):
    def process_item(self, item, spider):
        item.setdefault('amz_VendorsShippingDurationFrom', 'default')
        item.setdefault('amz_VendorsShippingDurationTo', 'default')
        # ...
        return item
Typically, a constructor is used to initialize fields.
class SomeItem(scrapy.Item):
    id = scrapy.Field()
    category = scrapy.Field()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self['category'] = 'null'  # set default value
This may not be a clean solution, but it avoids unnecessary pipelines.
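Used like this, the default is in place unless you overwrite it afterwards (a quick sketch):
item = SomeItem(id=1)
print(item['category'])    # -> 'null', the default set in __init__

item['category'] = 'books'
print(item['category'])    # -> 'books'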