What is self.pool.get() in Python and Odoo?

I assume it is used to refer to fields in other modules in OpenERP, but I am not sure.
In the snippet below, for example, they are somehow getting a price from the products module into the sales module.
price = self.pool.get('product.pricelist').price_get(cr, uid, [pricelist], product, qty or 1.0, partner_id, ctx)[pricelist]
if price is False:
    warn_msg = _("Cannot find a pricelist line matching this product and quantity.\n"
                 "You have to change either the product, the quantity or the pricelist.")
    warning_msgs += _("No valid pricelist line found ! :") + warn_msg + "\n\n"
else:
    result.update({'price_unit': price})
    if context.get('uom_qty_change', False):
        return {'value': {'price_unit': price}, 'domain': {}, 'warning': False}

self.pool.get()
The pool is here just a dictionary-like object that is used to store instances of the OpenERP models, like res.users or ir.model.data.
Multi-database nature of OpenERP / Odoo: a single OpenERP server process can manage multiple databases, and for each database the set of installed modules, and thus the set of models, can vary.
So we have different pools, one per database; self.pool is simply the pool of the database that the model instance we are already in (self) was loaded from, and self.pool.get('model.name') returns the instance of that model registered in the same pool.
Thanks to The Hypered Blog for the great explanation of self.pool.get().
For more info, check that link.
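To make that concrete, here is a minimal sketch (not from the original post) of how the lookup is typically used inside an old-API (OpenERP 7.0) model method; the model my.module, its field and the method name are hypothetical, only the self.pool.get(...) pattern itself comes from the snippet above.
from openerp.osv import osv, fields

class my_module(osv.osv):
    _name = 'my.module'
    _columns = {
        'partner_id': fields.many2one('res.partner', 'Partner'),
    }

    def get_partner_emails(self, cr, uid, ids, context=None):
        # Look up another model in the same per-database registry.
        partner_obj = self.pool.get('res.partner')
        emails = []
        for rec in self.browse(cr, uid, ids, context=context):
            if rec.partner_id:
                # Call ORM methods on the instance returned by the pool.
                data = partner_obj.read(cr, uid, [rec.partner_id.id],
                                        ['email'], context=context)
                emails.append(data[0]['email'])
        return emails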

Pool
pool is a registry (a dictionary-like object).
The registry is essentially a mapping between model names and model
instances. There is one registry instance per database.
server/openerp/osv/orm.py
class BaseModel(object):

    def __init__(self, pool, cr):
        """ Initialize a model and make it part of the given registry.

        - copy the stored fields' functions in the osv_pool,
        - update the _columns with the fields found in ir_model_fields,
        - ensure there is a many2one for each _inherits'd parent,
        - update the children's _columns,
        - give a chance to each field to initialize itself.
        """
        pool.add(self._name, self)
        self.pool = pool
Refer to /openerp/sql_db.py to see how the Odoo-PostgreSQL connection pool is established.
_Pool = None

def db_connect(to, allow_uri=False):
    global _Pool
    if _Pool is None:
        _Pool = ConnectionPool(int(tools.config['db_maxconn']))
    db, uri = dsn(to)
    if not allow_uri and db != to:
        raise ValueError('URI connections not allowed')
    return Connection(_Pool, db, uri)

def close_db(db_name):
    """ You might want to call openerp.modules.registry.RegistryManager.delete(db_name) along this function."""
    global _Pool
    if _Pool:
        _Pool.close_all(dsn(db_name)[1])

def close_all():
    global _Pool
    if _Pool:
        _Pool.close_all()
Connection class
class Connection(object):
    """ A lightweight instance of a connection to postgres
    """

    def __init__(self, pool, dbname, dsn):
        self.dbname = dbname
        self.dsn = dsn
        self.__pool = pool
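To see the connection pool in action, here is a small sketch (mine, not from the source files) of how server-side code borrows a cursor through db_connect; the database name 'mydb' and the query are made up.
import openerp

# db_connect() hands back a Connection backed by the shared ConnectionPool.
connection = openerp.sql_db.db_connect('mydb')
cr = connection.cursor()          # borrow a cursor wrapper from the pool
try:
    cr.execute("SELECT id, login FROM res_users")
    print(cr.fetchall())
finally:
    cr.close()                    # give the underlying connection back to the pool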
If you look at the /server/openerp/pooler.py file, you can find the method get_db_and_pool, which is used to create and return a database connection together with a newly initialized registry; there are several other pool-related methods in that file as well.
def get_db_and_pool(db_name, force_demo=False, status=None, update_module=False):
    """Create and return a database connection and a newly initialized registry."""
    registry = RegistryManager.get(db_name, force_demo, status, update_module)
    return registry.db, registry

def restart_pool(db_name, force_demo=False, status=None, update_module=False):
    """Delete an existing registry and return a database connection and a newly initialized registry."""
    registry = RegistryManager.new(db_name, force_demo, status, update_module)
    return registry.db, registry

def get_db(db_name):
    """Return a database connection. The corresponding registry is initialized."""
    return get_db_and_pool(db_name)[0]

def get_pool(db_name, force_demo=False, status=None, update_module=False):
    """Return a model registry."""
    return get_db_and_pool(db_name, force_demo, status, update_module)[1]
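A short sketch (again mine, not from pooler.py) of how pre-8.0 code outside a model, e.g. a controller or a standalone script, typically used these helpers to reach a registry; the function and its domain are hypothetical, only pooler.get_pool and pool.get come from the code above.
from openerp import pooler

def count_active_users(cr, uid):
    # The cursor knows its database name, so we can fetch the matching registry.
    pool = pooler.get_pool(cr.dbname)      # same object a model sees as self.pool
    users_obj = pool.get('res.users')      # same lookup as self.pool.get('res.users')
    return users_obj.search(cr, uid, [('active', '=', True)], count=True)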
And finally you call the dictionary's get method to fetch the value for the specified key; you can even use self.pool['model_name'], and in general any dictionary method can be used with the pool.

self.pool.get() is used to get the singleton instance of the ORM model from the registry pool.
If you want to call ORM methods on another model, you can use self.pool.get('model.name').orm_method(...).
And in the new API, if you want to call an ORM method on another model directly from an object, you use self.env['model.name'].method() instead of going through the pool.
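As a rough illustration of that last point, here is a sketch of the same kind of lookup written against the new (8.0) API; the inheriting class and its method are hypothetical.
from openerp import models, api

class SaleOrderLine(models.Model):
    _inherit = 'sale.order.line'

    @api.multi
    def find_saleable_products(self):
        # New-API equivalent of:
        #   self.pool.get('product.product').search(cr, uid, [('sale_ok', '=', True)])
        return self.env['product.product'].search([('sale_ok', '=', True)])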
Hope this helps.

Related

APScheduler and Flask sharing the same object (singleton) issue

Is it possible to share some application data between scheduled jobs? To be more specific: I have one singleton at the application level that is updated when someone POSTs some data to the Flask endpoint. My plan is to create a job that checks that application singleton at intervals and performs some operations on it. The problem is that the singleton object is always empty (within the Flask context it is always a valid object); somehow it does not hold a reference to the object defined in the Flask application (running with gunicorn and multiple workers).
"""SINGLETON OBJECT DEFINED ON APPLICATION LEVEL"""
from typing import Any, Dict

class SingletonObject(metaclass=Singleton):  # `Singleton` metaclass defined elsewhere
    singleton_var: Dict[str, Dict[str, Any]] = None

    def __init__(self):
        if not SingletonObject.singleton_var:
            SingletonObject.singleton_var = dict()

"""FUNCTION CALLED WITHIN FLASK APP"""
def update_value_on_singleton(key: Any, value: Any):
    SingletonObject.singleton_var[key] = value

"""FUNCTION THAT CHECKS SINGLETON AND IS CALLED WITHIN APSCHEDULER JOB"""
def check_singleton_object():
    """SingletonObject -- always a new object, not the one from the Flask context"""
    with app.app_context():  # `app` is the Flask application object
        for key, value in SingletonObject.singleton_var.items():
            print(key, value)

"""STANDARD BACKGROUND SCHEDULER"""
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(func=check_singleton_object, trigger='interval', seconds=5)
scheduler.start()
I tried using Flask's g (globals) and also scheduling the job from within the Flask application context manager, but without success. I would like to avoid using a database, a caching mechanism, etc.
It is not clear to me how APScheduler works, but I expected the singleton object to be the same in the Flask context and in the APScheduler context.
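One way to see what is going on (a debugging sketch, not from the original post) is to log the process ID in both places; with gunicorn running several worker processes, the request handler and the scheduler job may live in different processes, each holding its own copy of the module-level singleton:
import os

def update_value_on_singleton(key, value):
    # Runs in whichever gunicorn worker process handled the POST request.
    print("update running in pid", os.getpid())
    SingletonObject.singleton_var[key] = value

def check_singleton_object():
    # Runs in the process that started the BackgroundScheduler; if that is a
    # different process, it sees a different, still-empty SingletonObject.
    print("check running in pid", os.getpid())
    print(SingletonObject.singleton_var)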

Multiple mongoDB related to same django rest framework project

We have one Django REST framework (DRF) project which should have multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we go to another DB for writing, the connection is made, yet the data is stored in the DB that was connected first.
We changed the default DB and so on, but nothing changed.
(Note: the solution should work with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
While running connect() just assign an alias for each of your databases and then for each Document specify a db_alias parameter in meta that points to a specific database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
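With the aliases wired up like this, ordinary document operations should be routed to the database named in each model's meta; a quick usage sketch (the field values are made up):
# Saved through the 'user-db' connection.
User(name='alice').save()

# Saved through the 'book-db' connection.
Book(name='sicp').save()

# Queries are routed the same way.
print(User.objects.count(), Book.objects.count())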
I guess, I finally get what you need.
What you could do is write a really simple middleware that maps your url schema to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # list all the mongoengine Document subclasses in your project
        import models
        self.documents = [
            obj for obj in (getattr(models, name) for name in dir(models))
            if isinstance(obj, type) and issubclass(obj, Document) and obj is not Document
        ]

    def __call__(self, request):
        # depending on the URL, switch documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the response to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying the Document classes globally here, so if one request has started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting, and the first request will start writing to the wrong database, which will break your data horribly.
The same Python interpreter is shared by the two request handlers if you're using multithreading, so with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues, you can use threading.local(). If you prefer a context-manager approach, there's also the contextvars module.
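For the context-manager route specifically, mongoengine ships a switch_db helper that scopes the alias to a block instead of mutating class-level state that other threads can read; a minimal sketch, assuming an alias 'db2' has been registered via connect() and a hypothetical view function:
from mongoengine.context_managers import switch_db

def project2_user_list(request):
    # Bind User to the 'db2' alias only inside this block; other threads keep
    # using the alias declared in the model's meta.
    with switch_db(User, 'db2') as UserInDb2:
        return [u.name for u in UserInDb2.objects.all()]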

gRPC + Thread local issue

I'm building a gRPC server with Python and trying to have some thread-local storage handled with werkzeug's Local and LocalProxy, similar to what Flask does.
The problem I'm facing is that when I store some data in the local from a server interceptor and then try to retrieve it from the servicer, the local is empty. The real problem is that, for some reason, the interceptor runs in a different greenlet than the servicer, so it's impossible to share data across a request, since the werkzeug.local storage ends up with different keys for data that is supposed to belong to the same request.
The same happens using the Python threading library; it looks like the interceptors are run from the main thread or a different thread than the servicers. Is there a workaround for this? I would have expected interceptors to run in the same thread, thus allowing for this sort of thing.
# Define a global somewhere
from werkzeug.local import Local

local = Local()

# from an interceptor, save something
local.message = "test msg"

# from the servicer, access it
local.service_var = "test"
print(local.message)  # this raises an AttributeError

# print the content of local
print(local.__storage__)  # 2 entries in the storage, 2 different greenlets,
                          # but we are in the same request
The interceptor is indeed run on the serving thread, which is different from the handling thread. The serving thread is in charge of serving servicers and intercepting servicer handlers. After the servicer method handler is returned by the interceptors, the serving thread submits it to the thread_pool at _server.py#L525:
# Take unary-unary call as an example.
# The method_handler is the returned object from the interceptor.
def _handle_unary_unary(rpc_event, state, method_handler, thread_pool):
    unary_request = _unary_request(rpc_event, state,
                                   method_handler.request_deserializer)
    return thread_pool.submit(_unary_response_in_pool, rpc_event, state,
                              method_handler.unary_unary, unary_request,
                              method_handler.request_deserializer,
                              method_handler.response_serializer)
As for a workaround, I can only imagine passing a storage instance both to the interceptor and to the servicer during initialization. After that, the storage can be used as a member variable.
class StorageServerInterceptor(grpc.ServerInterceptor):

    def __init__(self, storage):
        self._storage = storage

    def intercept_service(self, continuation, handler_call_details):
        key = ...
        value = ...
        self._storage.set(key, value)
        ...
        return continuation(handler_call_details)

class Storage(...StorageServicer):

    def __init__(self, storage):
        self._storage = storage

    ...Servicer Handlers...
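For completeness, a sketch of how the two pieces might be wired together when the server is built; the ThreadSafeStorage class, the port and the commented-out add_StorageServicer_to_server helper are my assumptions, not part of the original answer:
import threading
from concurrent import futures

import grpc

class ThreadSafeStorage:
    """Minimal shared store with the set()/get() interface assumed above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

shared_storage = ThreadSafeStorage()

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    interceptors=[StorageServerInterceptor(shared_storage)],
)
# add_StorageServicer_to_server(Storage(shared_storage), server)  # helper from your generated *_pb2_grpc module
server.add_insecure_port('[::]:50051')
server.start()
server.wait_for_termination()
The point is simply that both objects hold a reference to the same storage instance, so whatever the interceptor records is visible to the servicer handling the same call.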
You can also wrap all the functions that will be called, set the thread-local value there, and return a new handler with the wrapped functions.
import threading

import grpc

# Module-level thread-local; setting attributes on a fresh threading.local()
# inside the wrapper would be thrown away immediately.
request_local = threading.local()

class MyInterceptor(grpc.ServerInterceptor):

    def wrap_handler(self, original_handler: grpc.RpcMethodHandler):
        if original_handler.unary_unary is not None:
            unary_unary = original_handler.unary_unary

            def wrapped_unary_unary(*args, **kwargs):
                request_local.my_var = "hello"
                return unary_unary(*args, **kwargs)

            new_unary_unary = wrapped_unary_unary
        else:
            new_unary_unary = None
        ...
        # do this for all the combinations to make new_unary_stream,
        # new_stream_unary, new_stream_stream
        new_handler = grpc.RpcMethodHandler()
        new_handler.request_streaming = original_handler.request_streaming
        new_handler.response_streaming = original_handler.response_streaming
        new_handler.request_deserializer = original_handler.request_deserializer
        new_handler.response_serializer = original_handler.response_serializer
        new_handler.unary_unary = new_unary_unary
        new_handler.unary_stream = new_unary_stream
        new_handler.stream_unary = new_stream_unary
        new_handler.stream_stream = new_stream_stream
        return new_handler

    def intercept_service(self, continuation, handler_call_details):
        return self.wrap_handler(continuation(handler_call_details))

Flask + SQLAlchemy + pytest - not rolling back my session

There are several similar questions on stack overflow, and I apologize in advance if I'm breaking etiquette by asking another one, but I just cannot seem to come up with the proper set of incantations to make this work.
I'm trying to use Flask + Flask-SQLAlchemy and then use pytest to manage the session such that when the function-scoped pytest fixture is torn down, the current transaction is rolled back.
Some of the other questions seem to advocate a "drop all and create all" pytest fixture at function scope, but I'm trying to use a joined session and rollbacks, since I have a LOT of tests; this would speed things up considerably.
http://alexmic.net/flask-sqlalchemy-pytest/ is where I found the original idea, and Isolating py.test DB sessions in Flask-SQLAlchemy is one of the questions recommending using function-level db re-creation.
I had also seen https://github.com/mitsuhiko/flask-sqlalchemy/pull/249 , but that appears to have been released with flask-sqlalchemy 2.1 (which I am using).
My current (very small, hopefully immediately understandable) repo is here:
https://github.com/hoopes/flask-pytest-example
There are two print statements - the first (in example/__init__.py) should have an Account object, and the second (in test/conftest.py) is where I expect the db to be cleared out after the transaction is rolled back.
If you pip install -r requirements.txt and run py.test -s from the test directory, you should see the two print statements.
I'm about at the end of my rope here - there must be something I'm missing, but for the life of me, I just can't seem to find it.
Help me, SO, you're my only hope!
You might want to give pytest-flask-sqlalchemy-transactions a try. It's a plugin that exposes a db_session fixture that accomplishes what you're looking for: it allows you to run database updates that will get rolled back when the test exits. The plugin is based on Alex Michael's blog post, with some additional support for nested transactions that covers a wider array of use cases. There are also some configuration options for mocking out connectables in your app so you can run arbitrary methods from your codebase, too.
For test_accounts.py, you could do something like this:
from example import db, Account

class TestAccounts(object):

    def test_update_view(self, db_session):
        test_acct = Account(username='abc')
        db_session.add(test_acct)
        db_session.commit()
        resp = self.client.post('/update',
                                data={'a': 1},
                                content_type='application/json')
        assert resp.status_code == 200
The plugin needs access to your database through a _db fixture, but since you already have a db fixture defined in conftest.py, you can set up database access easily:
@pytest.fixture(scope='session')
def _db(db):
    return db
You can find details on setup and installation in the docs. Hope this helps!
I'm also having issues with the rollback, my code can be found here
After reading some documentation, it seems the begin() function should be called on the session.
So in your case I would update the session fixture to this:
@pytest.yield_fixture(scope='function', autouse=True)
def session(db, request):
    """Creates a new database session for a test."""
    db.session.begin()
    yield db.session
    db.session.rollback()
    db.session.remove()
I didn't test this code, but when I try it on my code I get the following error:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
...
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/python.py", line 59, in filter_traceback
INTERNALERROR> return entry.path != cutdir1 and not entry.path.relto(cutdir2)
INTERNALERROR> AttributeError: 'str' object has no attribute 'relto'
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase

# global application scope. create Session class, engine
Session = sessionmaker()

engine = create_engine('postgresql://...')

class SomeTest(TestCase):

    def setUp(self):
        # connect to the database
        self.connection = engine.connect()

        # begin a non-ORM transaction
        self.trans = self.connection.begin()

        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()

        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()

        # return connection to the Engine
        self.connection.close()
The SQLAlchemy documentation has a solution for this case; the example above, which joins a Session into an external transaction, is taken from it.
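If you want the same external-transaction idea as a function-scoped pytest fixture (closer to the setup in the question), a sketch along the lines of Alex Michael's post might look like this; it assumes Flask-SQLAlchemy 2.x and a session-scoped db fixture exposing the Flask-SQLAlchemy object:
import pytest

@pytest.fixture(scope='function')
def session(db):
    """Give each test its own session bound to an external transaction,
    and roll everything back (even commits) when the test ends."""
    connection = db.engine.connect()
    transaction = connection.begin()

    # Bind a fresh scoped session to that connection and make the app use it.
    options = dict(bind=connection, binds={})
    session = db.create_scoped_session(options=options)
    db.session = session

    yield session

    session.remove()
    transaction.rollback()
    connection.close()
Because the outer connection-level transaction is never committed, even explicit db.session.commit() calls inside a test are discarded when it is rolled back.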

Assign new ticket to logged in user

In Trac, is it possible to set the "owner" of a new ticket to the logged-in user?
I've tried different values for default_owner in trac.ini, but no luck.
From the authoritative documentation at trac.edgewall.org on Trac tickets (wiki):
default_owner: Name of the default owner. If set to the text "< default >" (the default value), the component owner is used.
So this cannot help you, for sure. There are a few more approaches left, depending on your Genshi template knowledge, Python skills, etc. Your requirement is not hard to fulfill, but you cannot get it with stock Trac. You'll need to modify the (new) ticket template or add a relatively small plugin (tested with Trac 1.1.1):
from trac.core import Component, implements
from trac.ticket.api import ITicketManipulator

class DefaultTicketOwnerManipulator(Component):
    """Set ticket owner to logged in user, if available."""

    implements(ITicketManipulator)

    def prepare_ticket(self, req, ticket, fields, actions):
        pass

    def validate_ticket(self, req, ticket):
        if not ticket['owner'] and req.authname:
            ticket['owner'] = req.authname
            # Optionally report-back manipulation, so require a second POST.
            # return [(None, "Owner set to self (%s)" % req.authname)]
        return []
Hint: uncomment the 'return' inside validate_ticket so that the owner is not altered silently right on save; the manipulation is reported back and a second POST is required, which is also useful for testing.
The following applies to Trac 1.1.3 or later, which is scheduled for release on 1st Jan 2015. It is implemented on the Trac trunk, which we do a pretty good job of keeping stable. The default create workflow actions are:
create = <none> -> new
create.default = 1
create_and_assign = <none> -> assigned
create_and_assign.label = assign
create_and_assign.operations = may_set_owner
create_and_assign.permissions = TICKET_MODIFY
To have the create action assign to the current user, just add:
create.operations = set_owner_to_self
Renaming the action might be appropriate, putting the ticket into either the assigned or accepted state:
create_and_accept = <none> -> accepted
create_and_accept.label = accept
create_and_accept.default = 1
create_and_accept.operations = set_owner_to_self