Get the inserted primary key ids using bulk_save_objects - flask-sqlalchemy

How can I get the inserted Ids (primary key generated) after using session.bulk_save_objects?
I tried this:
obj_list = []
for x in y:
    obj = Post(...)
    obj_list.append(obj)
session.bulk_save_objects(obj_list)
session.commit()
for i in obj_list:
    print(i.id)
The ids are None. The rows are successfully inserted.

You need to pass return_defaults=True to the bulk_save_objects method, like below, to get the primary keys of the records:
obj_list = []
for x in y:
    obj = Post(...)
    obj_list.append(obj)
session.bulk_save_objects(obj_list, return_defaults=True)
session.commit()
for i in obj_list:
    print(i.id)
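For reference, the effect can be reproduced with plain SQLAlchemy and an in-memory SQLite database; the Post model and its title column below are stand-ins for whatever the real model looks like:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Post(Base):
    __tablename__ = 'post'
    id = Column(Integer, primary_key=True)
    title = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

obj_list = [Post(title='post %d' % n) for n in range(3)]
# without return_defaults=True, every obj.id would still be None here
session.bulk_save_objects(obj_list, return_defaults=True)
session.commit()

ids = [obj.id for obj in obj_list]
print(ids)  # primary keys are now populated
```

Note that with return_defaults=True the rows are inserted one at a time so the generated keys can be fetched back, which gives up most of the speed advantage of the bulk operation.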

A more performant way to do this, rather than bulk_save_objects (which emits an INSERT statement per row when it has to return defaults), is to use sqlalchemy.dialects.postgresql.insert with a RETURNING clause:
insert_stmt = insert(Table).values(dictionary_mappings).returning(Table.id)
ids = db.session.execute(insert_stmt).fetchall()
db.session.commit()

Related

IndexedDB composite index partial match

I can't find an answer to this anywhere.
I have an indexeddb composite index of a group id and a time, which I use to sort.
let tmp_CREATEDTIMEindex = texts.index('GROUP_ID, CREATEDTIME');
This works great, except I need the result to reflect only the group id, not the time. How do I get a result from a match on just the group id?
To clarify, this returns one record:
let request = tmp_CREATEDTIMEindex.getAll(['someid', 'August, 25 2022 06:52:02']);
I need it to return all records for the group; conceptually something like this (a wildcard match, which IndexedDB does not support directly):
let request = tmp_CREATEDTIMEindex.getAll(['someid', '*']);
You can use a key range:
let range = IDBKeyRange.bound(['someid'], ['someid\x00'], true, true);
let request = tmp_CREATEDTIMEindex.getAll(range);
['someid'] sorts before any other composite key starting with 'someid'
['someid\x00'] sorts after any other composite key starting with 'someid'
the true, true arguments exclude those two bounding keys themselves from the results

Find records containing a specific tag

I have a following DB schema:
User
- id
- name (String)
UserTag
- user_id
- tag_id
Tag
- id
- key (String)
I also have a pretty complex user-search chain with a lot of where statements. I'm trying to figure out how to add one more where condition to the chain: filtering for users that have a specific tag assigned (the ID of this tag is unknown; only its key is known).
So, here is more or less how my code looks:
col = User.all
col = col.where('cats_count <= 0') if args[:no_cat]
col = col.where('dogs_count <= 0') if args[:no_dog]
col = col.where('other_pets_count <= 0') if args[:no_other_pet]
# ... tag logic filtering here ...
col = col.where('age > 100') if args[:old]
And I want to filter for users who have a Tag with key=non_smoking assigned. Ideally I would love it to be database-engine independent; if that matters, I'm on sqlite/postgres.
Honestly I have no idea how to deal with this; I probably lack some knowledge of SQL, and of Rails/ActiveRecord.
You can specify joins and then put conditions on the joined tables. So maybe something like (untested, off the top of my head; note that the where clause references the joined table by its table name, tags):
col.joins(user_tags: :tag).where(tags: { key: 'non_smoking' })

Check count before inserting data, Odoo 9

In a custom Odoo 9 module, how can I check whether a record already exists before inserting a new record into the database?
For example:
If the table project.task already contains a record with name "PARIS" and date_deadline "2017-01-01", I need a warning before the code below executes...
vals = {'name': 'PARIS', 'date_deadline': '2017-01-01'}
create_new_task = http.request.env['project.task'].create(vals)
Or maybe get a count from project.task where name='PARIS' and date_deadline='2017-01-01' before the save button is clicked!
Any simple solution?
@api.one
@api.constrains('date', 'time')  # time is a Many2one
def _check_existing(self):
    count = self.env['test.module'].search_count(
        [('date', '=', self.date), ('time', '=', self.time.id)])
    print(self.date)     # 2017-01-01
    print(self.time.id)  # 1
    if count >= 1:
        raise ValidationError(_('DUPLICATE!'))
After trying to insert a new record where date = 2017-02-02 and time = 1, I get this message:
duplicate key value violates unique constraint "test_module_time_uniq"
DETAIL: Key (time)=(1) already exists.
Time exists, but the date is different! I need a constraint over both values! In the database I have only one row, where date = 2017-01-01 and time = 1.
If you want to prevent duplicate records, use _sql_constraints:
class ModelClass(models.Model):
    _name = 'model.name'

    # your fields ...

    # add a unique constraint
    _sql_constraints = [
        ('name_task_uniq', 'UNIQUE (name, date_deadline)',
         'There is already a task with this name and this date'),
    ]

    # or use api.constrains if your constraint cannot be expressed in PostgreSQL
    @api.constrains('name', 'date_deadline')
    def _check_existing(self):
        # add your logic to check the constraint here
        pass
duplicate key value violates unique constraint "test_module_time_uniq"
DETAIL: Key (time)=(1) already exists.
This error is thrown by Postgres, not Odoo: you already have a unique constraint on time alone, and you need to drop it first.
NB: Who added that unique constraint, was it you while testing your code, or someone else? Answer this in a comment ^^
ALTER TABLE project_task DROP CONSTRAINT test_module_time_uniq;
You can use something along these lines:
@api.constrains('name', 'date_deadline')
def _check_existing(self):
    # search for an entry with these params, and if found:
    # raise ValidationError('Item already exists')
    pass

sqlalchemy: paginate does not return the expected number of elements

I am using flask-sqlalchemy together with a sqlite database. I'm trying to get all votes before date1:
sub_query = models.VoteList.query.filter(models.VoteList.vote_datetime < date1)
sub_query = sub_query.filter(models.VoteList.group_id == selected_group.id)
sub_query = sub_query.filter(models.VoteList.user_id == g.user.id)
sub_query = sub_query.subquery()
old_votes = models.Papers.query.join(sub_query, sub_query.c.arxiv_id == models.Papers.arxiv_id).paginate(1, 4, False)
where the database model for VoteList looks like this
class VoteList(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    group_id = db.Column(db.Integer, db.ForeignKey('groups.id'))
    arxiv_id = db.Column(db.String(1000), db.ForeignKey('papers.arxiv_id'))
    vote_datetime = db.Column(db.DateTime)
    group = db.relationship("Groups", backref=db.backref('vote_list', lazy='dynamic'))
    user = db.relationship("User", backref=db.backref('votes', lazy='dynamic'), foreign_keys=[user_id])

    def __repr__(self):
        return '<VoteList %r>' % (self.id)
I made sure that the 'old_votes' selection above has 20 elements; if I use .all() instead of .paginate() I get the expected 20 results.
Since I used a max results value of 4 in the example above, I would expect old_votes.items to have 4 elements, but it has only 2. If I increase the max results value, the number of elements also increases, but it always stays below the max results value. Paginate seems to mess something up here.
Any ideas?
Thanks,
carl
EDIT
I noticed that it works fine if I apply the paginate() function after add_columns(). So if I add (for no good reason) a column with
old_votes = models.Papers.query.join(sub_query, sub_query.c.arxiv_id == models.Papers.arxiv_id)
old_votes = old_votes.add_columns(sub_query.c.vote_datetime).paginate(page, VOTES_PER_PAGE, False)
it works fine. But since I don't need that column, it would still be interesting to know what goes wrong in my example above.
Looks to me that the 4 rows returned (and filtered) by the query represent 4 different rows of the VoteList table, but they refer to only 2 different Papers models. When model instances are created, duplicates are filtered out, and therefore you get fewer rows. When you add a column from the subquery, the results are tuples of (Papers, vote_datetime), and in that case no duplicates are removed.
I encountered the same issue and I applied van's answer but it did not work. However I agree with van's explanation so I added .distinct() to the query like this:
old_votes = models.Papers.query.distinct().join(sub_query, sub_query.c.arxiv_id == models.Papers.arxiv_id).paginate(1, 4, False)
It worked as I expected.
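van's explanation can be reproduced with plain SQLAlchemy and an in-memory SQLite database (model names mirror the question; columns are trimmed to the minimum): the LIMIT applies to the joined rows before the legacy Query collapses duplicate Papers instances, whereas distinct() pushes deduplication into the SQL itself.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Papers(Base):
    __tablename__ = "papers"
    arxiv_id = Column(String, primary_key=True)

class VoteList(Base):
    __tablename__ = "vote_list"
    id = Column(Integer, primary_key=True)
    arxiv_id = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

session.add_all([Papers(arxiv_id=p) for p in "abc"])
# two votes per paper, so the join yields two rows per Papers row
session.add_all([VoteList(arxiv_id=p) for p in "aabbcc"])
session.commit()

sub = session.query(VoteList).subquery()
joined = session.query(Papers).join(sub, sub.c.arxiv_id == Papers.arxiv_id)

# LIMIT counts joined rows (a, a, b, b), so fewer than 4 models come back
page = joined.limit(4).all()
# SELECT DISTINCT limits over unique papers instead: a, b, c
page_distinct = joined.distinct().limit(4).all()
print(len(page), len(page_distinct))
```

The same mechanism explains the paginate() behavior in the question, since paginate() is just LIMIT/OFFSET underneath.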

Explain Keyed Tuple output from SQLAlchemy ORM Query using Aliased

Please help me improve/understand queries using an aliased class. Consider an example with movement between two locations described as follows.
class Location(Base):
    __tablename__ = 'location'
    id = Column(Integer, primary_key=True)

class Movement(Base):
    __tablename__ = 'movement'
    id = Column(Integer, primary_key=True)
    from_id = Column(None, ForeignKey('location.id'))
    to_id = Column(None, ForeignKey('location.id'))
    from_location = relationship('Location', foreign_keys=from_id)
    to_location = relationship('Location', foreign_keys=to_id)
To join three tables in a query, I'm using the aliased() function from sqlalchemy.orm:
FromLocation = aliased(Location)
ToLocation = aliased(Location)
r = session.query(Movement, FromLocation, ToLocation).\
    join(FromLocation, Movement.from_id == FromLocation.id).\
    join(ToLocation, Movement.to_id == ToLocation.id).first()
First question is "What's the intelligent way to work with r?" The query returns a keyed tuple, but the only key is 'Movement', there's no 'FromLocation' as I would expect. I can get it with r[1], but that's easily broken.
Second question is "Did I put in the relationship right?" I didn't think I would have to specify the join target so explicitly. But without the targets specified, I get an error:
r = session.query(Movement, FromLocation, ToLocation).\
    join(FromLocation).\
    join(ToLocation)
InvalidRequestError: Could not find a FROM clause to join from. Tried joining to <AliasedClass at 0x10cfa16d8; Location>, but got: Can't determine join between 'movement' and '%(4512717680 location)s'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly.
Yes, I see the two foreign keys, but how to map them correctly?
Option-1: To have names in the KeyedTuple, just add names to the aliases:
FromLocation = aliased(Location, name="From")
ToLocation = aliased(Location, name="To")
# ...
print(r.keys)
# >>>> ['Movement', 'From', 'To']
Option-2: Create a query that returns only Movement instance(s), but preloads both locations. Note also the alternative join syntax: specifying the relationship instead of explicit key pairs.
r = (session.query(Movement)
     .join(FromLocation, Movement.from_location)
     .join(ToLocation, Movement.to_location)
     .options(contains_eager(Movement.from_location, alias=FromLocation))
     .options(contains_eager(Movement.to_location, alias=ToLocation))
     ).first()
print(r)
print(r.from_location)
print(r.to_location)
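For completeness, here is a self-contained sketch of Option-1 against an in-memory SQLite database, showing that the alias names surface as the row's keys. On SQLAlchemy 1.4+ the key names live on r._fields (older versions exposed the r.keys property shown in the answer); everything else mirrors the models from the question.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, aliased, declarative_base, relationship

Base = declarative_base()

class Location(Base):
    __tablename__ = 'location'
    id = Column(Integer, primary_key=True)

class Movement(Base):
    __tablename__ = 'movement'
    id = Column(Integer, primary_key=True)
    from_id = Column(Integer, ForeignKey('location.id'))
    to_id = Column(Integer, ForeignKey('location.id'))
    from_location = relationship('Location', foreign_keys=from_id)
    to_location = relationship('Location', foreign_keys=to_id)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

a, b = Location(), Location()
session.add_all([a, b])
session.flush()                      # assign primary keys
session.add(Movement(from_id=a.id, to_id=b.id))
session.commit()

FromLocation = aliased(Location, name='From')
ToLocation = aliased(Location, name='To')
r = (session.query(Movement, FromLocation, ToLocation)
     .join(FromLocation, Movement.from_id == FromLocation.id)
     .join(ToLocation, Movement.to_id == ToLocation.id)
     .first())

print(r._fields)          # the alias names appear as row keys
print(r.From.id, r.To.id)  # so the aliased entities are addressable by name
```

With named keys, r.From and r.To are stable accessors, avoiding the fragile positional r[1] mentioned in the question.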