Optimizing code for better performance and quality - odoo

I have this calculation method that calculates six fields and a total.
It works.
The question is how I can optimize it, both performance-wise and code-quality-wise.
I just want to get some suggestions on how to write better code.
def _ocnhange_date(self):
    date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    self.date = date
    self.drawer_potential = self.drawer_id.product_categ_price * self.drawer_qty
    self.flexible_potential = self.flexible_id.product_categ_price * self.flexible_qty
    self.runner_potential = self.runner_id.product_categ_price * self.runner_qty
    self.group_1_potential = self.group_1_id.product_categ_price * self.group_1_qty
    self.group_2_potential = self.group_2_id.product_categ_price * self.group_2_qty
    self.group_3_potential = self.group_3_id.product_categ_price * self.group_3_qty
    total = [self.drawer_potential, self.flexible_potential, self.runner_potential,
             self.group_1_potential, self.group_2_potential, self.group_3_potential]
    self.total_potentail = sum(total)

First things first: you should worry about performance mostly on batch operations. Your case is an onchange method, which means:
it will be triggered manually by user interaction.
it will only affect a single record at a time.
it will not perform database writes.
So, basically, this one will not be a critical bottleneck in your module.
However, you're asking how it could get better, so here it goes. This is just an idea, and in some places merely different (not better), but maybe it shows an approach you would prefer somewhere:
def _ocnhange_date(self):
    # Use this alternative method, easier to maintain
    self.date = fields.Datetime.now()
    # Less code here, although almost equal
    # performance (possibly less)
    total = 0
    for field in ("drawer", "flexible", "runner",
                  "group_1", "group_2", "group_3"):
        potential = self["%s_id" % field].product_categ_price * self["%s_qty" % field]
        total += potential
        self["%s_potential" % field] = potential
    # We accumulate a running total instead of building a list and summing it
    self.total_potential = total

I only see two things you can improve here:
Use Odoo's Datetime class to get "now", because it already takes Odoo's datetime format into consideration. In the end that's more maintainable, because if Odoo ever decided to change the format system-wide, you would otherwise have to change your method, too.
Try to avoid so many assignments and instead use methods which allow a combined update of some values. For onchange methods this would be update() and for other value changes it's obviously write().
def _onchange_date(self):
    self.update({
        'date': fields.Datetime.now(),
        'drawer_potential': self.drawer_id.product_categ_price * self.drawer_qty,
        'flexible_potential': self.flexible_id.product_categ_price * self.flexible_qty,
        # and so on
    })
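If you like both suggestions, they combine naturally: build the values dictionary inside the loop and call update() once at the end. A minimal sketch, assuming the usual Odoo imports and hypothetical model and trigger-field names (adjust them to your actual model):

from odoo import api, fields, models

class PotentialSheet(models.Model):
    _inherit = 'x.potential.sheet'  # hypothetical model name, replace with yours

    @api.onchange('drawer_id', 'drawer_qty')  # list all the *_id / *_qty trigger fields here
    def _onchange_date(self):
        values = {'date': fields.Datetime.now()}
        total = 0
        for name in ("drawer", "flexible", "runner", "group_1", "group_2", "group_3"):
            potential = self["%s_id" % name].product_categ_price * self["%s_qty" % name]
            values["%s_potential" % name] = potential
            total += potential
        values['total_potential'] = total
        self.update(values)  # one combined update instead of eight separate assignments

For a single record the speed difference is marginal either way; the gain is mostly readability and having one obvious place to extend when fields are added.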

Related

How to implement a time based length queue in F#?

This is a followup to question: How to optimize this moving average calculation, in F#
To summarize the original question: I need to make a moving average of a set of data I collect; each data point has a timestamp and I need to process data up to a certain timestamp.
This means that I have a list of variable size to average.
The original question has the implementation as a queue where elements gets added and eventually removed as they get too old.
But, in the end, iterating through a queue to make the average is slow.
Originally the bulk of the CPU time was spent finding the data to average, but then once this problem was removed by only keeping the data needed in the first place, the Seq.average call proved to be very slow.
It looks like the original mechanism (based on Queue<>) is not appropriate and this question is about finding a new one.
I can think of two solutions:
implement this as a circular buffer which is large enough to accommodate the worst-case scenario; this would allow using an array and only two passes to compute the sum.
quantize the data into buckets and pre-sum it, but I'm not sure the extra complexity will help performance.
Is there any implementation of a circular buffer that would behave similarly to a Queue<>?
The fastest code, so far, is:
module PriceMovingAverage =

    open System
    open System.Collections.Generic

    // moving average queue
    let private timeQueue = Queue<DateTime>()
    let private priceQueue = Queue<float>()

    // update moving average
    let updateMovingAverage (tradeData: TradeData) priceBasePeriod =
        // add the new price
        timeQueue.Enqueue(tradeData.Timestamp)
        priceQueue.Enqueue(float tradeData.Price)
        // remove the items older than the price base period
        let removeOlderThan = tradeData.Timestamp - priceBasePeriod
        let rec dequeueLoop () =
            let p = timeQueue.Peek()
            if p < removeOlderThan then
                timeQueue.Dequeue() |> ignore
                priceQueue.Dequeue() |> ignore
                dequeueLoop()
        dequeueLoop()

    // get the moving average
    let getPrice () =
        try
            Some (
                priceQueue
                |> Seq.average   // <- all CPU time goes here
                |> decimal
            )
        with _ ->
            None
Based on a queue length of 10-15k I'd say there's definitely scope to consider batching trades into precomputed blocks of maybe around 100 trades.
Add a few types:
type TradeBlock = {
    data: TradeData array
    startTime: DateTime
    endTime: DateTime
    sum: float
    count: int
}

type AvgTradeData =
    | Trade of TradeData
    | Block of TradeBlock
I'd then make the moving average use a DList<AvgTradeData>. (https://fsprojects.github.io/FSharpx.Collections/reference/fsharpx-collections-dlist-1.html) The first element in the DList is summed manually if startTime is after the price period and removed from the list once the price period exceeds the endTime. The last elements in the list are kept as Trade tradeData until 100 are appended and then all removed from the tail and turned into a TradeBlock.
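To make the underlying idea (never re-scan the whole window) concrete in a language-neutral way, here is a minimal Python sketch; the class and field names, and the seconds-based timestamps, are my own illustration rather than the F# API above. It keeps a running sum next to the queue, so the average is O(1) regardless of window length:

from collections import deque

class MovingAverage(object):
    """Time-based moving average with an incrementally maintained sum."""

    def __init__(self, base_period_seconds):
        self.base_period = base_period_seconds
        self.window = deque()        # (timestamp, price) pairs, oldest first
        self.running_sum = 0.0

    def update(self, timestamp, price):
        self.window.append((timestamp, price))
        self.running_sum += price
        cutoff = timestamp - self.base_period
        # drop items older than the base period, adjusting the sum as we go
        while self.window and self.window[0][0] < cutoff:
            _, old_price = self.window.popleft()
            self.running_sum -= old_price

    def get_price(self):
        if not self.window:
            return None
        return self.running_sum / len(self.window)

The TradeBlock design described above goes one step further by also caching per-block sums and counts, so expiring an old block adjusts the total with a single subtraction instead of one dequeue per trade.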

Best way of solving an optimization with multiple variables in Matlab?

I am trying to compute numerically the solutions for a system of many equations and variables (100+). I tried so far three things:
I know that the vector of p(i) (which contains most of the endogenous variables) is decreasing. So I simply picked some starting points and then increased (decreased) my guess whenever I saw that the specific p was too low (high). Of course this is always conditional on the others being fixed, which is not the case. It should eventually work, but it is neither efficient nor obvious that I reach a solution in finite time. It did work when reducing the system to 4-6 variables, though.
I could create 100+ loops around each other and use bisection for each loop. This would eventually lead me to the solution, but it would take ages both to program (as I have no idea how to create n loops around each other without actually writing the loops out, which is also bad because I would like to increase/decrease the number of variables easily) and to execute.
I tried fminsearch, but as expected for that vast number of variables it gets nowhere.
I would appreciate any ideas. Here is the code (this is the fminsearch version I tried):
This is the run file:
clear all
clc

% parameter
z = 1.2;
w = 20;
lam = 0.7;
tau = 1;
N = 1000;
t_min = 1;
t_max = 4;
M = 6;
a_min = 0.6;
a_max = 0.8;
t = zeros(1,N);
alp = zeros(1,M);
p = zeros(1,M);
p_min = 2;
p_max = 1;

for i = 1:N
    t(i) = t_min + (i-1)*(t_max - t_min)/(N-1);
end

for i = 1:M
    alp(i) = a_min + (i-1)*(a_max - a_min)/(M-1);
    p(i) = p_min + (i-1)*(p_max - p_min)/(M-1);
end

fun = @(p) david(p, z, w, lam, tau, N, M, t, alp);
p0 = p;
fminsearch(fun, p0)
And this is the program-file:
function crit = david(p, z, w, lam, tau, N, M, t, alp)
X = zeros(M,N);
pi = zeros(M,N);
C = zeros(1,N);
Xa = zeros(1,N);
Z = zeros(1,M);
rl = 0.01;
rh = 1.99;
EXD = 140;

while (abs(EXD) > 100)
    r1 = rl + 0.5*(rh-rl);
    for i = 1:M
        for j = 1:N
            X(i,j) = min(w*(1+lam), (alp(i) * p(i) / r1)^(1/(1-alp(i))) * t(j)^((z-alp(i))/(1-alp(i))));
            pi(i,j) = p(i) * t(j)^(z-alp(i)) * X(i,j)^(alp(i)) - r1*X(i,j);
        end
    end
    [C,I] = max(pi);
    Xa(1) = X(I(1),1);
    for j = 2:N
        Xa(j) = X(I(j),j);
    end
    EXD = sum(Xa) - N*w;
    if (abs(EXD) > 100 && EXD > 0)
        rl = r1;
    elseif (abs(EXD) > 100 && EXD < 0)
        rh = r1;
    end
end

Ya = zeros(M,N);
for j = 1:N
    Ya(I(j),j) = t(j)^(z-alp(I(j))) * X(I(j),j)^(alp(I(j)));
end
Yi = sum(Ya,2);

if (Yi(1) == 0)
    Z(1) = -50;
end
for j = 2:M
    if (Yi(j) == 0)
        Z(j) = -50;
    else
        Z(j) = (p(1)/p(j))^tau - Yi(j)/Yi(1);
    end
end

zz = sum(abs(Z))
crit = (sum(abs(Z)));
First of all my recommendation: use your brain.
What do you know about the function? Can you use a gradient approach, linearize the problem, or perhaps fix most of the variables? If not, think twice before deciding that you are really interested in all 100 variables, and perhaps simplify the problem.
Now, if that is not possible, read on:
If you found a way to quickly get a local optimum, you could simply wrap a loop around it to try different starting points and hope you will find a good optimum.
If you really need to make lots of loops (and a variable number of them), it can be done with recursion, but it is not easily explained; there is a sketch of the idea after these suggestions.
If you just quickly want to make a fixed number of loops inside each other this can easily be done in excel (hint: loop variables can be called t1,t2 ... )
If you really need to evaluate a function at a lot of points, probably creating all the points first using ndgrid and then evaluating them all at once is preferable. (Needless to say this will not be a nice solution for 100 nontrivial variables)
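To make the recursion/grid idea concrete, here is a minimal language-neutral sketch in Python (the helper name is hypothetical; in MATLAB, ndgrid plus a single linear loop plays the same role as the Cartesian product used here):

from itertools import product

def evaluate_on_grid(f, axes):
    """Brute-force f over a grid. axes holds one 1-D list of candidate values per
    variable, so len(axes) plays the role of the number of nested loops."""
    best_point, best_value = None, float("inf")
    # one loop over the Cartesian product instead of len(axes) hand-written loops
    for point in product(*axes):
        value = f(point)
        if value < best_value:
            best_point, best_value = point, value
    return best_point, best_value

# usage sketch: 3 variables with 10 candidate values each -> 1000 evaluations
# best_p, best_crit = evaluate_on_grid(lambda p: sum(x**2 for x in p),
#                                      [[i / 10.0 for i in range(10)]] * 3)

Of course the grid grows exponentially with the number of variables, which is exactly the caveat in the last suggestion above.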

Implementing Sticky Threads In Django

Note: I'm new to both django and databases, so please excuse my ignorance.
I'm trying to implement a forum in Django and wish to have sticky threads. The naive way I was thinking of doing this was to define the Thread model like this:
class Thread(models.Model):
    title = models.CharField(max_length=max_title_length)
    author = models.ForeignKey(Player, related_name="nonsticky_threads")
    post_date = models.DateField()
    parent = models.ForeignKey(Subsection, related_name="nonsticky_threads")
    closed = models.BooleanField()
    sticky = models.BooleanField()
and then to get the sticky threads, do something like this:
sticky_threads = Thread.objects.all().filter(sticky=True)
The problem is that, at least theoretically, this has O(n) complexity, which sounds bad. (Since sticky threads are always displayed on the first page, this query will be run fairly frequently.) However, I don't know how database/Django cleverness will affect the final performance, or whether it will still be bad.
My current alternative is to also create distinct Thread and Sticky_Thread classes:
class Thread(models.Model):
    title = models.CharField(max_length=max_title_length)
    author = models.ForeignKey(Player, related_name="nonsticky_threads")
    post_date = models.DateField()
    parent = models.ForeignKey(Subsection, related_name="nonsticky_threads")
    closed = models.BooleanField()

class Sticky_Thread(models.Model):
    title = models.CharField(max_length=max_title_length)
    author = models.ForeignKey(Player, related_name="sticky_threads")
    post_date = models.DateField()
    parent = models.ForeignKey(Subsection, related_name="sticky_threads")
    closed = models.BooleanField()
letting me grab the sticky threads in O(1) time no matter what. What I don't like about this approach is that now if I want to just get all of a player's threads, I have to implement a special threads property like this:
class Player(models.Model):
    [snip]

    @property
    def threads(self):
        return self.sticky_threads | self.nonsticky_threads
and this approach feels ugly.
Is there an obviously best way to implement something like this? Do I just need to do timings to see whether the naive way is acceptable? (I'm implementing this as a learning exercise, so I don't really have hard limits, which makes such a check a little difficult.) If so, how would you recommend I do that? Is something like timeit the best way? Is there a better alternative?
Thanks!
Your analysis of the complexity of those two operations is way off. It's simply not true to classify the filter operation as O(n) and the two separate classes as O(1) - I don't know what you're using to make that distinction. Databases are highly optimized for selecting on individual criteria: an index on the sticky column will make the filter query almost exactly the same as querying for everything from a separate table.
The first way is without question the right way to go about this, as long as you ensure that your sticky column is indexed.
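In Django that indexing is a one-line change on the model field; db_index is a standard field option (the default=False is added here only because a BooleanField usually wants one):

from django.db import models

class Thread(models.Model):
    # ... other fields as in the question ...
    sticky = models.BooleanField(default=False, db_index=True)  # indexed, so filter(sticky=True) can use it

After changing the field you regenerate the schema (syncdb or migrations, depending on your Django version), and filter(sticky=True) becomes an index lookup rather than a scan of the whole thread table.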

Why is iterating through a large Django QuerySet consuming massive amounts of memory?

The table in question contains roughly ten million rows.
for event in Event.objects.all():
    print event
This causes memory usage to increase steadily to 4 GB or so, at which point the rows print rapidly. The lengthy delay before the first row printed surprised me – I expected it to print almost instantly.
I also tried Event.objects.iterator() which behaved the same way.
I don't understand what Django is loading into memory or why it is doing this. I expected Django to iterate through the results at the database level, which'd mean the results would be printed at roughly a constant rate (rather than all at once after a lengthy wait).
What have I misunderstood?
(I don't know whether it's relevant, but I'm using PostgreSQL.)
Nate C was close, but not quite.
From the docs:
You can evaluate a QuerySet in the following ways:
Iteration. A QuerySet is iterable, and it executes its database query the first time you iterate over it. For example, this will print the headline of all entries in the database:
for e in Entry.objects.all():
    print e.headline
So your ten million rows are retrieved, all at once, when you first enter that loop and get the iterating form of the queryset. The wait you experience is Django loading the database rows and creating objects for each one, before returning something you can actually iterate over. Then you have everything in memory, and the results come spilling out.
From my reading of the docs, iterator() does nothing more than bypass QuerySet's internal caching mechanisms. I think it might make sense for it to do a one-by-one thing, but that would conversely require ten million individual hits on your database. Maybe not all that desirable.
Iterating over large datasets efficiently is something we still haven't gotten quite right, but there are some snippets out there you might find useful for your purposes:
Memory Efficient Django QuerySet iterator
batch querysets
QuerySet Foreach
It might not be the fastest or most efficient, but as a ready-made solution why not use Django core's Paginator and Page objects, documented here:
https://docs.djangoproject.com/en/dev/topics/pagination/
Something like this:
from django.core.paginator import Paginator
from djangoapp.models import model

paginator = Paginator(model.objects.all(), 1000)  # chunks of 1000, change this to the desired chunk size

for page in range(1, paginator.num_pages + 1):
    for row in paginator.page(page).object_list:
        # here you can do whatever you want with the row
        pass
    print "done processing page %s" % page
Django's default behavior is to cache the whole result of the QuerySet when it evaluates the query. You can use the QuerySet's iterator method to avoid this caching:
for event in Event.objects.all().iterator():
    print event
https://docs.djangoproject.com/en/stable/ref/models/querysets/#iterator
The iterator() method evaluates the queryset and then reads the results directly without doing caching at the QuerySet level. This method results in better performance and a significant reduction in memory when iterating over a large number of objects that you only need to access once. Note that caching is still done at the database level.
Using iterator() reduces memory usage for me, but it is still higher than I expected. Using the paginator approach suggested by mpaf uses much less memory, but is 2-3x slower for my test case.
from django.core.paginator import Paginator

def chunked_iterator(queryset, chunk_size=10000):
    paginator = Paginator(queryset, chunk_size)
    for page in range(1, paginator.num_pages + 1):
        for obj in paginator.page(page).object_list:
            yield obj

for event in chunked_iterator(Event.objects.all()):
    print event
For large amounts of records, a database cursor performs even better. You do need raw SQL in Django; the Django cursor is something different from a SQL cursor.
The LIMIT/OFFSET method suggested by Nate C might be good enough for your situation. For large amounts of data it is slower than a cursor, because it has to run the same query over and over again and has to jump over more and more results.
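To make the cursor approach concrete, here is a rough sketch of what it can look like with PostgreSQL and psycopg2; the function name, table name and chunk size are illustrative. A named psycopg2 cursor is server-side, so rows are streamed in batches instead of being loaded all at once:

from django.db import connection, transaction

def stream_events(chunk_size=2000):
    connection.ensure_connection()
    # keep the named (server-side) cursor inside a single transaction
    with transaction.atomic():
        with connection.connection.cursor(name="event_stream") as cursor:
            cursor.itersize = chunk_size
            cursor.execute("SELECT * FROM myapp_event")  # table name is an assumption
            for row in cursor:
                yield row

Note that this yields raw rows, not Event model instances.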
Django doesn't have a good solution for fetching a large number of items from the database.
import gc

# Get the events in reverse order
eids = Event.objects.order_by("-id").values_list("id", flat=True)

for index, eid in enumerate(eids):
    event = Event.objects.get(id=eid)
    # do necessary work with event
    if index % 100 == 0:
        gc.collect()
        print("completed 100 items")
values_list can be used to fetch all the ids from the database and then fetch each object separately. Over time, large objects are created in memory and are not garbage collected until the for loop exits. The code above performs manual garbage collection after every 100th item is consumed.
This is from the docs:
http://docs.djangoproject.com/en/dev/ref/models/querysets/
No database activity actually occurs until you do something to evaluate the queryset.
So when print event runs, the query fires (which is a full table scan, according to your command) and loads the results. You're asking for all the objects, and there is no way to get the first object without getting all of them.
But if you do something like:
Event.objects.all()[300:900]
http://docs.djangoproject.com/en/dev/topics/db/queries/#limiting-querysets
Then it will add OFFSET and LIMIT to the SQL internally.
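You can check this yourself by printing the generated SQL (the query attribute is widely used for debugging, even though it is not really public API):

qs = Event.objects.all()[300:900]
print qs.query  # the SQL ends with something like LIMIT 600 OFFSET 300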
A massive amount of memory gets consumed before the queryset can be iterated, because all the database rows for the whole query get processed into objects at once, and that can be a lot of processing depending on the number of rows.
You can chunk up your queryset into smaller, digestible bits. I call the pattern for doing this "spoonfeeding". Here's an implementation with a progress bar that I use in my management commands; first pip3 install tqdm:
from tqdm import tqdm

def spoonfeed(qs, func, chunk=1000, start=0):
    """
    Chunk up a large queryset and run func on each item.
    Works with automatic primary key fields.
    chunk -- how many objects to take on at once
    start -- PK to start from
    >>> spoonfeed(Spam.objects.all(), nom_nom)
    """
    end = qs.order_by('pk').last()
    progressbar = tqdm(total=qs.count())
    if not end:
        return
    while start < end.pk:
        for o in qs.filter(pk__gt=start, pk__lte=start + chunk):
            func(o)
            progressbar.update(1)
        start += chunk
    progressbar.close()
To use this you write a function that does operations on your object:
def set_population(town):
    town.population = calculate_population(...)
    town.save()
and then run that function on your queryset:
spoonfeed(Town.objects.all(), set_population)
Here is a solution including len and count:
class GeneratorWithLen(object):
    """
    Generator that includes len and count for the given queryset
    """
    def __init__(self, generator, length):
        self.generator = generator
        self.length = length

    def __len__(self):
        return self.length

    def __iter__(self):
        return self.generator

    def __getitem__(self, item):
        return self.generator.__getitem__(item)

    def next(self):
        return next(self.generator)

    def count(self):
        return self.__len__()


def batch(queryset, batch_size=1024):
    """
    Returns a generator that does not cache results on the QuerySet.
    Aimed at HUGE/ENORMOUS data sets: no caching, no more memory used than batch_size
    :param batch_size: size of the maximum chunk of data kept in memory
    :return: generator
    """
    total = queryset.count()

    def batch_qs(_qs, _batch_size=batch_size):
        """
        Returns a (start, end, total, queryset) tuple for each batch in the given
        queryset.
        """
        for start in range(0, total, _batch_size):
            end = min(start + _batch_size, total)
            yield (start, end, total, _qs[start:end])

    def generate_items():
        # Clear any explicit ordering; slicing then relies on the autoincremental PK
        unordered_qs = queryset.order_by()
        for start, end, total, qs in batch_qs(unordered_qs):
            for item in qs:
                yield item

    return GeneratorWithLen(generate_items(), total)
Usage:
events = batch(Event.objects.all())
len(events) == events.count()
for event in events:
    # Do something with the Event
    pass
There are a lot of outdated results here. Not sure when it was added, but Django's QuerySet.iterator() method uses a server-side cursor with a chunk size, to stream results from the database. So if you're using postgres, this should now be handled out of the box for you.
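For example (if I remember correctly, the chunk_size argument arrived around Django 2.0 and defaults to 2000):

# streams rows through a server-side cursor on PostgreSQL, 2000 rows at a time
for event in Event.objects.iterator(chunk_size=2000):
    print event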
I usually use a raw MySQL query instead of the Django ORM for this kind of task.
MySQL supports a streaming mode, so we can loop through all records safely and quickly without running out of memory.
import MySQLdb
import MySQLdb.cursors

db_config = {}  # configure your db here
connection = MySQLdb.connect(
    host=db_config['HOST'], user=db_config['USER'],
    port=int(db_config['PORT']), passwd=db_config['PASSWORD'], db=db_config['NAME'])
cursor = MySQLdb.cursors.SSCursor(connection)  # SSCursor for streaming mode
cursor.execute("SELECT * FROM event")
while True:
    record = cursor.fetchone()
    if record is None:
        break
    # Do something with record here

cursor.close()
connection.close()
Ref:
Retrieving million of rows from MySQL
How does MySQL result set streaming perform vs fetching the whole JDBC ResultSet at once

Spaced repetition (SRS) for learning

A client has asked me to add a simple spaced repetition algorithm (SRS) to an online-based learning site. But before throwing myself into it, I'd like to discuss it with the community.
Basically the site asks the user a bunch of questions (by automatically selecting, say, 10 out of 100 total questions from a database), and the user gives either a correct or incorrect answer. The user's results are then stored in a database, for instance:
userid  questionid  correctlyanswered  dateanswered
1       123         0 (no)             2010-01-01 10:00
1       124         1 (yes)            2010-01-01 11:00
1       125         1 (yes)            2010-01-01 12:00
Now, to maximize a user's ability to learn all the answers, I should apply an SRS algorithm so that the next time the user takes the quiz, questions answered incorrectly appear more often than questions answered correctly. Also, questions that were previously answered incorrectly but have recently been answered correctly often should appear less often.
Has anyone implemented something like this before? Any tips or suggestions?
These are the best links I've found:
http://en.wikipedia.org/wiki/Spaced_repetition
http://www.mnemosyne-proj.org/principles.php
http://www.supermemo.com/english/ol/sm2.htm
What you want to do is to have a number X_i for all questions i. You can normalize these numbers (make their sum 1) and pick one at random with the corresponding probability.
If N is the number of different questions and M is the average number of times each question has been answered, then you can compute X in M*N time like this:
Create an array X[N] initialized to 0.
Run through the data, and every time you see question i answered wrong, increase X[i] by f(t), where t is the answering time and f is an increasing function.
Because f is increasing, a question answered wrong a long time ago has less impact than one answered wrong yesterday. You can experiment with different f to get a nice behaviour.
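A minimal sketch of that scheme in Python; the concrete f (an exponential recency weight), the half-life and the data layout are my own assumptions, only meant to illustrate the weight-then-normalize-then-sample steps:

import random
import time

def question_weights(history, now=None, halflife_days=30.0):
    """history: iterable of (question_id, correct, answered_at_unix_seconds)."""
    now = now or time.time()
    weights = {}
    for qid, correct, answered_at in history:
        if not correct:
            age_days = (now - answered_at) / 86400.0
            # f(t): recent wrong answers count more than old ones
            weights[qid] = weights.get(qid, 0.0) + 0.5 ** (age_days / halflife_days)
    return weights

def pick_question(weights):
    """Normalize the weights and draw one question id at random."""
    if not weights:
        return None
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for qid, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return qid
    return next(iter(weights))  # floating-point edge case fallback

One caveat, echoed by the update further below: questions with no wrong answers end up with weight zero here, so in practice you would give every question a small base weight (or score on correct answers instead).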
The smarter way
A faster way is not to generate X[] every time you choose questions, but save it in a database table.
You won't be able to apply f with this solution. Instead just add 1 every time the question is answered wrongly, and then run through the table regularly - say every midnight - and multiply all X[i] by a constant - say 0.9.
Update: Actually you should base your data on correct answers, not wrong ones. Otherwise, questions that have not been answered at all for a long time will have a smaller chance of being chosen; it should be the opposite.
Anki is an open source program implementing spaced repetition.
Being open source, you can browse the source for libanki, a spaced repetition library for Anki.
As of January 2013, Anki version 2 sources can be browsed here.
The sources are in Python, the executable pseudo code language.
Reading the source to understand the algorithm may be feasible. The data model is defined using SQLAlchemy, the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.
Here is a spaced repetition algorithm that is well documented and easy to understand.
Features
Introduces sub-decks for efficiently learning large decks (super useful!)
Intuitive variable names and algorithm parameters. Fully open-source with human-readable examples.
Easily configurable parameters to accommodate different users' memorization abilities.
Computationally cheap to compute the next card. No need to run a computation on every card in the deck.
https://github.com/Jakobovski/SaneMemo.
Disclaimer: I am the author of SaneMemo.
import random
import datetime
# The number of times needed for the user to get the card correct(EASY) consecutively before removing the card from
# the current sub_deck.
CONSECUTIVE_CORRECT_TO_REMOVE_FROM_SUBDECK_WHEN_KNOWN = 2
CONSECUTIVE_CORRECT_TO_REMOVE_FROM_SUBDECK_WHEN_WILL_FORGET = 3
# The number of cards in the sub-deck
SUBDECK_SIZE = 15
REMINDER_RATE = 1.6
class Deck(object):
    def __init__(self):
        self.cards = []
        # Used to make sure we don't display the same card twice
        self.last_card = None

    def add_card(self, card):
        self.cards.append(card)

    def get_next_card(self):
        self.cards = sorted(self.cards)  # Sorted by next_practice_time
        sub_deck = self.cards[0:min(SUBDECK_SIZE, len(self.cards))]
        card = random.choice(sub_deck)
        # In case card == last card, select again. We don't want to show the same card two times in a row.
        while card == self.last_card:
            card = random.choice(sub_deck)
        self.last_card = card
        return card
class Card(object):
    def __init__(self, front, back):
        self.front = front
        self.back = back
        self.next_practice_time = datetime.datetime.utcnow()
        self.consecutive_correct_answer = 0
        self.last_time_easy = datetime.datetime.utcnow()

    def update(self, performance_str):
        """ Updates the card after the user has seen it and answered how difficult it was. The user can provide one of
        three options: [KNOW_IT, KNOW_BUT_WILL_FORGET, DONT_KNOW].
        """
        if performance_str == "KNOW_IT":
            self.consecutive_correct_answer += 1
            if self.consecutive_correct_answer >= CONSECUTIVE_CORRECT_TO_REMOVE_FROM_SUBDECK_WHEN_KNOWN:
                days_since_last_easy = (datetime.datetime.utcnow() - self.last_time_easy).days
                days_to_next_review = (days_since_last_easy + 2) * REMINDER_RATE
                self.next_practice_time = datetime.datetime.utcnow() + datetime.timedelta(days=days_to_next_review)
                self.last_time_easy = datetime.datetime.utcnow()
            else:
                self.next_practice_time = datetime.datetime.utcnow()
        elif performance_str == "KNOW_BUT_WILL_FORGET":
            self.consecutive_correct_answer += 1
            if self.consecutive_correct_answer >= CONSECUTIVE_CORRECT_TO_REMOVE_FROM_SUBDECK_WHEN_WILL_FORGET:
                self.next_practice_time = datetime.datetime.utcnow() + datetime.timedelta(days=1)
            else:
                self.next_practice_time = datetime.datetime.utcnow()
        elif performance_str == "DONT_KNOW":
            self.consecutive_correct_answer = 0
            self.next_practice_time = datetime.datetime.utcnow()

    def __cmp__(self, other):
        """Comparator for sorting cards by next_practice_time"""
        if hasattr(other, 'next_practice_time'):
            return cmp(self.next_practice_time, other.next_practice_time)