Django: traceback on SQL queries

I want a traceback from every query executed during a request, so I can find where they're coming from and reduce the count/complexity.
I'm using this excellent snippet of middleware to list and time queries, but I don't know where in the code they're coming from.
I've poked around in django/db/models/sql/compiler.py, but apart from getting a local version of Django and editing that code I can't see how to latch on to queries. Is there a signal I can use? It seems like there isn't a signal fired on every query.
Is it possible to specify the default Manager?
(I know about django-debug-toolbar; I'm hoping for a solution that doesn't use it.)

An ugly but effective solution (it prints a trace for every query and only requires one edit) is to add the following to the bottom of settings.py:
import django.db.backends.utils as bakutils
import traceback

bakutils.CursorDebugWrapper_orig = bakutils.CursorWrapper

def print_stack_in_project():
    stack = traceback.extract_stack()
    for path, lineno, func, line in stack:
        # Skip frames from installed libraries and from this file.
        if 'lib/python' in path or 'settings.py' in path:
            continue
        print 'File "%s", line %d, in %s' % (path, lineno, func)
        print '  %s' % line

class CursorDebugWrapperLoud(bakutils.CursorDebugWrapper_orig):
    def execute(self, sql, params=None):
        try:
            return super(CursorDebugWrapperLoud, self).execute(sql, params)
        finally:
            print_stack_in_project()
            print sql
            print '\n\n\n'

    def executemany(self, sql, param_list):
        try:
            return super(CursorDebugWrapperLoud, self).executemany(sql, param_list)
        finally:
            print_stack_in_project()
            print sql
            print '\n\n\n'

bakutils.CursorDebugWrapper = CursorDebugWrapperLoud
Still not sure if there is a more elegant way of doing this?
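For anyone on newer Django: since 2.0 there is a documented connection.execute_wrapper() hook that does the same thing without monkey-patching. A minimal sketch (not part of the original answer):

import traceback
from django.db import connection

def tracing_wrapper(execute, sql, params, many, context):
    # Print the calling stack and the SQL for every query.
    traceback.print_stack()
    print(sql)
    return execute(sql, params, many, context)

def query_tracing_middleware(get_response):
    # Wrap each request so every query it triggers is traced.
    def middleware(request):
        with connection.execute_wrapper(tracing_wrapper):
            return get_response(request)
    return middleware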

Django debug toolbar will tell you what you want with spectacular awesomeness.

How to close a QInputDialog after a defined amount of time

I'm currently working on an application that runs in the background and sometimes creates an input dialog for the user to answer. If the user doesn't interact, I'd like to close the dialog after 30 seconds. I made a QThread that acts as a timer, and its "finished" signal should close the dialog. Unfortunately, I cannot find a way to close it.
At this point I'm pretty much lost. I'm completely new to QThread and a beginner with PyQt5.
Here is a simplified version of the code (we are inside a class running a UI):
def Myfunction(self, q):
    # q : [q1,q2,q3]
    self.popup = counter_thread()
    self.popup.start()
    self.dial = QInputDialog
    self.popup.finished.connect(self.dial.close)
    text, ok = self.dial.getText(self, 'Time to compute !', '%s %s %s = ?' % (q[0], q[2], q[1]))
    #[...]
I tried ".close()" and others, but I got this error message:
TypeError: close(self): first argument of unbound method must have type 'QWidget'
I did it in a separate function but got the same problem...
You cannot close it because the self.dial you created is just an alias (another reference) to a class, not an instance.
Also, getText() is a static function that internally creates the dialog instance, and you have no access to it.
While it is possible to get that dialog through some tricks (installing an event filter on the QApplication), there's no point in complicating things: instead of using the static function, create a full instance of QInputDialog.
def Myfunction(self, q):
    # q : [q1,q2,q3]
    self.popup = counter_thread()
    self.dial = QInputDialog(self)  # <- this is an instance!
    self.dial.setInputMode(QInputDialog.TextInput)
    self.dial.setWindowTitle('Time to compute !')
    self.dial.setLabelText('%s %s %s = ?' % (q[0], q[2], q[1]))
    self.popup.finished.connect(self.dial.reject)
    self.popup.start()
    if self.dial.exec():
        text = self.dial.textValue()
Note that I started the thread just before showing the dialog, in the rare case that it might return immediately, and also because, for the same reason, the signal should be connected before starting it.
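The question never shows counter_thread; for completeness, a minimal sketch of what it might look like (hypothetical, a QThread that simply sleeps for 30 seconds, so that "finished" fires when run() returns):

from PyQt5.QtCore import QThread

class counter_thread(QThread):
    def run(self):
        # Sleep 30 seconds; QThread emits "finished" when run() returns.
        self.sleep(30)

(A single-shot QTimer would arguably be even simpler than a thread here.)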

Is there a way I can access the attribute in an AttributeError without parsing the string?

My Python version is 3.6.
I am trying to give a more helpful message on attribute errors in a CLI framework that I am building. I have the following code:
print(cli_config.test_exension_config.input_menu)
Which produces the error AttributeError: 'CLIConfig' object has no attribute 'test_exension_config'
Perfect. However, now I want to give a recommendation on the closest attribute match, as the attributes are dynamically created from a YAML file:
test_extension:
  input_menu: # "InputMenuConfig_instantiation_test"
    var:
So the closest attribute match would be test_extension_config.
Below is where I catch the error and am about to give a recommendation:
def __getattribute__(self, name) -> Any:
    try:
        return super().__getattribute__(name)
    except AttributeError as ae:
        # chance to handle the attribute differently
        attr = get_erroring_attr(ae)
        closest_match = next(get_close_matches(attr, list(vars(self).keys())))
        if closest_match:  # probably will have some threshold based on 'edit distance'
            return closest_match
        # if not, re-raise the exception
        raise ae
I just want to receive the attribute name.
I can parse the args of the AttributeError, but I wanted to know if there is another way to access the actual attribute name that is erroring without parsing the message.
In other words, in the last code block I have a method get_erroring_attr(ae) that takes in the AttributeError.
What would be the cleanest definition of def get_erroring_attr(ae) that will return the erroring attribute?
UPDATE:
So I did this and it works. I would just like to remove parsing as much as possible.
def __getattribute__(self, name) -> Any:
    try:
        return super().__getattribute__(name)
    except AttributeError as ae:
        # chance to handle the attribute differently
        attr = self.get_erroring_attr(ae)
        closest_match = next(match for match in get_close_matches(attr, list(vars(self).keys())))
        if closest_match:  # probably will have some threshold based on 'edit distance'
            traceback.print_exc()
            print(CLIColors.build_error_string(f"ERROR: Did you mean {CLIColors.build_value_string(closest_match)}?"))
            sys.exit()
        # if not, re-raise the exception
        raise ae

def get_erroring_attr(self, attr_error: AttributeError):
    message = attr_error.args[0]
    _, error_attr_name = self.parse_attr_error_message(message)
    return error_attr_name

def parse_attr_error_message(self, attr_err_msg: str):
    parsed_msg = re.findall("'([^']*)'", attr_err_msg)
    return parsed_msg
Which produces the printed traceback followed by the suggestion: ERROR: Did you mean 'test_extension_config'?
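Worth noting: parsing is hard to avoid on 3.6, but from Python 3.10 onward AttributeError carries the failing attribute name directly on its name attribute. A version-tolerant sketch of get_erroring_attr:

import re

def get_erroring_attr(attr_error: AttributeError):
    # Python 3.10+ sets .name on AttributeError; older versions only
    # have the formatted message, so fall back to parsing it.
    name = getattr(attr_error, 'name', None)
    if name is not None:
        return name
    matches = re.findall(r"'([^']*)'", attr_error.args[0])
    return matches[-1] if matches else None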

postgres LockError... how to investigate

Hi, I am using gunicorn with nginx and a PostgreSQL database to run my web app. I recently changed my gunicorn command from
gunicorn run:app -w 4 -b 0.0.0.0:8080 --workers=1 --timeout=300
to
gunicorn run:app -w 4 -b 0.0.0.0:8080 --workers=2 --timeout=300
using 2 workers. Now I am getting error messages like
File "/usr/local/lib/python2.7/dist-packages/flask_sqlalchemy/__init__.py", line 194, in session_signal_after_commit
models_committed.send(session.app, changes=list(d.values()))
File "/usr/local/lib/python2.7/dist-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/usr/local/lib/python2.7/dist-packages/flask_whooshalchemy.py", line 265, in _after_flush
with index.writer() as writer:
File "/usr/local/lib/python2.7/dist-packages/whoosh/index.py", line 464, in writer
return SegmentWriter(self, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/whoosh/writing.py", line 502, in __init__
raise LockError
LockError
I can't really do much with these error messages, but they seem to be linked to the Whoosh search index that I have on the User table in my database model:
import sys
if sys.version_info >= (3, 0):
    enable_search = False
else:
    enable_search = True
    import flask.ext.whooshalchemy as whooshalchemy

class User(db.Model):
    __searchable__ = ['username', 'email', 'position', 'institute', 'id']  # these fields will be indexed by whoosh
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(100), index=True)
    ...

    def __repr__(self):
        return '<User %r>' % (self.username)

if enable_search:
    whooshalchemy.whoosh_index(app, User)
Any ideas how to investigate this? I thought Postgres allows parallel access, and hence lock errors should not happen? When I used only 1 worker they did not happen, so it is definitely caused by having multiple workers...
Any help is appreciated.
Thanks,
Carl
This has nothing to do with PostgreSQL. Whoosh holds file locks for writing and it's failing on the last line of this code...
class SegmentWriter(IndexWriter):
    def __init__(self, ix, poolclass=None, timeout=0.0, delay=0.1, _lk=True,
                 limitmb=128, docbase=0, codec=None, compound=True, **kwargs):
        # Lock the index
        self.writelock = None
        if _lk:
            self.writelock = ix.lock("WRITELOCK")
            if not try_for(self.writelock.acquire, timeout=timeout,
                           delay=delay):
                raise LockError
Note that the lock timeout here defaults to 0.0 seconds (with a retry delay of 0.1 seconds), so if the writer cannot acquire the lock more or less immediately, it fails. You increased your workers, so now you have contention on the lock. From the following docs...
https://whoosh.readthedocs.org/en/latest/threads.html
Locking
Only one thread/process can write to an index at a time. When you open a writer, it locks the index. If you try to open a writer on the same index in another thread/process, it will raise whoosh.store.LockError.
In a multi-threaded or multi-process environment your code needs to be aware that opening a writer may raise this exception if a writer is already open. Whoosh includes a couple of example implementations (whoosh.writing.AsyncWriter and whoosh.writing.BufferedWriter) of ways to work around the write lock.
While the writer is open and during the commit, the index is still available for reading. Existing readers are unaffected and new readers can open the current index normally.
You can find examples of how to use Whoosh concurrently:
Buffered: https://whoosh.readthedocs.org/en/latest/api/writing.html#whoosh.writing.BufferedWriter
Async: https://whoosh.readthedocs.org/en/latest/api/writing.html#whoosh.writing.AsyncWriter
I'd try the buffered version first since batching writes is almost always faster.
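For a rough idea of what the lock-friendly API looks like, here is a sketch using raw Whoosh (the index directory name is hypothetical, and flask-whooshalchemy opens its writer internally, so actually wiring this in would mean patching or replacing the extension's _after_flush hook):

from whoosh.index import open_dir
from whoosh.writing import AsyncWriter

ix = open_dir('whoosh_index')  # hypothetical index directory

# AsyncWriter tries to take the write lock; if the index is already
# locked it buffers the changes and commits them from a background
# thread instead of raising LockError.
writer = AsyncWriter(ix)
writer.add_document(username=u'carl', email=u'carl@example.com')
writer.commit()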

how to test whether program exits or not

I want to test the following class:
from random import randint

class End(object):
    def __init__(self):
        self.quips = ['You dead', 'You broke everything you can', 'You turn you head off']

    def play(self):
        print self.quips[randint(0, len(self.quips)-1)]
        exit(1)
I want to test it with nosetests so I can verify that the class exits correctly with code 1. I tried different variants, but nosetests returns an error like:
File "C:\Python27\lib\site.py", line 372, in __call__
raise SystemExit(code)
SystemExit: 1
----------------------------------------------------------------------
Ran 1 test in 5.297s
FAILED (errors=1)
Of course I can assume that it exits, but I want the test to return an OK status, not an error. Sorry if my question is stupid; I'm very new to Python and this is my very first time trying to test something.
I would recommend using the assertRaises context manager. Here is an example test that ensures that the play() method exits:
import unittest

import end

class TestEnd(unittest.TestCase):
    def testPlayExits(self):
        """Test that the play method exits."""
        ender = end.End()
        with self.assertRaises(SystemExit) as exitexception:
            ender.play()
        # Check for the requested exit code; the raised SystemExit is
        # stored on the context manager's .exception attribute.
        self.assertEqual(exitexception.exception.code, 1)
As you can see in the traceback, sys.exit()* raises an exception called SystemExit when you call it. So, that's what you want to test for with nose's assert_raises(). If you are writing tests with unittest2.TestCase that's self.assertRaises.
*Actually, you used the plain built-in exit(), but you really should use sys.exit() in a program.
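For reference, roughly the same test with nose's assert_raises, which mirrors the unittest context-manager form (a sketch, assuming the class lives in a module named end, as above):

from nose.tools import assert_raises

import end

def test_play_exits():
    # play() calls exit(1), which raises SystemExit(1).
    with assert_raises(SystemExit) as cm:
        end.End().play()
    assert cm.exception.code == 1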

Django: Mixed managed and raw db commits - TransactionManagementError

I'm writing a bulk insert script using Django's ORM + custom raw SQL. The code has the following outline:
import sys, os
from django.core.management import setup_environ
from my_project import settings
setup_environ(settings)  # must run before the models are imported

from my_project.my_app.models import Model1, Model2
from django.db import transaction
from django.db import connection

@transaction.commit_manually
def process_file(relevant_file):
    data_file = open(relevant_file, 'r')
    cursor = connection.cursor()
    input_row_i = 0
    while 1:
        line = data_file.readline()
        if line == '':
            break
        input_row_i += 1
        if not (input_row_i % 1000):
            transaction.commit()
        if ([some rare condition]):
            model_1 = Model1([Some assignments based on line])
            model_1.save()
        values = [Some values based on line]
        # Note: Django's cursor placeholders are always %s, regardless of type.
        cursor.execute("INSERT INTO table_1 (field_1, field_2, field_3) VALUES (%s, %s, %s)", values)
    data_file.close()
    transaction.commit()
I keep getting the following error:
django.db.transaction.TransactionManagementError: Transaction managed block ended with pending COMMIT/ROLLBACK
How can I solve this?
Use transaction.commit_unless_managed()
I've written a post to explain in greater detail with an example.
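A sketch of where that call would go, for the pre-1.6 Django transaction machinery this question targets (commit_unless_managed() was deprecated in Django 1.6 and removed in later releases):

from django.db import connection, transaction

cursor = connection.cursor()
cursor.execute(
    "INSERT INTO table_1 (field_1, field_2, field_3) VALUES (%s, %s, %s)",
    values,
)
# Commits the raw write, unless the surrounding code is inside a
# managed transaction block (in which case it is a no-op).
transaction.commit_unless_managed()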
I started getting this exception in a similar circumstance. The Django ORM was actually throwing a django.core.exceptions.ValidationError because a date was incorrectly formatted. Because I was using manual transaction processing to batch database writes, the Django transaction-processing code tried to clean up inside the raised ValidationError and threw its own django.db.transaction.TransactionManagementError. Try a try/except around your model_1 code to see if any other exceptions are being thrown. Something like:
try:
    model_1 ...
    model_1.save()
except:
    print "Unexpected error:", sys.exc_info()[0]
    print 'line:', line
to see if there are any problems with the input data or the object-creation code.
You could try a workaround: place a transaction.commit() right after model_1.save(). I think you need to isolate raw and ORM transactions.
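That is, a minimal sketch of the suggested workaround inside the loop (placeholders kept from the question):

if ([some rare condition]):
    model_1 = Model1([Some assignments based on line])
    model_1.save()
    # Commit the ORM write immediately so it is not left pending in
    # the same transaction as the raw cursor INSERT.
    transaction.commit()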