Django 1.11 - how to print SQL statements with substituted variables

This question is different from "log all sql queries".
I tried the logging configurations from the answers there, but they don't work the way I'd like, so please read on.
What I want to do is make the Django (1.11.x) debug server log SQL queries in such a way that I can redirect them to a *.sql file and immediately execute them.
For this I need SQL statements where all the variables are already substituted, so I DON'T want this:
WHERE some_column in (:arg1, :arg2, ...)
but I want this instead:
WHERE some_column in ('actual_value_1', 'actual_value2', ...)
Can you please help me figure out how to do this?
Please note that I don't want the SQL query to be printed in the browser (by some debug app like django_debug_toolbar) but printed to the console.
Please note that I don't want to type Django QuerySet queries into a console - I want to type a URL in the browser, i.e. make an actual HTTP request to the Django debug server, and see it print a SQL query in such a way that I can execute it later using SQL*Plus or any other console tool.

I think the tool I just made would be perfect for you.
https://pypi.python.org/pypi/django-print-sql
https://github.com/rabbit-aaron/django-print-sql
Install like this:
pip install --upgrade django-print-sql
Use it like this in your view:
from django.contrib.auth.models import User
from django.views import View

from django_print_sql import print_sql

class MyView(View):

    def get(self, request):
        with print_sql():
            User.objects.get(id=request.user.id)
            # and more....
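If you'd rather not add a dependency, here is a minimal sketch of the same idea using Django's connection.queries (the middleware and module names here are made up, and whether parameters appear fully substituted in the logged SQL depends on the database backend):

# my_project/middleware.py (hypothetical module)
from django.db import connection

class PrintSQLMiddleware(object):
    """Print every query executed while handling a request (requires DEBUG = True)."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        for query in connection.queries:
            print(query['sql'] + ';')
        return response

Add 'my_project.middleware.PrintSQLMiddleware' to MIDDLEWARE in settings.py, and every request will print its queries to the runserver console.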

This is a bit of a hack, because we have to strip the default django.db.backends log messages of the time taken and the args. After making these changes you should have a file of pure SQL that you are free to run as you wish...
First you want to set up your logging settings to reference the new log handler (remember that django.db.backends only logs queries while DEBUG is True).
Settings:
LOGGING = {
    'version': 1,
    'handlers': {
        'sql': {
            'level': 'DEBUG',
            'class': 'my_project.loggers.DjangoSQLFileHandler',
            'filename': '/path/to/sql.log'
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['sql'],
            'level': 'DEBUG'
        }
    }
}
Then you need to define the new handler and strip the unwanted text from the message.
my_project/loggers.py:
from logging import FileHandler

class DjangoSQLFileHandler(FileHandler):

    def emit(self, record):
        # messages look like "(0.002) SELECT ...; args=(...)" -
        # keep only the SQL, re-terminated with a semicolon
        record.msg = record.getMessage().split(') ', 1)[1].split(';', 1)[0] + ';'
        record.args = None
        super(DjangoSQLFileHandler, self).emit(record)
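For illustration, a quick sanity check of that stripping logic on a made-up message (the exact django.db.backends message format here is an assumption):

msg = "(0.002) SELECT * FROM auth_user WHERE id = 1; args=(1,)"
sql = msg.split(') ', 1)[1].split(';', 1)[0] + ';'
print(sql)  # SELECT * FROM auth_user WHERE id = 1;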

Related

How to Impersonate Impala queries on Superset

I'm setting up Superset (0.36.0) in production mode (with Gunicorn), and I would like to set up impersonation when running Impala queries on my Kerberized cluster, so that each Superset user has the same privileges on tables/databases as they have in Hive/Hue/HDFS. I've tried setting "Impersonate the logged on user" to true in my database config, but it's not changing the user that runs the query; it always uses the celery-worker user.
My database config is:
Extras:
{
  "metadata_params": {},
  "engine_params": {
    "connect_args": {
      "port": 21050,
      "use_ssl": "True",
      "ca_cert": "/path/to/my/cert.pem",
      "auth_mechanism": "GSSAPI"
    }
  },
  "metadata_cache_timeout": {},
  "schemas_allowed_for_csv_upload": []
}
My query summary in Cloudera Manager (5.13) confirms this (screenshot omitted).
How can I enable impersonation correctly in my Superset? Maybe it is related to the impala.doas.user config in the HiveServer2 connection, but I don't know how to configure this properly.
I faced the same issue and was able to get it working for Hive. The issue seems to be in the file hive.py located under the path ${YOUR_INSTALLATION_PATH}/superset/db_engine_specs
If you just comment out line 435, it should work. Unfortunately, I don't understand Python well enough to tell you the exact reason.
I found this by brute force, running the source code and putting in log statements.
if (
    backend_name == "hive"
    # comment this line
    # and "auth" in url.query.keys()
    and impersonate_user is True
    and username is not None
):
    configuration["hive.server2.proxy.user"] = username
return configuration
Alternatively, if you do not want to modify the source code, you can modify the URL while creating the data source in Superset:
hive://<url>/default?auth=NONE (when there is no security)
hive://<url>/default?auth=KERBEROS
hive://<url>/default?auth=LDAP

How to get raw SQL from Sequelize migrations

I have a bunch of Sequelize migration files. They all look like:
module.exports = {
  up: // up migration
  down: // down migration,
};
Is there a programmatic way to get the SQL queries from those files? Using the Node ecosystem is fine. The only requirement is that it happens automatically.
Why do I want to do this?
I want to create SQL migrations from the JavaScript files so I can put them into the entrypoint of my Postgres base image for local development. And I don't want to put Node.js and Sequelize into my image, which depends only on the official Postgres base image from Docker Hub.
If you already have a database with the right schema, all you need is the schema. You can use the pg_dump command to get it (-s dumps the schema only, without data):
pg_dump -U username -d databasename -s > myschema.sql
You can then import this schema:
psql -d database_name -h localhost -U postgres < myschema.sql
I know you're asking how to get this programmatically, but just exposing the raw SQL is valuable. I was able to get the raw SQL (sorting this out led me to this question) by adding the logging key to the options object.
This is my migration:
await queryInterface.addIndex(
  constants.EVENTS_TABLE_NAME,
  ['created_at'],
  { using: 'brin', concurrently: true, logging: console.log }
);
and the output from the migration:
== 20220311183756-create-brin-index-on-created-at: migrating =======
Executing (default): CREATE INDEX CONCURRENTLY "events_created_at" ON "events" USING brin ("created_at")
== 20220311183756-create-brin-index-on-created-at: migrated (0.019s)
Here is an example from their docs:
await sequelize.query('SELECT 1', {
  // A function (or false) for logging your queries.
  // Will get called for every SQL query that gets sent to the server.
  logging: console.log,
  // If plain is true, then sequelize will only return the first
  // record of the result set. In case of false it will return all records.
  plain: false,
  // Set this to true if you don't have a model definition for your query.
  raw: false,
  // The type of query you are executing. The query type affects how
  // results are formatted before they are passed back.
  type: QueryTypes.SELECT
});

Apache Velocity log to console for debugging

Using Apache Velocity in XWiki, how do I create a console.log() like one would in JavaScript? I know the log will probably be server side. I really just want to print the values of variables as the page is rendered, for debugging purposes.
I should add that the page I'm trying to debug is the target of a form post, so it is not rendered by itself and only returns data. It therefore runs in {{velocity output="false"}} mode, so simply printing the variable is not an option.
Since XWiki 6.1 you can use the logging script service to get a standard logger:
$services.logging.getLogger('My script').info('Hello {}', 'world')
See http://extensions.xwiki.org/xwiki/bin/view/Extension/Logging+Module#HGetaLoggerfromscript for more details.
I had trouble figuring out what the value for 'My script' should be. It turns out getLogger() takes a logger name as its input parameter, which can be any of the logger names in WEB-INF/classes/logback.xml.
For example, this works for me: $services.logging.getLogger('org.xwiki').info('Hello {}', 'world')

Trying to get node-webkit console output formatted on terminal

Fairly new to node-webkit, so I'm still figuring out how everything works...
I have some logging in my app:
console.log("Name: %s", this.name);
It outputs to the browser console as expected:
Name: Foo
But in the invoking terminal, instead I get some fairly ugly output:
[7781:1115/085317:INFO:CONSOLE(43)] ""Name: %s" "Foo"", source: /file/path/to/test.js (43)
The numerical output within the brackets might be useful, but I don't know how to interpret it. The source info is fine. But I'd really like the printed string to be printf-style formatted, rather than shown as individual arguments.
So, is there a way to get stdout to be formatted either differently, or to call a custom output function of some sort so I can output the information I want?
I eventually gave up, and wrapped console.log() with:
var util = require('util');

log = function() {
  console.log(util.format.apply(this, arguments));
};
The actual terminal console output is done via RenderFrameHostImpl::OnAddMessageToConsole in Chromium, with the prefix info being generated by LogMessage::Init() in the format:
[pid:MMDD/HHMMSS:severity:filename(line)]
The JavaScript console.log is implemented in console.cc, via the Log() function. The printf-style formatting is done at a higher level, so by the time Log() (or similar) is called, it is only passed a single string.
It's not a satisfying answer, but a tolerable workaround.
I was looking to create a command line interface to go alongside my UI and had a similar problem. Besides the values not being logged as I wanted, I also wanted to get rid of the [pid:MMDD/HHMMSS:severity:filename(line)] output prefix, so I added the following:
console.log = function (d) {
  process.stdout.write(d + '\n');
};
so that console logging was sent straight back to stdout without the extra details. Unfortunately, also a workaround.

How to load sql fixture in Django for User model?

Does anyone know how to load initial data for auth.User using SQL fixtures?
For my models, I just have a <modelname>.sql file in a folder named sql, and syncdb does its job beautifully. But I have no clue how to do it for the auth.User model. I've googled it, with no success.
Thanks in advance,
Aldo
For SQL fixtures, you'd have to specifically have insert statements for the auth tables. You can find the schema of the auth tables with the command python manage.py sql auth.
The much easier and database-independent way (unless you have some additional SQL magic you want to run), is to just make a JSON or YAML fixture file in the fixtures directory of your app with data like this:
- model: auth.user
  pk: 100000
  fields:
    first_name: Admin
    last_name: User
    username: admin
    password: "<a hashed password>"
You can generate a hashed password quickly in a django shell
>>> from django.contrib.auth.models import User
>>> u = User()
>>> u.set_password('newpass')
>>> u.password
'sha1$e2fd5$96edae9adc8870fd87a65c051e7fdace6226b5a8'
This will get loaded whenever you run syncdb (name the file initial_data.yaml or initial_data.json if you want syncdb to pick it up automatically).
You are looking for loaddata:
manage.py loaddata path/to/your/fixtureFile
But I think the command can only deal with files in XML, YAML or JSON format. To create such files, have a look at the dumpdata command.
Thanks for your answers. I've found the solution that works for me, which by coincidence was one of Brian's suggestions. Here it is:
First I disconnected the signal that creates the superuser after syncdb, since I have my superuser in my auth_user fixture:
models.py:
from django.db.models import signals
from django.contrib.auth.management import create_superuser
from django.contrib.auth import models as auth_app
signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_app,
    dispatch_uid="django.contrib.auth.management.create_superuser")
Then I created a signal handler to be called after syncdb:
<myproject>/<myapp>/management/__init__.py:
"""
Loads fixtures for files in sql/<modelname>.sql
"""
from django.db.models import get_models, signals
from django.conf import settings
import <myproject>.<myapp>.models as auth_app
def load_fixtures(app, **kwargs):
import MySQLdb
db=MySQLdb.connect(host=settings.DATABASE_HOST or "localhost", \
user=settings.DATABASE_USER,
passwd=settings.DATABASE_PASSWORD, port=int(settings.DATABASE_PORT or 3306))
cursor = db.cursor()
try:
print "Loading fixtures to %s from file %s." % (settings.DATABASE_NAME, \
settings.FIXTURES_FILE)
f = open(settings.FIXTURES_FILE, 'r')
cursor.execute("use %s;" % settings.DATABASE_NAME)
for line in f:
if line.startswith("INSERT"):
try:
cursor.execute(line)
except Exception, strerror:
print "Error on loading fixture:"
print "-- ", strerror
print "-- ", line
print "Fixtures loaded"
except AttributeError:
print "FIXTURES_FILE not found in settings. Please set the FIXTURES_FILE in \
your settings.py"
cursor.close()
db.commit()
db.close()
signals.post_syncdb.connect(load_fixtures, sender=auth_app, \
dispatch_uid = "<myproject>.<myapp>.management.load_fixtures")
And in my settings.py I added FIXTURES_FILE with the path to my .sql file with the sql dump.
One thing that I still haven't found is how to fire this signal only after the tables are created, and not every time syncdb is run. A temporary workaround is to use INSERT IGNORE INTO in my SQL commands.
I know this solution is far from perfect, and criticism/improvements/opinions are very welcome!
Regards,
Aldo
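A possible refinement of the above, as a sketch: the post_syncdb signal passes its receivers a created_models argument listing the model classes whose tables were just created, so the handler can bail out on every other syncdb run:

def load_fixtures(app, created_models, **kwargs):
    from django.contrib.auth.models import User
    # only load the SQL fixtures on the run that actually created auth_user
    if User not in created_models:
        return
    # ... proceed with the MySQLdb loading shown above ...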
There is a trick for this: (tested on Django 1.3.1)
Solution:
python manage.py startapp auth_fix
mkdir auth_fix/fixtures
python manage.py dumpdata auth > auth_fix/fixtures/initial_data.json
Include auth_fix in INSTALLED_APPS inside settings.py
Next time you run python manage.py syncdb, Django will load the auth fixture automatically.
Explanation:
Just make an empty app to hold the fixtures folder. Leave __init__.py, models.py and views.py in it so that Django recognizes it as an app and not just a folder.
Make the fixtures folder in the app.
python manage.py dumpdata auth will dump the "auth" data in the DB with all the Groups and Users information. The rest of the command simply redirects the output into a file called "initial_data.json" which is the one that Django looks for when you run "syncdb".
Just include auth_fix in INSTALLED_APPS inside settings.py.
This example shows how to do it in JSON but you can basically use the format of your choice.
If you happen to be doing database migrations with south, creating users is very simple.
First, create a bare data migration. It needs to be included in some application. If you have a common app where you place shared code, that would be a good choice. If you have an app where you concentrate user-related code, that would be even better.
$ python manage.py datamigration <some app name> add_users
The pertinent migration code might look something like this:
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from django.contrib.auth.models import User

class Migration(DataMigration):

    users = [
        {
            'username': 'nancy',
            'email': 'nancy@example.com',
            'password': 'nancypassword',
            'staff': True,
            'superuser': True
        },
        {
            'username': 'joe',
            'email': '',
            'password': 'joepassword',
            'staff': True,
            'superuser': False
        },
        {
            'username': 'susan',
            'email': 'susan@example.com',
            'password': 'susanpassword',
            'staff': False,
            'superuser': False
        }
    ]

    def forwards(self, orm):
        """
        Insert User objects
        """
        for i in Migration.users:
            u = User.objects.create_user(i['username'], i['email'], i['password'])
            u.is_staff = i['staff']
            u.is_superuser = i['superuser']
            u.save()

    def backwards(self, orm):
        """
        Delete only these users
        """
        for i in Migration.users:
            User.objects.filter(username=i['username']).delete()
Then simply run the migration and the auth users should be inserted.
$ python manage.py migrate <some app name>
An option is to import your auth.User SQL manually and subsequently dump it out to a standard Django fixture (name it initial_data if you want syncdb to find it). You can generally put this file into any app's fixtures dir since the fixtured data will all be keyed with the proper app_label. Or you can create an empty/dummy app and place it there.
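For example, the dump step might look like this (a sketch; the app and file names are placeholders):
$ python manage.py dumpdata auth.User > some_app/fixtures/initial_data.json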
Another option is to override the syncdb command and apply the fixture in a manner as you see fit.
I concur with Felix that there is no non-trivial natural hook in Django for populating contrib apps with SQL.
I simply added SQL statements into the custom sql file for another model. I chose my Employee model because it depends on auth_user.
The custom SQL I wrote actually reads from my legacy application and pulls user info from it, and uses REPLACE rather than INSERT (I'm using MySQL) so I can run it whenever I want.
And I put that REPLACE...SELECT statement in a procedure so that it's easy to run manually or scheduled with cron.
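For illustration, here is a minimal sketch of that idea, using MySQLdb as in the signal handler above. The legacy table and column names are made up, and it assumes the legacy passwords are already hashed in Django's format:

import MySQLdb

db = MySQLdb.connect(host="localhost", user="dbuser", passwd="secret", db="djangodb")
cursor = db.cursor()
# REPLACE works like INSERT, except rows with a matching unique key are
# overwritten instead of rejected, so it is safe to re-run (MySQL only)
cursor.execute("""
    REPLACE INTO auth_user
        (id, username, password, email, first_name, last_name,
         is_staff, is_active, is_superuser, last_login, date_joined)
    SELECT id, login, password_hash, email, first_name, last_name,
           1, 1, 0, NOW(), NOW()
    FROM legacy_db.users
""")
db.commit()
db.close()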