How to load a SQL fixture in Django for the User model?

Does anyone know how to load initial data for auth.User using SQL fixtures?
For my own models, I just have a <modelname>.sql file in a folder named sql, and syncdb does its job beautifully. But I have no clue how to do it for the auth.User model. I've googled it, but with no success.
Thanks in advance,
Aldo

For SQL fixtures, you'd have to specifically have insert statements for the auth tables. You can find the schema of the auth tables with the command python manage.py sql auth.
The much easier and database-independent way (unless you have some additional SQL magic you want to run) is to just make a JSON or YAML fixture file in the fixtures directory of your app, with data like this:
- model: auth.user
  pk: 100000
  fields:
    first_name: Admin
    last_name: User
    username: admin
    password: "<a hashed password>"
You can generate a hashed password quickly in a Django shell:
>>> from django.contrib.auth.models import User
>>> u = User()
>>> u.set_password('newpass')
>>> u.password
'sha1$e2fd5$96edae9adc8870fd87a65c051e7fdace6226b5a8'
This will get loaded whenever you run syncdb.

You are looking for loaddata:
manage.py loaddata path/to/your/fixtureFile
But I think the command can only deal with files in XML, YAML, or JSON format (see the fixtures documentation). To create such files, have a look at the dumpdata command.
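For example, assuming an app named myapp (hypothetical) with a fixtures directory, you could dump the users you already have in a development database into a fixture and load them again elsewhere:
python manage.py dumpdata auth.User --indent 2 > myapp/fixtures/users.json
python manage.py loaddata users.json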

Thanks for your answers. I've found the solution that works for me, which by coincidence was one of Brian's suggestions. Here it is:
First I disconnected the signal that creates the superuser after syncdb, since I have my superuser in my auth_user fixture:
models.py:
from django.db.models import signals
from django.contrib.auth.management import create_superuser
from django.contrib.auth import models as auth_app

signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_app,
    dispatch_uid="django.contrib.auth.management.create_superuser")
Then I created a signal to be called after syncdb:
<myproject>/<myapp>/management/__init__.py
"""
Loads fixtures for files in sql/<modelname>.sql
"""
from django.db.models import get_models, signals
from django.conf import settings
import <myproject>.<myapp>.models as auth_app
def load_fixtures(app, **kwargs):
import MySQLdb
db=MySQLdb.connect(host=settings.DATABASE_HOST or "localhost", \
user=settings.DATABASE_USER,
passwd=settings.DATABASE_PASSWORD, port=int(settings.DATABASE_PORT or 3306))
cursor = db.cursor()
try:
print "Loading fixtures to %s from file %s." % (settings.DATABASE_NAME, \
settings.FIXTURES_FILE)
f = open(settings.FIXTURES_FILE, 'r')
cursor.execute("use %s;" % settings.DATABASE_NAME)
for line in f:
if line.startswith("INSERT"):
try:
cursor.execute(line)
except Exception, strerror:
print "Error on loading fixture:"
print "-- ", strerror
print "-- ", line
print "Fixtures loaded"
except AttributeError:
print "FIXTURES_FILE not found in settings. Please set the FIXTURES_FILE in \
your settings.py"
cursor.close()
db.commit()
db.close()
signals.post_syncdb.connect(load_fixtures, sender=auth_app, \
dispatch_uid = "<myproject>.<myapp>.management.load_fixtures")
And in my settings.py I added FIXTURES_FILE with the path to the .sql file containing the SQL dump.
One thing I still haven't figured out is how to fire this signal only after the tables are created, and not every time syncdb is run. A temporary workaround is to use INSERT IGNORE INTO in my SQL commands.
I know this solution is far from perfect, and criticism/improvements/opinions are very welcome!
Regards,
Aldo

There is a trick for this: (tested on Django 1.3.1)
Solution:
python manage.py startapp auth_fix
mkdir auth_fix/fixtures
python manage.py dumpdata auth > auth_fix/fixtures/initial_data.json
Include auth_fix in INSTALLED_APPS inside settings.py
Next time you run python manage.py syncdb, Django will load the auth fixture automatically.
Explanation:
Just make an empty app to hold the fixtures folder. Leave __init__.py, models.py and views.py in it so that Django recognizes it as an app and not just a folder.
Make the fixtures folder in the app.
python manage.py dumpdata auth will dump the "auth" data in the DB with all the Groups and Users information. The rest of the command simply redirects the output into a file called "initial_data.json" which is the one that Django looks for when you run "syncdb".
Just include auth_fix in INSTALLED_APPS inside settings.py.
This example shows how to do it in JSON but you can basically use the format of your choice.
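As a rough illustration (the field values here are made up, not taken from the original answer), the generated initial_data.json contains entries along these lines:
[
  {
    "model": "auth.user",
    "pk": 1,
    "fields": {
      "username": "admin",
      "password": "sha1$e2fd5$96edae9adc8870fd87a65c051e7fdace6226b5a8",
      "is_staff": true,
      "is_superuser": true,
      "is_active": true,
      "email": "admin@example.com",
      "groups": [],
      "user_permissions": [],
      "last_login": "2012-01-01 00:00:00",
      "date_joined": "2012-01-01 00:00:00"
    }
  }
]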

If you happen to be doing database migrations with south, creating users is very simple.
First, create a bare data migration. It needs to be included in some application. If you have a common app where you place shared code, that would be a good choice. If you have an app where you concentrate user-related code, that would be even better.
$ python manage.py datamigration <some app name> add_users
The pertinent migration code might look something like this:
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from django.contrib.auth.models import User


class Migration(DataMigration):

    users = [
        {
            'username': 'nancy',
            'email': 'nancy@example.com',
            'password': 'nancypassword',
            'staff': True,
            'superuser': True
        },
        {
            'username': 'joe',
            'email': '',
            'password': 'joepassword',
            'staff': True,
            'superuser': False
        },
        {
            'username': 'susan',
            'email': 'susan@example.com',
            'password': 'susanpassword',
            'staff': False,
            'superuser': False
        }
    ]

    def forwards(self, orm):
        """
        Insert User objects
        """
        for i in Migration.users:
            u = User.objects.create_user(i['username'], i['email'], i['password'])
            u.is_staff = i['staff']
            u.is_superuser = i['superuser']
            u.save()

    def backwards(self, orm):
        """
        Delete only these users
        """
        for i in Migration.users:
            User.objects.filter(username=i['username']).delete()
Then simply run the migration and the auth users should be inserted.
$ python manage.py migrate <some app name>

An option is to import your auth.User SQL manually and subsequently dump it out to a standard Django fixture (name it initial_data if you want syncdb to find it). You can generally put this file into any app's fixtures dir since the fixtured data will all be keyed with the proper app_label. Or you can create an empty/dummy app and place it there.
Another option is to override the syncdb command and apply the fixture as you see fit; a sketch of this follows below.
I concur with Felix that there is no non-trivial natural hook in Django for populating contrib apps with SQL.
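A minimal sketch of that second option, assuming syncdb-era Django, an app named myapp (hypothetical) and a fixture called auth_users.json (also hypothetical); placing this in myapp/management/commands/syncdb.py shadows the built-in command:
from django.core.management import call_command
from django.core.management.commands.syncdb import Command as SyncdbCommand


class Command(SyncdbCommand):
    def handle_noargs(self, **options):
        # run the stock syncdb first so all tables exist
        super(Command, self).handle_noargs(**options)
        # then load the auth users fixture (hypothetical fixture name)
        call_command('loaddata', 'auth_users.json')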

I simply added SQL statements into the custom sql file for another model. I chose my Employee model because it depends on auth_user.
The custom SQL I wrote actually reads from my legacy application and pulls user info from it, and uses REPLACE rather than INSERT (I'm using MySQL) so I can run it whenever I want.
And I put that REPLACE...SELECT statement in a procedure so that it's easy to run manually or schedule with cron.

Related

Multiple mongoDB related to same django rest framework project

We have one Django REST framework (DRF) project which should use multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we write to another DB the connection is made, yet the data is stored in the DB that was connected first.
We changed the default DB and everything else we could think of, but nothing changed.
(Note: the solution should work with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
When running connect(), just assign an alias for each of your databases, and then for each Document specify a db_alias parameter in meta that points to a specific database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
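A quick usage sketch (the values are illustrative): with the aliases in place, each document class talks to its own database:
User(name='alice').save()   # written to the connection registered as 'user-db'
Book(name='sicp').save()    # written to the connection registered as 'book-db'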
I guess I finally get what you need.
What you could do is write a really simple middleware that maps your URL schema to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # list all the mongoengine Documents in your project
        import models
        self.documents = [item for item in vars(models).values()
                          if isinstance(item, type) and issubclass(item, Document)]

    def __call__(self, request):
        # depending on the URL, switch documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the response to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying Document classes globally here, so if one request has started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting and the first request will start writing to the other database, which will break your data horribly.
The same Python interpreter is used by two request handlers if you're using multithreading. So with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues, you can use threading.local(). If you prefer a context manager approach, there's also the contextvars module, as sketched below.
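A minimal sketch of that idea (not from the original answer; the names are assumptions): store the chosen alias in a contextvars.ContextVar in the middleware, then route individual queries with mongoengine's switch_db context manager instead of mutating Document._meta globally:
import contextvars
from mongoengine.context_managers import switch_db

# each request (thread or task) sees its own value of this variable
current_db_alias = contextvars.ContextVar('current_db_alias', default='db1')

class DBAliasMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        alias = 'db1' if request.path.startswith('/main/project1') else 'db2'
        current_db_alias.set(alias)
        return self.get_response(request)

# in a view, route a query to the database chosen for this request:
# with switch_db(User, current_db_alias.get()) as RoutedUser:
#     RoutedUser.objects.create(name='example')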

No such table Django Database

So I created a model for storing credentials from Gmail users.
I wanted to make migrations but it says that there is no such table:
django.db.utils.OperationalError: no such table: mainApp_credentialsmodel
My models:
from django.db import models
from django.contrib.auth.models import User
import json

class CredentialsModel(models.Model):
    id = models.ForeignKey(User, primary_key=True, on_delete=models.CASCADE)
    credential = models.CharField(max_length=1000)
Calling that model for checking authorization:
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
store = CredentialsModel.objects.all()
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('mainApp/client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
service = build('gmail', 'v1', http=creds.authorize(Http()))
python manage.py makemigrations
If that error keeps happening, check your migrations folder and the files inside it. Also check if your database is online, in case you are using a hosted database; I had this problem last week, but it was a problem with Azure.
As a last resort I would create the table (model) again, changing the name to something similar, but if you have a significant amount of data in that table, then I think you can't do that.
It looks like your authorization code - including the query on CredentialsModel - is at module level. This means it runs when the module is imported, which happens before the migration has had a chance to run.
You must ensure that any database-accessing code is inside a function or method and is not invoked at import time, for example:
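A minimal sketch of that change (the function name is made up), reusing the snippet from the question: keep only constants at module level and defer the query until something actually calls it:
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'

def get_gmail_service():
    # the query now runs only when this function is called,
    # i.e. after the migrations have created the table
    store = CredentialsModel.objects.all()
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('mainApp/client_secret.json', SCOPES)
        creds = tools.run_flow(flow, store)
    return build('gmail', 'v1', http=creds.authorize(Http()))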

Django 1.11 - how to print SQL statements with substituted variables

This question is different from "log all sql queries".
I tried the logging configurations from the answers there, but they are not working as I would like them to, so please read on.
What I want to do, is to make Django (1.11.x) debug server log SQL queries in such a way, that I can redirect them to *.sql file and immediately execute.
For this I need a SQL statements where all the variables are already substituted, so I DON'T want this:
WHERE some_column in (:arg1, :arg2, ...)
but I want this instead:
WHERE some_column in ('actual_value_1', 'actual_value2', ...)
Can you please help me figure out how to do this?
Please note that I don't want the SQL query to be printed in the browser (by some debug app like django_debug_toolbar) but printed to the console.
Please note that I don't want to type Django QuerySet queries in the console - I want to type a URL in the browser, i.e. make an actual HTTP request to the Django debug server, and see it print the SQL query in such a way that I can execute it later using SQL*Plus or any other console tool.
I think the tool I just made would be perfect for you.
https://pypi.python.org/pypi/django-print-sql
https://github.com/rabbit-aaron/django-print-sql
Install like this:
pip install --upgrade django-print-sql
Use like this in your view function:
from django_print_sql import print_sql

from django.contrib.auth.models import User
from django.views import View

class MyView(View):
    def get(self, request):
        with print_sql():
            User.objects.get(id=request.user.id)
            # and more....
This is a bit of a hack, because we have to strip the default django.db.backends log messages of the time taken and the args. After making these changes you should have a file of pure SQL that you are free to run as you wish...
First you want to set up your logging settings to reference our new log handler
Settings:
LOGGING = {
    'version': 1,
    'handlers': {
        'sql': {
            'level': 'DEBUG',
            'class': 'my_project.loggers.DjangoSQLFileHandler',
            'filename': '/path/to/sql.log'
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['sql'],
            'level': 'DEBUG'
        }
    }
}
Then you need to define your new handler and strip out the unwanted text from the message.
my_project.loggers.py:
from logging import FileHandler

class DjangoSQLFileHandler(FileHandler):
    def emit(self, record):
        # django.db.backends messages look like "(0.002) SELECT ...; args=(...)":
        # keep only the SQL itself, terminated with a semicolon
        record.msg = record.getMessage().split(') ', 1)[1].split(';', 1)[0] + ';'
        record.args = None
        super(DjangoSQLFileHandler, self).emit(record)

Flask + SQLAlchemy + pytest - not rolling back my session

There are several similar questions on stack overflow, and I apologize in advance if I'm breaking etiquette by asking another one, but I just cannot seem to come up with the proper set of incantations to make this work.
I'm trying to use Flask + Flask-SQLAlchemy and then use pytest to manage the session such that when the function-scoped pytest fixture is torn down, the current transaction is rolled back.
Some of the other questions seem to advocate using the db "drop all and create all" pytest fixture at the function scope, but I'm trying to use the joined session, and use rollbacks, since I have a LOT of tests. This would speed it up considerably.
http://alexmic.net/flask-sqlalchemy-pytest/ is where I found the original idea, and Isolating py.test DB sessions in Flask-SQLAlchemy is one of the questions recommending using function-level db re-creation.
I had also seen https://github.com/mitsuhiko/flask-sqlalchemy/pull/249 , but that appears to have been released with flask-sqlalchemy 2.1 (which I am using).
My current (very small, hopefully immediately understandable) repo is here:
https://github.com/hoopes/flask-pytest-example
There are two print statements - the first (in example/__init__.py) should have an Account object, and the second (in test/conftest.py) is where I expect the db to be cleared out after the transaction is rolled back.
If you pip install -r requirements.txt and run py.test -s from the test directory, you should see the two print statements.
I'm about at the end of my rope here - there must be something I'm missing, but for the life of me, I just can't seem to find it.
Help me, SO, you're my only hope!
You might want to give pytest-flask-sqlalchemy-transactions a try. It's a plugin that exposes a db_session fixture that accomplishes what you're looking for: it allows you to run database updates that will get rolled back when the test exits. The plugin is based on Alex Michael's blog post, with some additional support for nested transactions that covers a wider array of use cases. There are also some configuration options for mocking out connectables in your app so you can run arbitrary methods from your codebase, too.
For test_accounts.py, you could do something like this:
from example import db, Account

class TestAccounts(object):

    def test_update_view(self, db_session):
        test_acct = Account(username='abc')
        db_session.add(test_acct)
        db_session.commit()
        resp = self.client.post('/update',
                                data={'a': 1},
                                content_type='application/json')
        assert resp.status_code == 200
The plugin needs access to your database through a _db fixture, but since you already have a db fixture defined in conftest.py, you can set up database access easily:
@pytest.fixture(scope='session')
def _db(db):
    return db
You can find details on setup and installation in the docs. Hope this helps!
I'm also having issues with the rollback, my code can be found here
After reading some documentation, it seems the begin() function should be called on the session.
So in your case I would update the session fixture to this:
@pytest.yield_fixture(scope='function', autouse=True)
def session(db, request):
    """Creates a new database session for a test."""
    db.session.begin()
    yield db.session
    db.session.rollback()
    db.session.remove()
I didn't test this code, but when I try it on my code I get the following error:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
...
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/python.py", line 59, in filter_traceback
INTERNALERROR> return entry.path != cutdir1 and not entry.path.relto(cutdir2)
INTERNALERROR> AttributeError: 'str' object has no attribute 'relto'
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase

# global application scope: create Session class and engine
Session = sessionmaker()
engine = create_engine('postgresql://...')


class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()

        # begin a non-ORM transaction
        self.trans = self.connection.begin()

        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()

        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()

        # return connection to the Engine
        self.connection.close()
The SQLAlchemy docs have a solution for this case ("Joining a Session into an External Transaction"), which is where the snippet above comes from.

How to write a Python script that uses the OpenERP ORM to directly upload to Postgres Database

I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide me a more details on the following:
1) what sys.path's do I need to set
2) what modules do I need to import before importing the "account" module. Currently when I import the "account" module I get the following error:
AssertionError: The report "report.custom" already exists!
3) What is the proper way to get my database cursor. In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
import psycopg2

# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")

# import OpenERP
import openerp

# import the account addon module that contains the tables
# to be populated.
import account

# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"

# get a db connection
conn = psycopg2.connect(conn_string2)

# conn.cursor() will return a cursor object
cursor = conn.cursor()

# and finally use the ORM to insert data into the table.
If you want to do it via web services, then have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib

username = 'admin'  # the user
pwd = 'admin'       # the password of the user
dbname = 'test'     # the database

# OpenERP Common login Service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)

# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')

partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}

# calling the remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
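As a follow-up (not in the original answer), the same proxy can read the record back, which is handy for checking that the upload worked:
# search for the partner we just created and read its name back
ids = sock.execute(dbname, uid, pwd, 'res.partner', 'search', [('name', '=', 'Fabien Pinckaers')])
data = sock.execute(dbname, uid, pwd, 'res.partner', 'read', ids, ['name', 'lang'])
print data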
Alternatively, you can also use the OpenERP client lib.
Example code with the client lib:
import openerplib
connection = openerplib.get_connection(hostname="localhost", database="test", \
login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
Both ways are good, but when you use the client lib the code is shorter and easier to understand, while the XML-RPC proxy involves lower-level calls that you have to handle yourself.
Hope this will help you.
In my view, one should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; it is possible that other modules have inherited the account.tax object and altered its behaviour to fit your business needs.
If you feed data by calling those methods manually without using the OpenERP web services, it's possible you'll get undesired results, unexpected failures, or an inconsistent database state.
You can use ERPpeek to browse data, but I'm not sure if you can really upload data to the DB with it; personally I use/prefer XML-RPC.
Why don't you use the XML-RPC calls of OpenERP?
That way you will not need to import account or openerp, and you still get all the ORM functionality.
You can use a Python library to access the OpenERP server via the XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just use psycopg2:
import psycopg2

conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
# use parameterized queries instead of string interpolation
cur.execute('select * from some_table where id = %s', (some_id,))
cur.execute('insert into some_table (column1, column2) values (%s, %s)', (value1, value2))
conn.commit()
cur.close()
conn.close()
Why do you want to do it like that? You should create a localization module and define the data in XML files. This is the standard way to solve such a problem in OpenERP.
Which country do you want to insert sales taxes for? Please explain more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user