In SQLAlchemy with multiple databases, how to log the db connection creation info at any point in the application? - flask-sqlalchemy

I am new to SQLAlchemy. Can anyone please help me with this?
In SQLAlchemy with multiple databases, how can I log the db connection creation info (active connections) at any point in the application?
With multiple databases, we use __bind_key__ in the model class to specify which configured database that specific table belongs to.
from flask_sqlalchemy import SQLAlchemy
We call SQLAlchemy(), which picks up the SQLALCHEMY_BINDS = { } values.
@FranckGamess
Model class 1:
from model import db

class Pqr_W(db.Model):
    __bind_key__ = 'dbpqr'
    __tablename__ = 'table_name_pqr'
Properties file entry (settings.py):
SQLALCHEMY_BINDS = {
    'dbxyz': 'mysql://root@localhost/xyz_portal',
    'dbpqr': 'oracle+cx_oracle://root@localhost/xyz_portal',
}
Model class 2:
from model import db

class Xyz_W(db.Model):
    __bind_key__ = 'dbxyz'
    __tablename__ = 'table_name_xyz'
model.py:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

Not sure exactly what you want to do, but you can set SQLALCHEMY_ECHO = True, which logs all the statements issued to stderr and can be useful for debugging.
FYI: http://flask-sqlalchemy.pocoo.org/2.3/config/
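If what you want is the connection creation itself rather than the SQL text, SQLAlchemy's pool events can log it per bind. Below is a minimal sketch, assuming Flask-SQLAlchemy 2.x where db.get_engine(app, bind=...) returns the engine for a given bind; attach_connection_logging and the logger name are my own, not library API:
import logging

from sqlalchemy import event

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('db.connections')

def attach_connection_logging(app, db):
    # None covers the default SQLALCHEMY_DATABASE_URI engine (drop it if
    # you only configure binds); the dict keys cover each SQLALCHEMY_BINDS entry.
    for bind in [None] + list(app.config.get('SQLALCHEMY_BINDS', {})):
        engine = db.get_engine(app, bind=bind)

        @event.listens_for(engine, 'connect')
        def on_connect(dbapi_conn, conn_record, bind=bind):
            log.info('new DB connection opened for bind %r', bind)

        @event.listens_for(engine, 'checkout')
        def on_checkout(dbapi_conn, conn_record, conn_proxy, bind=bind):
            log.info('connection checked out of the pool for bind %r', bind)
At any point you can also ask a pool for its current state, e.g. db.get_engine(app, bind='dbpqr').pool.status().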

Related

No such table Django Database

So I created a model for storing credentials from Gmail users.
I wanted to make migrations but it says that there is no such table:
django.db.utils.OperationalError: no such table: mainApp_credentialsmodel
My models:
from django.db import models
from django.contrib.auth.models import User
import json

# Create your models here.
class CredentialsModel(models.Model):
    id = models.ForeignKey(User, primary_key=True, on_delete=models.CASCADE)
    credential = models.CharField(max_length=1000)
Calling that model for checking authorization:
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
store = CredentialsModel.objects.all()
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('mainApp/client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
service = build('gmail', 'v1', http=creds.authorize(Http()))
Run python manage.py makemigrations and then python manage.py migrate so the table actually gets created.
If that error keeps happening, check your migrations folder and the files inside it. Also check whether your database is online, in case you use a hosted database; I had this problem last week and it turned out to be an issue with Azure.
As a last resort I would create the table (model) again under a slightly different name, but if you have a significant amount of data in that table, that is probably not an option.
It looks like your authorization code, including the query on CredentialsModel, is at module level. This means it runs as soon as the module is imported, which happens before the migration has had a chance to run.
Make sure that any database-accessing code is inside a function or method and is not invoked at import time.
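Applied to the code above, the fix is simply to wrap the module-level lines in a function so nothing touches the database at import time (a sketch reusing the asker's names; get_gmail_service is a name I made up):
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'

def get_gmail_service():
    # Runs only when called, long after migrations have been applied,
    # instead of at import time.
    store = CredentialsModel.objects.all()
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('mainApp/client_secret.json', SCOPES)
        creds = tools.run_flow(flow, store)
    return build('gmail', 'v1', http=creds.authorize(Http()))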

How to specify alternate datasource when doing raw SQL queries in Grails 2.3.x?

We have a read-only reporting database clone set up as an alternate datasource in our Grails application, named 'reporting'. This works great when using dynamic finders or criteria, as in MyDomain.reporting.findByXXXX(...).
However, there are some nasty queries that have to be done in raw SQL. Our current way of doing this (in a service) is:
def sessionFactory

public static List getSomeBigNastyData(...)
{
    sessionFactory.currentSession.createSQLQuery(
        """
        Big Ugly Query
        """
    ).list()
}
But this does not go to the reporting database and there doesn't seem to be a way of specifying 'reporting' - is there a way to specify the datasource to execute raw SQL against?
It's possible to inject the dataSource as a bean and use groovy.sql.Sql to run your queries. Below is a simple example of a service that will use your named data source and allow you to run a query against it.
package com.example

import groovy.sql.GroovyRowResult
import groovy.sql.Sql

class ExampleSqlService {
    def dataSource_reporting // your named data source

    List<GroovyRowResult> query(String sql) {
        def db = new Sql(dataSource_reporting)
        return db.rows(sql)
    }
}
Using a service (like the above example) allows you to access it from basically anywhere (controller, service, tag library, domain class, etc.).

How do I seed a flask sql-alchemy database

I am new to Python; I just learnt how to create an API using Flask-Restless and Flask-SQLAlchemy. I would however like to seed the database with random values. How do I achieve this? Please help.
Here is the api code...
import flask
import flask.ext.sqlalchemy
import flask.ext.restless
import datetime

DATABASE = 'sqlite:///tmp/test.db'

# Create the Flask application and the Flask-SQLAlchemy object
app = flask.Flask(__name__)
app.config['DEBUG'] = True
app.config['SQLALCHEMY_DATABASE_URI'] = DATABASE
db = flask.ext.sqlalchemy.SQLAlchemy(app)

# Create Flask-SQLAlchemy models
class TodoItem(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    todo = db.Column(db.Unicode)
    priority = db.Column(db.SmallInteger)
    due_date = db.Column(db.Date)

# Create database tables
db.create_all()

# Create the Flask-Restless API manager
manager = flask.ext.restless.APIManager(app, flask_sqlalchemy_db=db)

# Create API endpoints
manager.create_api(TodoItem, methods=['GET', 'POST', 'DELETE', 'PUT'], results_per_page=20)

# Start the Flask loop
app.run()
I had a similar question, did some research, and found something that worked.
The pattern I am seeing is based on registering a custom Flask CLI command, something like flask seed.
This would look like the following, given your example. First, import these into your api code file (let's say you have it named server.py):
import click
from flask.cli import with_appcontext
(I see you do import flask, but I would add that you should change these to the from flask import what_you_need style. The click import is needed below to declare the command; click ships with Flask.)
Next, create a function that does the seeding for your project:
@click.command(name='seed')
@with_appcontext
def seed():
    """Seed the database."""
    # .save() assumes your model has a save helper; otherwise use
    # db.session.add(...) followed by db.session.commit()
    todo1 = TodoItem(...).save()
    todo2 = TodoItem(...).save()
    todo3 = TodoItem(...).save()
Finally, register this command with your Flask application:
def register_commands(app):
    """Register CLI commands."""
    app.cli.add_command(seed)
After you've configured your application, make sure you call register_commands to register the commands:
register_commands(app)
At this point, you should be able to run flask seed. You can add more commands (maybe a flask reset) using the same pattern.
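For example, a flask reset command could look like this (a sketch of my own; drop_all/create_all are standard Flask-SQLAlchemy calls, but decide for yourself whether you want such a destructive command registered):
@click.command(name='reset')
@with_appcontext
def reset():
    """Drop and recreate all tables (destructive!)."""
    db.drop_all()
    db.create_all()

# registered the same way as seed:
# app.cli.add_command(reset)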
From another newbie: the forgerypy and forgerypy3 libraries are available for this purpose (though they look like they haven't been touched in a while).
A simple example of how to use them, by adding a method to your model:
class TodoItem(db.Model):
    ....

    @staticmethod
    def generate_fake_data(records=10):
        import forgery_py
        from random import randint

        for record in range(records):
            todo = TodoItem(todo=forgery_py.lorem_ipsum.word(),
                            due_date=forgery_py.date.date(),
                            priority=randint(1, 4))
            db.session.add(todo)
        try:
            db.session.commit()
        except:
            db.session.rollback()
You would then call the generate_fake_data method in a shell session.
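For instance (assuming the model is importable without side effects, e.g. the app.run() call is guarded by if __name__ == '__main__':), a shell session would look like:
>>> from server import TodoItem   # "server.py" is the assumed file name
>>> TodoItem.generate_fake_data(records=25)  # inserts 25 fake to-do rows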
And Miguel Grinberg's Flask Web Development (the O'Reilly book, not the blog), chapter 11, is a good resource for this.

Circular import of db reference using Flask-SQLAlchemy and Blueprints

I am using Flask-SQLAlchemy and Blueprints, and I cannot seem to avoid circular imports.
I know I can put the imports inside functions and make it work, but that feels nasty, so I'd like to confirm with the community whether there is a better way to do this.
The problem is that I have a module (blueprints.py) where I declare the database and import the blueprints, but those blueprints need to import the database declaration at the same time.
This is the code (excerpt of the important parts):
application/apps/people/views.py
from application.blueprints import db

people = Blueprint('people', __name__,
                   template_folder='templates',
                   static_folder='static')

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)

@people.route('/all')
def all():
    users = User.query.all()
application/blueprints.py
from application.apps.people.views import people

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app)
app.register_blueprint(people, url_prefix='/people')
I have read the documentation and the questions I found on this topic, but I still cannot find the answer I am looking for.
I found this chapter (https://pythonhosted.org/Flask-SQLAlchemy/contexts.html), which suggests putting the initialization code inside a function, but the circular import persists.
Edit
I fixed the problem with the help of the Application Factory pattern: I declare the database in a third module and configure it later from the module in which I start the application.
This results in the following imports:
database.py → app.py
views.py → app.py
database.py → views.py
There is no circular import. It is important to make sure that the application is created and configured before performing any database operations.
Here is an example application:
app.py
from database import db
from flask import Flask
import os.path
from views import User
from views import people

def create_app():
    app = Flask(__name__)
    app.config['DEBUG'] = True
    app.config['SQLALCHEMY_DATABASE_URI'] = "sqlite:////tmp/test.db"
    db.init_app(app)
    app.register_blueprint(people, url_prefix='')
    return app

def setup_database(app):
    with app.app_context():
        db.create_all()
        user = User()
        user.username = "Tom"
        db.session.add(user)
        db.session.commit()

if __name__ == '__main__':
    app = create_app()
    # Because this is just a demonstration we set up the database like this.
    if not os.path.isfile('/tmp/test.db'):
        setup_database(app)
    app.run()
database.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
views.py
from database import db
from flask.blueprints import Blueprint

people = Blueprint('people', __name__,
                   template_folder='templates',
                   static_folder='static')

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)

@people.route('/')
def test():
    user = User.query.filter_by(username="Tom").first()
    return "Test: Username %s " % user.username
Circular imports in Flask are driving me nuts. From the docs: http://flask.pocoo.org/docs/0.10/patterns/packages/
... Be advised that this is a bad idea in general but here it is actually fine.
It is not fine. It is deeply wrong. I also consider putting any code in __init__.py a bad practice; it makes the application harder to scale. Blueprints are a way to alleviate the problem with circular imports, and I think Flask needs more of this.
I know this has been solved already, but I solved it in a slightly different way and wanted to answer in case it helps others.
Originally, my application code (e.g. my_app.py) had this line:
db = SQLAlchemy(app)
And so in my models.py, I had:
from my_app import db

class MyModel(db.Model):
    # etc
hence the circular references when using MyModel in my_app.
I updated this so that models.py had this:
# models.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # note no "app" here, and no import from my_app above

class MyModel(db.Model):
    # etc as before
and then in my_app.py:
# my_app.py
from models import db, MyModel # importing db is new
# ...
db.init_app(app) # call init_app here rather than initialising db here
Serge, move the model definitions out into a separate file called models.py, and register the blueprint in the package's __init__.py file.
You've got a circular import because the blueprint file tries to import people from views.py, while views.py tries to import db from blueprints.py, and all of this happens at the top level of the modules.
You can make your project structure like this:
app/
    __init__.py          # registering of blueprints and db initialization
    mods/
        __init__.py
        people/
            __init__.py  # definition of the module (blueprint)
            views.py     # from .models import User
            models.py    # from app import db
UPD:
To spell it out:
people/__init__.py --> mod = Module('app.mods.people', 'people')
people/views.py --> @mod.route('/page')
app/__init__.py --> from app.mods import people; from app.mods.people import views; app.register_blueprint(people.mod, **options);
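Note that Module is the pre-0.7 Flask API; with modern blueprints the same wiring would be a sketch like this (same structure as above, Blueprint instead of Module):
# app/mods/people/__init__.py
from flask import Blueprint

mod = Blueprint('people', __name__)

# imported at the bottom, so views.py can do "from app.mods.people import mod"
# without completing the import cycle at definition time
from app.mods.people import views

# app/mods/people/views.py
from app.mods.people import mod

@mod.route('/page')
def page():
    ...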

How to write a Python script that uses the OpenERP ORM to directly upload to Postgres Database

I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide me more details on the following:
1) what sys.path entries do I need to set
2) what modules do I need to import before importing the "account" module. Currently, when I import the "account" module I get the following error:
AssertionError: The report "report.custom" already exists!
3) what the proper way is to get my database cursor. In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
import psycopg2

# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")

# import OpenERP
import openerp

# import the account addon module that contains the tables
# to be populated
import account

# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"

# get a db connection
conn = psycopg2.connect(conn_string2)

# conn.cursor() will return a cursor object
cursor = conn.cursor()

# and finally use the ORM to insert data into the table.
If you want to do it via web services, have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib

username = 'admin'  # the user
pwd = 'admin'       # the password of the user
dbname = 'test'     # the database

# OpenERP Common login service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)

# OpenERP object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')

partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}

# calling the remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
More conveniently, you can also use the OpenERP client lib.
Example code with the client lib:
import openerplib

connection = openerplib.get_connection(hostname="localhost", database="test",
                                       login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
Both ways work, but with the client lib the code is shorter and easier to understand, while the xmlrpc proxy involves lower-level calls that you have to handle yourself.
Hope this helps you.
In my view, one should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; it is quite possible that other modules have inherited the account.tax object and altered its behaviour to match specific business needs.
If you feed data by writing to the tables manually without going through the OpenERP web services, you may well get undesired results, unexpected failures, or an inconsistent database state.
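Following the xmlrpclib example above, creating a tax record would then be a sketch like this (it reuses dbname, uid, pwd and sock from that example; the field values are illustrative, and the exact account.tax fields depend on your OpenERP version and chart of accounts):
tax = {
    'name': 'Sales Tax 7%',  # illustrative values, not a real tax definition
    'amount': 0.07,
    'type': 'percent',
}
tax_id = sock.execute(dbname, uid, pwd, 'account.tax', 'create', tax)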
You can use ERPpeek to browse data, but I am not sure whether you can really upload data to the DB with it; personally I use/prefer XML-RPC.
Why don't you use the XML-RPC interface of OpenERP? It does not require importing account or openerp, and you still get all the ORM functionality.
You can use a Python library to access the OpenERP server via its XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:
import psycopg2

conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values (%s, %s)', (value1, value2))
conn.commit()
cur.close()
conn.close()
Why do you want to do it like that?! You should create a localization module and define the data in XML files. That is the standard way to handle such a problem in OpenERP.
For which country do you want to insert sales taxes? Please explain more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user