I am converting an API built with Flask to FastAPI, but I don't know how to migrate the following code.
Specifically, I want guidance on how to define db in FastAPI so that I don't have to change my downstream code, which uses db.session.query. In short, I am looking for the FastAPI equivalent of the code below.
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Thanks !
You can follow the official FastAPI guide on ORM best practices. FastAPI is not tied to any ORM or DB client, so you can set up SQLAlchemy the same way as in Flask (and with FastAPI you can also make async DB or ORM calls, which is nice).
Taken from the doc:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = "sqlite:///./sql_app.db"
# SQLALCHEMY_DATABASE_URL = "postgresql://user:password@postgresserver/db"
# The check_same_thread argument is needed only for SQLite; drop it for other databases.
engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
There is also a more complex example project that implements async SQLAlchemy calls and a CRUD interface.
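Since the goal is to keep using session.query(...) downstream, here is a minimal sketch (my own addition, not from the answer above) of the usual FastAPI dependency pattern built on the SessionLocal defined above; Item and the /items route are hypothetical placeholders:
from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

app = FastAPI()

def get_db():
    # Create one session per request and always close it afterwards.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/items")
def read_items(db: Session = Depends(get_db)):
    # db.query(...) plays the role of Flask-SQLAlchemy's db.session.query(...)
    return db.query(Item).all()  # Item is a hypothetical model mapped on Base above
Downstream code that used db.session.query(...) then becomes db.query(...) on the injected session.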
I have a python module (odoo.py) that creates a connection with an Odoo instance.
import odoorpc
from .env import ENV
ODOO = odoorpc.ODOO(ENV["ODOO_HOST"], port=ENV["ODOO_PORT"])
ODOO.login(db=ENV["ODOO_DB"], login=ENV["ODOO_USR"], password=ENV["ODOO_PWD"])
The ODOO constant is imported by several modules containing MongoEngine classes that perform operations on the Odoo server. Here is an example:
from mongoengine import *
from .odoo import ODOO
class Category(DynamicDocument):
    name = StringField(required=True)
    odoo_id = StringField()

    def publish(self):
        _model = ODOO.env["product.category"]
        self.odoo_id = _model.create({"name": self.name})
        self.save()
Problem is that importing odoo.py takes about 10 seconds during application startup and it's REALLY annoying in development.
Let's consider this pseudo main.py:
import sys
from .category import Category
cat = Category(name=sys.argv[1])
if 'test' not in cat.name:
    cat.publish()
In this example, if sys.argv[1] contains the string 'test', the ODOO constant is never used and waiting for its import is pure overhead.
Is there a way to "background" importing of odoo.py and wait for it only when ODOO constant is actually needed?
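There is no answer in this excerpt, but one common workaround (a hedged sketch of my own, not from the thread; the name get_odoo is illustrative) is to defer the connection into a cached factory so the login only happens on first use:
from functools import lru_cache

import odoorpc
from .env import ENV

@lru_cache(maxsize=1)
def get_odoo():
    # The expensive connection/login runs the first time this is called;
    # the same ODOO instance is reused on every later call.
    odoo = odoorpc.ODOO(ENV["ODOO_HOST"], port=ENV["ODOO_PORT"])
    odoo.login(db=ENV["ODOO_DB"], login=ENV["ODOO_USR"], password=ENV["ODOO_PWD"])
    return odoo
Callers then replace ODOO.env[...] with get_odoo().env[...], so the ten-second startup cost is paid only when Odoo is actually touched; truly backgrounding it would mean starting the login in a thread and joining it on first use.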
I am trying to make sense of the following error, which I started getting when I set up my Python code to run on a VM server with Python 3.9.5 installed instead of the 3.8.5 on my desktop. Not sure that matters, but it could be part of the reason.
The error
C:\ProgramData\Miniconda3\lib\site-packages\pandas\io\sql.py:758: UserWarning: pandas only support SQLAlchemy connectable(engine/connection) or
database string URI or sqlite3 DBAPI2 connection
other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
This is within a fairly simple .py file that imports pyodbc and sqlalchemy, for what it's worth. A fairly generic/simple version of the SQL calls that yields the warning is:
myserver_string = "xxxxxxxxx,nnnn"
db_string = "xxxxxx"
cnxn = "Driver={ODBC Driver 17 for SQL Server};Server=tcp:"+myserver_string+";Database="+db_string +";TrustServerCertificate=no;Connection Timeout=600;Authentication=ActiveDirectoryIntegrated;"
def readAnyTable(tablename, date):
    conn = pyodbc.connect(cnxn)
    query_result = pd.read_sql_query(
        '''
        SELECT *
        FROM [{0}].[dbo].[{1}]
        where Asof >= '{2}'
        '''.format(db_string, tablename, date), conn)
    conn.close()
    return query_result
All the examples I have seen using pyodbc in python look fairly similar. Is pyodbc becoming deprecated? Is there a better way to achieve similar results without warning?
Is pyodbc becoming deprecated?
No. For at least the last couple of years pandas' documentation has clearly stated that it wants either a SQLAlchemy Connectable (i.e., an Engine or Connection object) or a SQLite DBAPI connection. (The switch-over to SQLAlchemy was almost universal, but they continued supporting SQLite connections for backwards compatibility.) People have been passing other DBAPI connections (like pyodbc Connection objects) for read operations and pandas hasn't complained … until now.
Is there a better way to achieve similar results without warning?
Yes. You can take your existing ODBC connection string and use it to create a SQLAlchemy Engine object as described in the SQLAlchemy 1.4 documentation:
from sqlalchemy.engine import URL
connection_string = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
from sqlalchemy import create_engine
engine = create_engine(connection_url)
Then pass engine to the pandas methods you need to use.
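For example, the asker's readAnyTable could look roughly like this with the engine (a hedged sketch reusing db_string and engine from above; the bound :date parameter replaces the string-formatted date, while the database and table names still have to be formatted in because identifiers cannot be bound parameters):
import pandas as pd
from sqlalchemy import text

def readAnyTable(tablename, date):
    # Bind the date as a parameter instead of interpolating it into the SQL string.
    sql = text(
        "SELECT * FROM [{0}].[dbo].[{1}] WHERE Asof >= :date".format(db_string, tablename)
    ).bindparams(date=date)
    with engine.connect() as conn:
        return pd.read_sql_query(sql, conn)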
It works for me.
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import pyodbc
import sqlalchemy as sa
import urllib
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
server = 'IP ADDRESS or Server Name'
database = 'AdventureWorks2014'
username = 'xxx'
password = 'xxx'
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "UID="+username+";"
                                 "PWD="+password+";")
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
qry = "SELECT t.[group] as [Region],t.name as [Territory],C.[AccountNumber]"
qry = qry + "FROM [Sales].[Customer] C INNER JOIN [Sales].SalesTerritory t on t.TerritoryID = c.TerritoryID "
qry = qry + "where StoreID is not null and PersonID is not null"
with engine.connect() as con:
rs = con.execute(qry)
for row in rs:
print (row)
You can use the SQL Server name or the IP address, but this requires a basic DNS listing. Most corporate servers should already have this listing though. You can check the server name or IP address using the nslookup command in the command prompt followed by the server name or IP address.
I'm using SQL 2017 on Ubuntu server running on VMWare. I'm connecting with IP Address here as part of a wider "running MSSQL on Ubuntu" project.
If you are connecting with your Windows credentials, you can replace the params with the trusted_connection parameter.
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "trusted_connection=yes")
Since it's only a warning, I suppressed the message using the warnings library. Hope this helps.
import warnings
with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    # your code goes here
My company doesn't use SQLAlchemy, preferring Postgres connections based on psycopg2 and incorporating other features. If you can run your script directly from a command line, then turning warnings off will solve the problem: start it with python3 -W ignore.
The correct way to do the imports for SQLAlchemy 1.4.36 is:
import pandas as pd
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
#...
conn_str = set_db_info() # see above
conn_url = URL.create("mssql+pyodbc", query={"odbc_connect": conn_str})
engine = create_engine(conn_url)
df = pd.read_sql(SQL, engine)
df.head()
I am new to Python; I just learnt how to create an API using Flask-Restless and Flask-SQLAlchemy. However, I would like to seed the database with random values. How do I achieve this? Please help.
Here is the API code...
import flask
import flask.ext.sqlalchemy
import flask.ext.restless
import datetime
DATABASE = 'sqlite:///tmp/test.db'
# Create the Flask application and the Flask-SQLAlchemy object
app = flask.Flask(__name__)
app.config ['DEBUG'] = True
app.config['SQLALCHEMY_DATABASE_URI'] = DATABASE
db = flask.ext.sqlalchemy.SQLAlchemy(app)
#create Flask-SQLAlchemy models
class TodoItem(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    todo = db.Column(db.Unicode)
    priority = db.Column(db.SmallInteger)
    due_date = db.Column(db.Date)
#Create database tables
db.create_all()
#Create Flask restless api manager
manager = flask.ext.restless.APIManager(app, flask_sqlalchemy_db = db)
#Create api end points
manager.create_api(TodoItem, methods = ['GET','POST','DELETE','PUT'], results_per_page = 20)
#Start flask loop
app.run()
I had a similar question and did some research, found something that worked.
The pattern I am seeing is based on registering a Flask CLI custom command, something like: flask seed.
This would look like this given your example. First, import the following into your api code file (let's say you have it named server.py):
import click
from flask.cli import with_appcontext
(I see you do import flask, but I would suggest changing those to from flask import what_you_need.)
Next, create a function that does the seeding for your project:
@click.command(name="seed")
@with_appcontext
def seed():
    """Seed the database."""
    # Assumes a save() helper on the model; otherwise use db.session.add() and db.session.commit().
    todo1 = TodoItem(...).save()
    todo2 = TodoItem(...).save()
    todo3 = TodoItem(...).save()
Finally, register the command with your Flask application:
def register_commands(app):
    """Register CLI commands."""
    app.cli.add_command(seed)
After you've configured your application, make sure you call register_commands to register the commands:
register_commands(app)
At this point, you should be able to run: flask seed. You can add more functions (maybe a flask reset) using the same pattern.
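For instance, a hypothetical flask reset could follow the same pattern (drop_all/create_all here is just one illustrative definition of "reset", not something from the original answer):
@click.command(name="reset")
@with_appcontext
def reset():
    """Drop and recreate all tables."""
    db.drop_all()
    db.create_all()

# and inside register_commands:
#     app.cli.add_command(reset)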
From another newbie, the forgerypy and forgerypy3 libraries are available for this purpose (though they look like they haven't been touched in a bit).
A simple example of how to use them by adding them to your model:
class TodoItem(db.Model):
    ....

    @staticmethod
    def generate_fake_data(records=10):
        import forgery_py
        from random import randint
        for record in range(records):
            todo = TodoItem(todo=forgery_py.lorem_ipsum.word(),
                            due_date=forgery_py.date.date(),
                            priority=randint(1, 4))
            db.session.add(todo)
            try:
                db.session.commit()
            except Exception:
                db.session.rollback()
You would then call the generate_fake_data method in a shell session.
And Miguel Grinberg's Flask Web Development (the O'Reilly book, not blog) chapter 11 is a good resource for this.
I've been going at this for several hours, but I'm afraid I still don't grok the Flask app context and how my app should be implemented with Blueprints.
I've taken a look at this and this and have tried a few different recommendations, but there must be something wrong with my basic approach.
I have one 'main' blueprint set up under the following project structure:
project/
    app/
        main/
            __init__.py
            routes.py
            forms.py
            helper.py
        admin/
        static/
        templates/
        __init__.py
        models.py
app/__init__.py:
from flask import Flask
from config import config
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.bootstrap import Bootstrap
db = SQLAlchemy()
bootstrap = Bootstrap()
def create_app(config_version='default'):
    app = Flask(__name__)
    app.config.from_object(config[config_version])
    bootstrap.init_app(app)
    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)
    db.init_app(app)
    return app
app/main/__init__.py
from flask import Blueprint
main = Blueprint('main',__name__)
from . import routes, helper
app/main/helper.py
#!/usr/bin/env python
from . import main
from ..models import SKU, Category, db
from flask import current_app
def get_categories():
    cat_list = []
    for cat in db.session.query(Category).all():
        cat_list.append((cat.id, cat.category))
    return cat_list
Everything worked fine until I created the get_categories function in helper.py to pull a dynamic list for a select form in app/main/forms.py. When I fire up WSGI, however, I get this error:
RuntimeError: application not registered on db instance and no application bound to current context
It would appear the db referenced in helper.py is not associated with an app context, but when I try to create one within the function, it does not work.
What am I doing wrong and is there a better way to organize helper functions when using Blueprints?
Documentation on database contexts is here.
My first thought was that you weren't calling db.init_app(app), but was corrected in the comments.
In app/main/helper.py, you're importing the database via from ..models import SKU, Category, db. This database object will not have been initialized with the app that you've created.
The way that I've gotten around this is by having another file, a shared.py in the root directory. In that file, create the database object,
from flask.ext.sqlalchemy import SQLAlchemy
db = SQLAlchemy()
In your app/init.py, don't create a new db object. Instead, do
from shared import db
db.init_app(app)
In any place that you want to use the db object, import it from shared.py. This way, the object in the shared file will have been initialized with the app context, and there's no chance of circular imports (which is one of the problems that you can run into with having the db object outside of the app-creating file).
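To make that concrete, here is a minimal sketch (my own, not from the answer; the import paths for create_app and Category are assumed from the layout above) of using the shared db outside a request:
from shared import db
from app import create_app           # the factory defined in app/__init__.py above
from app.models import Category      # model import path assumed from the project layout

app = create_app()
with app.app_context():
    # Inside the context db.session is bound to this app, so helpers like
    # get_categories() work; during a real request Flask pushes this
    # context for you automatically.
    print(db.session.query(Category).all())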
I am using Flask-SQLAlchemy and Blueprints and I cannot avoid circular imports.
I know I can write imports inside functions to make it work, but that feels nasty; I'd like to confirm with the community whether there is a better way to do this.
The problem is that I have a module (blueprints.py) where I declare the database and import the blueprints, but those blueprints need to import the database declaration at the same time.
This is the code (excerpt of the important parts):
application.apps.people.views.py
from application.blueprints import db
people = Blueprint('people', __name__,
                   template_folder='templates',
                   static_folder='static')

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)

@people.route('/all')
def all():
    users = User.query.all()
application.blueprints.py
from application.apps.people.views import people
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app)
app.register_blueprint(people, url_prefix='/people')
I have read the documentation and the questions I found on this topic, but I still cannot find the answer I am looking for.
I have found this chapter (https://pythonhosted.org/Flask-SQLAlchemy/contexts.html) where it suggests putting the initialization code inside a method, but the circular import still persists.
Edit
I fixed the problem using the Application Factory pattern.
I fixed the problem with the help of the Application Factory pattern. I declare the database in a third module and configure it later in the same module in which I start the application.
This results in the following imports:
app.py imports db from database.py
app.py imports User and people from views.py
views.py imports db from database.py
There is no circular import. It is important to make sure that the application was started and configured before calling database operations.
Here is an example application:
app.py
from database import db
from flask import Flask
import os.path
from views import User
from views import people
def create_app():
    app = Flask(__name__)
    app.config['DEBUG'] = True
    app.config['SQLALCHEMY_DATABASE_URI'] = "sqlite:////tmp/test.db"
    db.init_app(app)
    app.register_blueprint(people, url_prefix='')
    return app

def setup_database(app):
    with app.app_context():
        db.create_all()
        user = User()
        user.username = "Tom"
        db.session.add(user)
        db.session.commit()

if __name__ == '__main__':
    app = create_app()
    # Because this is just a demonstration we set up the database like this.
    if not os.path.isfile('/tmp/test.db'):
        setup_database(app)
    app.run()
database.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
views.py
from database import db
from flask.blueprints import Blueprint
people = Blueprint('people', __name__,
                   template_folder='templates',
                   static_folder='static')

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)

@people.route('/')
def test():
    user = User.query.filter_by(username="Tom").first()
    return "Test: Username %s " % user.username
Circular imports in Flask are driving me nuts. From the docs: http://flask.pocoo.org/docs/0.10/patterns/packages/
... Be advised that this is a bad idea in general but here it is actually fine.
It is not fine. It is deeply wrong. I also consider putting any code in __init__.py a bad practice; it makes the application harder to scale. Blueprints are a way to alleviate the problem with circular imports, and I think Flask needs more of this.
I know this has been solved already, but I solved this in a slightly different way and wanted to answer in case it helps others.
Originally, my application code (e.g. my_app.py) had this line:
db = SQLAlchemy(app)
And so in my models.py, I had:
from my_app import db
class MyModel(db.Model):
    # etc
hence the circular references when using MyModel in my_app.
I updated this so that models.py had this:
# models.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy() # note no "app" here, and no import from my_app above
class MyModel(db.Model):
    # etc as before
and then in my_app.py:
# my_app.py
from models import db, MyModel # importing db is new
# ...
db.init_app(app) # call init_app here rather than initialising db here
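Put together, my_app.py ends up looking roughly like this (a hedged sketch under the same assumptions; the Flask app creation and the SQLite URI are illustrative details the answer did not spell out):
# my_app.py
from flask import Flask
from models import db, MyModel

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'

db.init_app(app)        # bind the db created in models.py to this app

with app.app_context():
    db.create_all()     # MyModel is now usable with no circular import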
Serge, move the model definitions out into a separate file called models.py, and register the blueprint in the __init__.py file of the package.
You've got a circular import because the blueprint file tries to import the people reference from views.py, while views.py tries to import db from blueprints.py, and all of this happens at the top level of the modules.
You can make your project structure like this:
app
    __init__.py          # registering of blueprints and db initialization
    mods
        __init__.py
        people
            __init__.py  # definition of module (blueprint)
            views.py     # from .models import User
            models.py    # from app import db
UPD:
For those who still don't see it:
people/__init__.py  -->  mod = Module('app.mods.people', 'people')
people/views.py     -->  @mod.route('/page')
app/__init__.py     -->  from app.mods import people
                         from app.mods.people import views
                         app.register_blueprint(people.mod, **options)
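Module has long been removed from Flask, so here is a hedged Blueprint-based sketch of the same layout (file paths follow the structure above; the url_prefix and the create_app wiring are illustrative, not from the original answer):
app/mods/people/__init__.py
from flask import Blueprint

mod = Blueprint('people', __name__, template_folder='templates')

from . import views  # imported at the bottom so views.py can use mod without a circular import

app/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    db.init_app(app)
    # Import the blueprint only after db is defined, which keeps the cycle from forming.
    from app.mods.people import mod as people_mod
    app.register_blueprint(people_mod, url_prefix='/people')
    return app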