I have a Flask app with a function that is something like this:
def get_user(user_id):
    user = User.query.filter(User.id == user_id).first()
    data = user.to_dict()
    return data
I am using pytest and I have two fixtures:
@pytest.fixture(scope='module')
def test_app():
    app = create_app()
    app.config.from_object('application.config.TestingConfig')
    with app.app_context():
        yield app
@pytest.fixture(scope='module')
def test_database():
    db.create_all()
    yield db
    db.session.remove()
    db.drop_all()
In my test file, I import the get_user() function, and the test function takes the test_app and test_database fixtures as arguments.
I add a user to the test database successfully, but when I call the function it queries the dev database rather than the testing database.
I don't know how to get the function to connect to the test database. Can someone point me in the right direction?
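One likely cause (a guess, since the full config isn't shown): the module-scoped test_database fixture doesn't depend on test_app, so create_all() and the queries may run outside the test app's context. A minimal sketch of the usual fix, assuming TestingConfig sets SQLALCHEMY_DATABASE_URI to the test database and the import paths match your project:

import pytest
from application import create_app, db  # hypothetical import paths

@pytest.fixture(scope='module')
def test_app():
    app = create_app()
    app.config.from_object('application.config.TestingConfig')
    with app.app_context():
        yield app

@pytest.fixture(scope='module')
def test_database(test_app):
    # Depending on test_app guarantees the tables are created (and all
    # queries run) inside the *test* app's context, so the session is
    # bound to the test database URI rather than the dev one.
    db.create_all()
    yield db
    db.session.remove()
    db.drop_all()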
My conftest file for my Appium/Python test framework looks like this:
@pytest.fixture()
def setup(request):
    desired_caps = {
        ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
What I am trying to do is access the pytest result from within the 'yield' section (the fixture teardown) and send a pass/fail result to BrowserStack using this command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is that the only method I know of for accessing the pytest results uses a hook in conftest, i.e.:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate the two? In other words, how can I take the result from the hook, and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!!
Update:
I have achieved this by using a global variable in the hook to save the result, then using that data in the fixture to send the corresponding message, but I know this is not ideal. So the question remains: how can I store a variable from the conftest hook that captures the pytest result, and pass it to the post-yield section of the setup fixture?
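For reference, the pattern the pytest docs recommend ("Making test result information available in fixtures") is to stash the report on the test item inside the hook, then read it back via request.node in the fixture teardown, with no global needed. A sketch, where the webdriver import and the BrowserStack reason string are assumptions:

# conftest.py
import json
import pytest
from appium import webdriver  # assumed; matches the question's webdriver.Remote usage

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    # attach the report for this phase ("setup"/"call"/"teardown") to the item
    setattr(item, "rep_" + result.when, result)

@pytest.fixture()
def setup(request):
    desired_caps = {
        # ... capabilities elided, as in the question
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    # request.node is the running test item; rep_call was set by the hook above
    if hasattr(request.node, "rep_call"):
        status = "passed" if request.node.rep_call.passed else "failed"
        payload = {"action": "setSessionStatus",
                   "arguments": {"status": status,
                                 "reason": "set from fixture teardown"}}
        request.cls.driver.execute_script("browserstack_executor: " + json.dumps(payload))
    request.cls.driver.quit()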
I need to start sending notifications to a TG group. Before that, I want to run a function continuously that queries an API and stores data in a DB. While this function is running, I want to be able to send notifications if they are available in the DB.
Here's my code:
import telegram
from telegram.ext import Updater, CommandHandler, JobQueue

token = "tokenno:token"
bot = telegram.Bot(token=token)

def start(update, context):
    context.bot.send_message(chat_id=update.message.chat_id,
                             text="You will now receive msgs!")

def callback_minute(context):
    chat_id = context.job.context
    # Check in DB and send if new msgs exist
    send_msgs_tg(context, chat_id)

def callback_msgs():
    fetch_msgs()

def main():
    JobQueue.run_repeating(callback_msgs, interval=5, first=1, context=None)
    updater = Updater(token, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("start", start, pass_job_queue=True))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
This code gives me this error:
TypeError: run_repeating() missing 1 required positional argument: 'callback'
Any help would be greatly appreciated.
There are a few issues with your code, let me try to point them out:
1.
def callback_msgs(): fetch_msgs()
You use callback_msgs as the callback for your job, but job callbacks take exactly one argument of type telegram.ext.CallbackContext.
2.
JobQueue.run_repeating(callback_msgs, interval=5, first=1, context=None)
JobQueue is a class. To use run_repeating, which is an instance method, you'll need an instance of that class. In fact the Updater already builds one for you; it's available as updater.job_queue in your case. So the call should look like this:
updater.job_queue.run_repeating(callback_msgs, interval=5, first=1, context=None)
3.
CommandHandler("start", start, pass_job_queue=True)
This is not strictly speaking an issue, but pass_job_queue=True has no effect at all, because you use use_context=True.
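Putting those fixes together, a corrected main() might look like this (a sketch against the v12-style API the question uses; token, start and fetch_msgs are the question's own names):

from telegram.ext import Updater, CommandHandler

def main():
    updater = Updater(token, use_context=True)

    # job callbacks must accept a single CallbackContext argument
    def callback_msgs(context):
        fetch_msgs()

    # use the JobQueue instance the Updater already built
    updater.job_queue.run_repeating(callback_msgs, interval=5, first=1, context=None)

    updater.dispatcher.add_handler(CommandHandler("start", start))
    updater.start_polling()
    updater.idle()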
Please note that there is a nice tutorial on JobQueue over at the ptb-wiki. There is also an example of how to use it.
Disclaimer: I'm currently the maintainer of python-telegram-bot
There are several similar questions on Stack Overflow, and I apologize in advance if I'm breaking etiquette by asking another one, but I just cannot seem to come up with the proper set of incantations to make this work.
I'm trying to use Flask + Flask-SQLAlchemy and then use pytest to manage the session such that when the function-scoped pytest fixture is torn down, the current transaction is rolled back.
Some of the other questions seem to advocate using the db "drop all and create all" pytest fixture at the function scope, but I'm trying to use the joined session, and use rollbacks, since I have a LOT of tests. This would speed it up considerably.
http://alexmic.net/flask-sqlalchemy-pytest/ is where I found the original idea, and "Isolating py.test DB sessions in Flask-SQLAlchemy" is one of the questions recommending function-level db re-creation.
I had also seen https://github.com/mitsuhiko/flask-sqlalchemy/pull/249, but that appears to have been released with flask-sqlalchemy 2.1 (which I am using).
My current (very small, hopefully immediately understandable) repo is here:
https://github.com/hoopes/flask-pytest-example
There are two print statements - the first (in example/__init__.py) should have an Account object, and the second (in test/conftest.py) is where I expect the db to be cleared out after the transaction is rolled back.
If you pip install -r requirements.txt and run py.test -s from the test directory, you should see the two print statements.
I'm about at the end of my rope here - there must be something I'm missing, but for the life of me, I just can't seem to find it.
Help me, SO, you're my only hope!
You might want to give pytest-flask-sqlalchemy-transactions a try. It's a plugin that exposes a db_session fixture that accomplishes what you're looking for: it allows you to run database updates that will get rolled back when the test exits. The plugin is based on Alex Michael's blog post, with additional support for nested transactions that covers a wider array of use cases. There are also some configuration options for mocking out connectables in your app, so you can run arbitrary methods from your codebase, too.
For test_accounts.py, you could do something like this:
from example import db, Account

class TestAccounts(object):
    def test_update_view(self, db_session):
        test_acct = Account(username='abc')
        db_session.add(test_acct)
        db_session.commit()
        resp = self.client.post('/update',
                                data={'a': 1},
                                content_type='application/json')
        assert resp.status_code == 200
The plugin needs access to your database through a _db fixture, but since you already have a db fixture defined in conftest.py, you can set up database access easily:
@pytest.fixture(scope='session')
def _db(db):
    return db
You can find details on setup and installation in the docs. Hope this helps!
I'm also having issues with the rollback; my code can be found here.
After reading some documentation, it seems the begin() function should be called on the session.
So in your case I would update the session fixture to this:
@pytest.yield_fixture(scope='function', autouse=True)
def session(db, request):
    """Creates a new database session for a test."""
    db.session.begin()
    yield db.session
    db.session.rollback()
    db.session.remove()
I didn't test this code, but when I try it on my code I get the following error:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
...
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/python.py", line 59, in filter_traceback
INTERNALERROR> return entry.path != cutdir1 and not entry.path.relto(cutdir2)
INTERNALERROR> AttributeError: 'str' object has no attribute 'relto'
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase

# global application scope. create Session class, engine
Session = sessionmaker()
engine = create_engine('postgresql://...')

class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()
        # begin a non-ORM transaction
        self.trans = self.connection.begin()
        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()
        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()
        # return connection to the Engine
        self.connection.close()
The SQLAlchemy documentation has a solution for this case; the pattern above comes from its section on joining a Session into an external transaction (aimed at test suites).
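Translated into a pytest fixture for this question's setup, the same external-transaction pattern might look like the following sketch (an assumption-laden adaptation: db is taken to be the Flask-SQLAlchemy object from the question's conftest):

import pytest
from sqlalchemy.orm import sessionmaker

@pytest.fixture(scope='function')
def session(db):
    # open a single connection and a non-ORM (outer) transaction on it
    connection = db.engine.connect()
    trans = connection.begin()
    # bind a session to that one connection
    Session = sessionmaker(bind=connection)
    session = Session()
    yield session
    session.close()
    # rolling back the outer transaction undoes everything,
    # including anything the session committed during the test
    trans.rollback()
    connection.close()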
I am new to Python; I just learned how to create an API using Flask-Restless and Flask-SQLAlchemy. However, I would like to seed the database with random values. How do I achieve this? Please help.
Here is the API code:
import flask
import flask.ext.sqlalchemy
import flask.ext.restless
import datetime

DATABASE = 'sqlite:///tmp/test.db'

# Create the Flask application and the Flask-SQLAlchemy object
app = flask.Flask(__name__)
app.config['DEBUG'] = True
app.config['SQLALCHEMY_DATABASE_URI'] = DATABASE
db = flask.ext.sqlalchemy.SQLAlchemy(app)

# Create Flask-SQLAlchemy models
class TodoItem(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    todo = db.Column(db.Unicode)
    priority = db.Column(db.SmallInteger)
    due_date = db.Column(db.Date)

# Create database tables
db.create_all()

# Create Flask-Restless API manager
manager = flask.ext.restless.APIManager(app, flask_sqlalchemy_db=db)

# Create API endpoints
manager.create_api(TodoItem, methods=['GET', 'POST', 'DELETE', 'PUT'], results_per_page=20)

# Start Flask loop
app.run()
I had a similar question and did some research, found something that worked.
The pattern I am seeing is based on registering a Flask CLI custom command, something like: flask seed.
Given your example, it would look like the following. First, import this into your API code file (let's say you have it named server.py):
from flask.cli import with_appcontext
(I see you do import flask, but I would just add that you should change these to from flask import what_you_need.)
Next, create a function that does the seeding for your project:
import click

@click.command()
@with_appcontext
def seed():
    """Seed the database."""
    # add_command() below needs a click command, hence @click.command();
    # .save() assumes your models provide such a helper
    todo1 = TodoItem(...).save()
    todo2 = TodoItem(...).save()
    todo3 = TodoItem(...).save()
Finally, register the command with your Flask application:
def register_commands(app):
    """Register CLI commands."""
    app.cli.add_command(seed)
After you've configured your application, make sure you call register_commands to register the commands:
register_commands(app)
At this point, you should be able to run: flask seed. You can add more functions (maybe a flask reset) using the same pattern.
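For instance, a filled-in seed command using the TodoItem fields from the question might look like this (a sketch; it writes through db.session directly rather than the hypothetical save() helper above):

import datetime
import click
from flask.cli import with_appcontext

@click.command()
@with_appcontext
def seed():
    """Seed the database with a few TodoItems."""
    for i in range(3):
        db.session.add(TodoItem(todo=u'task %d' % i,
                                priority=i + 1,
                                due_date=datetime.date.today()))
    db.session.commit()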
From another newbie, the forgerypy and forgerypy3 libraries are available for this purpose (though they look like they haven't been touched in a bit).
A simple example of how to use them, by adding a method to your model:
class TodoItem(db.Model):
    ....

    @staticmethod
    def generate_fake_data(records=10):
        import forgery_py
        from random import randint
        for record in range(records):
            todo = TodoItem(todo=forgery_py.lorem_ipsum.word(),
                            due_date=forgery_py.date.date(),
                            priority=randint(1, 4))
            db.session.add(todo)
            try:
                db.session.commit()
            except Exception:
                db.session.rollback()
You would then call the generate_fake_data method in a shell session.
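For example (assuming the API module is importable as server, per the earlier answer's naming):

>>> from server import TodoItem
>>> TodoItem.generate_fake_data(25)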
And Miguel Grinberg's Flask Web Development (the O'Reilly book, not blog) chapter 11 is a good resource for this.
Does anyone know how to load initial data for auth.User using SQL fixtures?
For my own models, I just have a <modelname>.sql file in a folder named sql, and syncdb does its job beautifully. But I have no clue how to do it for the auth.User model. I've googled it, but with no success.
Thanks in advance,
Aldo
For SQL fixtures, you'd have to write INSERT statements specifically for the auth tables. You can find the schema of the auth tables with the command python manage.py sql auth.
The much easier and database-independent way (unless you have some additional SQL magic you want to run), is to just make a JSON or YAML fixture file in the fixtures directory of your app with data like this:
- model: auth.user
  pk: 100000
  fields:
    first_name: Admin
    last_name: User
    username: admin
    password: "<a hashed password>"
You can generate a hashed password quickly in a Django shell:
>>> from django.contrib.auth.models import User
>>> u = User()
>>> u.set_password('newpass')
>>> u.password
'sha1$e2fd5$96edae9adc8870fd87a65c051e7fdace6226b5a8'
This will get loaded whenever you run syncdb, provided the fixture file is named initial_data (otherwise, load it explicitly with manage.py loaddata).
You are looking for loaddata:
manage.py loaddata path/to/your/fixtureFile
But I think the command can only deal with files in XML, YAML, Python or JSON format (see the Django docs on fixtures). To create such files, have a look at the dumpdata command.
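For example, to produce a JSON fixture of your current users that loaddata can re-import (paths are illustrative):

python manage.py dumpdata auth.User --indent 2 > myapp/fixtures/users.json
python manage.py loaddata users.json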
Thanks for your answers. I've found the solution that works for me, and by coincidence it was one of Brian's suggestions. Here it is:
First, I disconnected the signal that creates the superuser after syncdb, since I have my superuser in my auth_user fixture:
models.py:
from django.db.models import signals
from django.contrib.auth.management import create_superuser
from django.contrib.auth import models as auth_app

signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_app,
    dispatch_uid="django.contrib.auth.management.create_superuser")
Then I created a signal handler to be called after syncdb:
<myproject>/<myapp>/management/__init__.py
"""
Loads fixtures for files in sql/<modelname>.sql
"""
from django.db.models import get_models, signals
from django.conf import settings
import <myproject>.<myapp>.models as auth_app
def load_fixtures(app, **kwargs):
import MySQLdb
db=MySQLdb.connect(host=settings.DATABASE_HOST or "localhost", \
user=settings.DATABASE_USER,
passwd=settings.DATABASE_PASSWORD, port=int(settings.DATABASE_PORT or 3306))
cursor = db.cursor()
try:
print "Loading fixtures to %s from file %s." % (settings.DATABASE_NAME, \
settings.FIXTURES_FILE)
f = open(settings.FIXTURES_FILE, 'r')
cursor.execute("use %s;" % settings.DATABASE_NAME)
for line in f:
if line.startswith("INSERT"):
try:
cursor.execute(line)
except Exception, strerror:
print "Error on loading fixture:"
print "-- ", strerror
print "-- ", line
print "Fixtures loaded"
except AttributeError:
print "FIXTURES_FILE not found in settings. Please set the FIXTURES_FILE in \
your settings.py"
cursor.close()
db.commit()
db.close()
signals.post_syncdb.connect(load_fixtures, sender=auth_app, \
dispatch_uid = "<myproject>.<myapp>.management.load_fixtures")
And in my settings.py I added FIXTURES_FILE with the path to my .sql file with the sql dump.
One thing that I still haven't found is how to fire this signal only after the tables are created, and not every time syncdb is run. A temporary workaround is to use INSERT IGNORE INTO in my SQL commands.
I know this solution is far from perfect, and critiques/improvements/opinions are very welcome!
Regards,
Aldo
There is a trick for this: (tested on Django 1.3.1)
Solution:
python manage.py startapp auth_fix
mkdir auth_fix/fixtures
python manage.py dumpdata auth > auth_fix/fixtures/initial_data.json
Include auth_fix in INSTALLED_APPS inside settings.py
Next time you run python manage.py syncdb, Django will load the auth fixture automatically.
Explanation:
Just make an empty app to hold the fixtures folder. Leave __init__.py, models.py and views.py in it so that Django recognizes it as an app and not just a folder.
Make the fixtures folder in the app.
python manage.py dumpdata auth will dump the "auth" data in the DB with all the Groups and Users information. The rest of the command simply redirects the output into a file called "initial_data.json" which is the one that Django looks for when you run "syncdb".
Just include auth_fix in INSTALLED_APPS inside settings.py.
This example shows how to do it in JSON but you can basically use the format of your choice.
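Trimmed down, the resulting initial_data.json is a standard Django fixture whose structure looks something like this (values illustrative):

[
  {
    "model": "auth.user",
    "pk": 1,
    "fields": {
      "username": "admin",
      "is_staff": true,
      "is_superuser": true,
      "password": "<a hashed password>"
    }
  }
]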
If you happen to be doing database migrations with south, creating users is very simple.
First, create a bare data migration. It needs to be included in some application. If you have a common app where you place shared code, that would be a good choice. If you have an app where you concentrate user-related code, that would be even better.
$ python manage.py datamigration <some app name> add_users
The pertinent migration code might look something like this:
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from django.contrib.auth.models import User

class Migration(DataMigration):

    users = [
        {
            'username': 'nancy',
            'email': 'nancy@example.com',
            'password': 'nancypassword',
            'staff': True,
            'superuser': True
        },
        {
            'username': 'joe',
            'email': '',
            'password': 'joepassword',
            'staff': True,
            'superuser': False
        },
        {
            'username': 'susan',
            'email': 'susan@example.com',
            'password': 'susanpassword',
            'staff': False,
            'superuser': False
        }
    ]

    def forwards(self, orm):
        """
        Insert User objects
        """
        for i in Migration.users:
            u = User.objects.create_user(i['username'], i['email'], i['password'])
            u.is_staff = i['staff']
            u.is_superuser = i['superuser']
            u.save()

    def backwards(self, orm):
        """
        Delete only these users
        """
        for i in Migration.users:
            User.objects.filter(username=i['username']).delete()
Then simply run the migration and the auth users should be inserted.
$ python manage.py migrate <some app name>
An option is to import your auth.User SQL manually and subsequently dump it out to a standard Django fixture (name it initial_data if you want syncdb to find it). You can generally put this file into any app's fixtures dir since the fixtured data will all be keyed with the proper app_label. Or you can create an empty/dummy app and place it there.
Another option is to override the syncdb command and apply the fixture in a manner as you see fit.
I concur with Felix that there is no non-trivial natural hook in Django for populating contrib apps with SQL.
I simply added SQL statements into the custom sql file for another model. I chose my Employee model because it depends on auth_user.
The custom SQL I wrote actually reads from my legacy application and pulls user info from it, and uses REPLACE rather than INSERT (I'm using MySQL) so I can run it whenever I want.
And I put that REPLACE...SELECT statement in a procedure so that it's easy to run manually or schedule with cron.