How can I dynamically generate pytest parametrized fixtures from imported helper methods?

What I want to achieve is basically this, but with a class-scoped, parametrized fixture.
The problem is that if I import the methods (generate_fixture and inject_fixture) from a helper file, the inject-fixture code seems to get called too late. Here is a complete, working code sample:
# all of the code in one file
import pytest
import pytest_check as check


def generate_fixture(params):
    @pytest.fixture(scope='class', params=params)
    def my_fixture(request, session):
        request.cls.param = request.param
        print(params)
    return my_fixture


def inject_fixture(name, someparam):
    globals()[name] = generate_fixture(someparam)


inject_fixture('myFixture', 'cheese')


@pytest.mark.usefixtures('myFixture')
class TestParkingInRadius:
    def test_custom_fixture(self):
        check.equal(True, self.param, 'Sandwhich')
If I move the generate and inject helpers into their own file (without changing them at all), I get a "fixture not found" error, i.e. if the test file looks like this instead:
import pytest
import pytest_check as check

from .helpers import inject_fixture

inject_fixture('myFixture', 'cheese')


@pytest.mark.usefixtures('myFixture')
class TestParkingInRadius:
    def test_custom_fixture(self):
        check.equal(True, self.param, 'Sandwhich')
Then I get an error at setup: E fixture 'myFixture' not found, followed by a list of available fixtures (which doesn't include the injected fixture).
Could someone help explain why this is happening? Having to define those functions in every single test file sort of defeats the whole point of doing this (keeping things DRY).

I figured out the problem.
Placing the inject_fixture method in a different file changes the global scope of that method. The reason it works inside the same file is that both the caller and the inject_fixture method share the same global scope.
Using the standard-library inspect module to get the caller's global scope solved the issue. Here it is as full, working boilerplate code, including class introspection via the built-in request fixture:
import inspect

import pytest


def generate_fixture(scope, params):
    @pytest.fixture(scope=scope, params=params)
    def my_fixture(request):
        request.cls.param = request.param
        print(request.param)
    return my_fixture


def inject_fixture(name, scope, params):
    """Dynamically inject a fixture at runtime."""
    # we need the caller's global scope for this hack to work, hence the use
    # of the inspect module
    caller_globals = inspect.stack()[1][0].f_globals
    # for an explanation of this trick and why it works, see:
    # https://github.com/pytest-dev/pytest/issues/2424
    caller_globals[name] = generate_fixture(scope, params)
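For completeness, a test file using the helper might then look like this (a minimal sketch, assuming the two functions above live in a helpers.py module next to the tests):

import pytest
import pytest_check as check

from .helpers import inject_fixture

# inject_fixture reads the *caller's* globals, so the generated fixture is
# registered in this module's namespace, where pytest can find it
inject_fixture('myFixture', 'class', ['cheese'])


@pytest.mark.usefixtures('myFixture')
class TestParkingInRadius:
    def test_custom_fixture(self):
        check.equal('cheese', self.param)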

Related

Python architecture, correct way to pass configurable initialization object to project modules

I have a big architecture question about how to pass a set of configurable / replaceable objects to the modules of my project.
For example, the set may be a bot, logger, database, etc.
Currently I'm just importing them, which becomes a big problem when you want to replace them during tests.
Let's say import app.bot will be hard to test and patch.
I have tried multiple solutions but failed with all of them:
Option 1:
Create some base class which accepts the set of such objects (db, bot, etc.).
Every logic class (that needs this set) will inherit from this class.
AFAIK there is a similar approach in the SQLAlchemy ORM.
So the code would look like:
app.config.py:

class Config:
    db: DB
    ...

tests.py:

import app.config
app.config.Config.db = Mock()

create_app.py:

import app.config

def create_app(db):
    app.config.Config.db = db

logic.py:

import app.config

class User(app.config.Config):
    def handle_text(self, text):
        self.db.save_text(text=text)
    ...
The problem with this case is that most likely you can't import it as from app.config import Config, because that would lead to wrong behavior, and this is an implicit restriction.
Option 2
Pass this set via __init__ arguments to every instance.
(It may be a problem if the app has many classes, > 20 like in my app.)

class User:
    def __init__(..., config: ProductionConfig):
        ...
Option 3
In many backend frameworks (Flask, for example) there is a context object.
Well, we can inject our config into this context during initialization.
usage.py:

def my_handler(update, context):
    context.user.handle_text(text=update.text, db=context.db)

The problem with this approach is that we need to pass the context around every time we want to access the database in our logic.
Option 4
Create the config conditionally and import it directly.
This solution may be bad because conditions increase code complexity.
I'm following the rule "namespaces are preferable over conditions".
app.config.py:

db = get_test_db() if DEBUG else get_production_db()
bot = Mock() if DEBUG else get_production_bot()
P.S. This question isn't "opinion-based", because at some point the wrong solution will lead to bad design and bugs.
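For comparison, a bare-bones version of Option 2 might look like the sketch below (Config, DB and Bot are hypothetical stand-ins for the project's real objects):

from dataclasses import dataclass
from unittest.mock import Mock


@dataclass
class Config:
    db: "DB"    # hypothetical database wrapper
    bot: "Bot"  # hypothetical bot client


class User:
    def __init__(self, config: Config):
        self.config = config

    def handle_text(self, text):
        self.config.db.save_text(text=text)


def test_handle_text():
    # tests can swap in mocks without patching module-level imports
    config = Config(db=Mock(), bot=Mock())
    User(config).handle_text('hello')
    config.db.save_text.assert_called_once_with(text='hello')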

Access Pytest result in teardown of Appium test

My conftest file for my Appium/Python test framework looks like this:
@pytest.fixture()
def setup(request):
    desired_caps = {
        ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
And what I am trying to do is be able to access pytest results from within the 'yield' section, and send a pass/fail result to BrowserStack using the command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is, the only method I know to access the pytest results utilizes a hook in conftest, i.e:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate these two? In other words, how can I take the test result from the hook and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!!
Update:
I have achieved this by using a global variable in the hook to save the result, and then in the fixture I use this data to send the corresponding message, but I know this is not ideal. So the question remains, how can I store a variable from the hook in conftest that gets the pytest result, and pass that to the yield section of the setup fixture?
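One way to wire the two together without a global is the pattern from the pytest docs on making test result information available in fixtures: the hook attaches each phase's report to the test item, and the fixture reads it back via request.node after the yield. A rough sketch (the BrowserStack payload is simplified):

import pytest
from appium import webdriver


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # store rep_setup / rep_call / rep_teardown on the item for later use
    setattr(item, "rep_" + report.when, report)


@pytest.fixture()
def setup(request):
    desired_caps = {
        # ... as in the original fixture
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",  # placeholder URL from the question
        desired_capabilities=desired_caps,
    )
    yield request.cls.driver
    # request.node is the test item that the hook above decorated
    status = "passed" if request.node.rep_call.passed else "failed"
    request.cls.driver.execute_script(
        'browserstack_executor: {"action": "setSessionStatus", '
        '"arguments": {"status": "%s"}}' % status
    )
    request.cls.driver.quit()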

How can I override a pyinvoke previously defined rule, but "super()" call the older implementation?

I have a standard_tasks.py module that provides a build task:

@task
def build(ctx):
    do_this()
From my tasks.py I am currently doing:

from standard_tasks import *

_super_build = build.body


@task
def _build(ctx):
    # Here I want to call the older implementation that triggered do_this()
    _super_build(ctx)
    do_that()


build.body = _build
However, this feels clunky. I was wondering if there's a better way?
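A slightly less clunky alternative (a sketch, assuming invoke tasks can be called like plain functions, as they can when one task calls another) is to import the original task under an alias and define a new task with the same name:

from invoke import task
from standard_tasks import build as standard_build


@task(name="build")
def build(ctx):
    standard_build(ctx)  # runs the original implementation (do_this())
    do_that()            # the extra step from the question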

Pytest fixture finalizer TypeError 'NoneType' object is not callable

I have a simple pytest fixture to ensure that the test data file is present (and deleted at the end of the test), but it gives me the error described in the title.
@pytest.fixture
def ensure_test_data_file(request):
    data_file = server.DATA_FILE_NAME
    with open(data_file, 'w') as text_file:
        text_file.write(json.dumps(TEST_DATA))
        text_file.close()
    print(os.path.abspath(data_file))
    request.addfinalizer(os.remove(data_file))
If I remove the finalizer, it works (except that the file is not deleted). Am I doing something wrong?
You need to pass a function object to request.addfinalizer - what you're doing is actually calling os.remove(data_file), which returns None, and thus you're doing request.addfinalizer(None).
Here you'd use request.addfinalizer(lambda: os.remove(data_file)) or request.addfinalizer(functools.partial(os.remove, data_file)) to get a callable with the argument already "applied", but which isn't actually called.
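Applied to the fixture above, the finalizer version would look roughly like this:

@pytest.fixture
def ensure_test_data_file(request):
    data_file = server.DATA_FILE_NAME
    with open(data_file, 'w') as text_file:
        text_file.write(json.dumps(TEST_DATA))
    print(os.path.abspath(data_file))
    # addfinalizer gets a callable; pytest calls it at teardown
    request.addfinalizer(lambda: os.remove(data_file))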
However, I'd recommend using yield in the fixture instead (docs), which makes this much cleaner by letting you "pause" your fixture and run the test in between:
@pytest.fixture
def ensure_test_data_file(request):
    data_file = server.DATA_FILE_NAME
    with open(data_file, 'w') as text_file:
        text_file.write(json.dumps(TEST_DATA))
        text_file.close()
    print(os.path.abspath(data_file))
    yield
    os.remove(data_file)

Flask + SQLAlchemy + pytest - not rolling back my session

There are several similar questions on stack overflow, and I apologize in advance if I'm breaking etiquette by asking another one, but I just cannot seem to come up with the proper set of incantations to make this work.
I'm trying to use Flask + Flask-SQLAlchemy and then use pytest to manage the session such that when the function-scoped pytest fixture is torn down, the current transaction is rolled back.
Some of the other questions seem to advocate using the db "drop all and create all" pytest fixture at the function scope, but I'm trying to use the joined session, and use rollbacks, since I have a LOT of tests. This would speed it up considerably.
http://alexmic.net/flask-sqlalchemy-pytest/ is where I found the original idea, and Isolating py.test DB sessions in Flask-SQLAlchemy is one of the questions recommending using function-level db re-creation.
I had also seen https://github.com/mitsuhiko/flask-sqlalchemy/pull/249 , but that appears to have been released with flask-sqlalchemy 2.1 (which I am using).
My current (very small, hopefully immediately understandable) repo is here:
https://github.com/hoopes/flask-pytest-example
There are two print statements - the first (in example/__init__.py) should have an Account object, and the second (in test/conftest.py) is where I expect the db to be cleared out after the transaction is rolled back.
If you pip install -r requirements.txt and run py.test -s from the test directory, you should see the two print statements.
I'm about at the end of my rope here - there must be something I'm missing, but for the life of me, I just can't seem to find it.
Help me, SO, you're my only hope!
You might want to give pytest-flask-sqlalchemy-transactions a try. It's a plugin that exposes a db_session fixture that accomplishes what you're looking for: it allows you to run database updates that will get rolled back when the test exits. The plugin is based on Alex Michael's blog post, with some additional support for nested transactions that covers a wider array of use cases. There are also some configuration options for mocking out connectables in your app, so you can run arbitrary methods from your codebase, too.
For test_accounts.py, you could do something like this:
from example import db, Account


class TestAccounts(object):
    def test_update_view(self, db_session):
        test_acct = Account(username='abc')
        db_session.add(test_acct)
        db_session.commit()
        resp = self.client.post('/update',
                                data={'a': 1},
                                content_type='application/json')
        assert resp.status_code == 200
The plugin needs access to your database through a _db fixture, but since you already have a db fixture defined in conftest.py, you can set up database access easily:
@pytest.fixture(scope='session')
def _db(db):
    return db
You can find detail on how to setup and installation in the docs. Hope this helps!
I'm also having issues with the rollback, my code can be found here
After reading some documentation, it seems the begin() function should be called on the session.
So in your case I would update the session fixture to this:
@pytest.yield_fixture(scope='function', autouse=True)
def session(db, request):
    """Creates a new database session for a test."""
    db.session.begin()
    yield db.session
    db.session.rollback()
    db.session.remove()
I didn't test this code, but when I try it on my code I get the following error:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
...
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/python.py", line 59, in filter_traceback
INTERNALERROR> return entry.path != cutdir1 and not entry.path.relto(cutdir2)
INTERNALERROR> AttributeError: 'str' object has no attribute 'relto'
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase

# global application scope. create Session class, engine
Session = sessionmaker()

engine = create_engine('postgresql://...')


class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()

        # begin a non-ORM transaction
        self.trans = self.connection.begin()

        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()

        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()

        # return connection to the Engine
        self.connection.close()
The SQLAlchemy docs have a solution for this case; the recipe above ("joining a session into an external transaction") is taken from there.
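For reference, adapting that recipe to an autouse pytest fixture for Flask-SQLAlchemy (roughly what the blog post linked in the question does) might look like the sketch below; it assumes Flask-SQLAlchemy 2.x and a db fixture that provides the extension object:

import pytest


@pytest.fixture(scope='function', autouse=True)
def session(db):
    """Run each test inside an outer transaction that is rolled back."""
    connection = db.engine.connect()
    transaction = connection.begin()

    # bind a fresh scoped session to this connection so commits made by the
    # code under test stay inside the outer transaction
    db.session = db.create_scoped_session(options=dict(bind=connection, binds={}))

    yield db.session

    transaction.rollback()
    connection.close()
    db.session.remove()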