Flask + SQLAlchemy + pytest - not rolling back my session - flask-sqlalchemy

There are several similar questions on stack overflow, and I apologize in advance if I'm breaking etiquette by asking another one, but I just cannot seem to come up with the proper set of incantations to make this work.
I'm trying to use Flask + Flask-SQLAlchemy, and then use pytest to manage the session such that when the function-scoped pytest fixture is torn down, the current transaction is rolled back.
Some of the other questions seem to advocate using the db "drop all and create all" pytest fixture at the function scope, but I'm trying to use the joined session, and use rollbacks, since I have a LOT of tests. This would speed it up considerably.
http://alexmic.net/flask-sqlalchemy-pytest/ is where I found the original idea, and Isolating py.test DB sessions in Flask-SQLAlchemy is one of the questions recommending using function-level db re-creation.
I had also seen https://github.com/mitsuhiko/flask-sqlalchemy/pull/249 , but that appears to have been released with flask-sqlalchemy 2.1 (which I am using).
My current (very small, hopefully immediately understandable) repo is here:
https://github.com/hoopes/flask-pytest-example
There are two print statements - the first (in example/__init__.py) should have an Account object, and the second (in test/conftest.py) is where I expect the db to be cleared out after the transaction is rolled back.
If you pip install -r requirements.txt and run py.test -s from the test directory, you should see the two print statements.
I'm about at the end of my rope here - there must be something I'm missing, but for the life of me, I just can't seem to find it.
Help me, SO, you're my only hope!

You might want to give pytest-flask-sqlalchemy-transactions a try. It's a plugin that exposes a db_session fixture that accomplishes what you're looking for: it lets you run database updates that get rolled back when the test exits. The plugin is based on Alex Michael's blog post, with some additional support for nested transactions that covers a wider array of use cases. There are also some configuration options for mocking out connectables in your app so you can run arbitrary methods from your codebase, too.
For test_accounts.py, you could do something like this:
from example import db, Account

class TestAccounts(object):
    def test_update_view(self, db_session):
        test_acct = Account(username='abc')
        db_session.add(test_acct)
        db_session.commit()
        resp = self.client.post('/update',
                                data={'a': 1},
                                content_type='application/json')
        assert resp.status_code == 200
The plugin needs access to your database through a _db fixture, but since you already have a db fixture defined in conftest.py, you can set up database access easily:
@pytest.fixture(scope='session')
def _db(db):
    return db
You can find details on setup and installation in the docs. Hope this helps!

I'm also having issues with the rollback; my code can be found here.
After reading some documentation, it seems the begin() function should be called on the session.
So in your case I would update the session fixture to this:
@pytest.yield_fixture(scope='function', autouse=True)
def session(db, request):
    """Creates a new database session for a test."""
    db.session.begin()
    yield db.session
    db.session.rollback()
    db.session.remove()
I didn't test this code, but when I try it on my code I get the following error:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
...
INTERNALERROR> File "./venv/lib/python2.7/site-packages/_pytest/python.py", line 59, in filter_traceback
INTERNALERROR> return entry.path != cutdir1 and not entry.path.relto(cutdir2)
INTERNALERROR> AttributeError: 'str' object has no attribute 'relto'

from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase

# global application scope: create Session class and engine
Session = sessionmaker()
engine = create_engine('postgresql://...')

class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()
        # begin a non-ORM transaction
        self.trans = self.connection.begin()
        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()
        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()
        # return connection to the Engine
        self.connection.close()
The SQLAlchemy documentation has a solution for this case; the snippet above follows its "Joining a Session into an External Transaction" recipe.
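Translating that recipe back to the original Flask-SQLAlchemy setup, a function-scoped pytest fixture could look like the sketch below. This is only a sketch based on the pattern in Alex Michael's post linked in the question, not tested against the linked repo; it assumes a session-scoped db fixture that yields the Flask-SQLAlchemy extension inside an active application context, and it uses create_scoped_session from Flask-SQLAlchemy 2.x.

import pytest

@pytest.fixture(scope='function')
def session(db):
    """Run each test on a dedicated connection inside an outer transaction, then roll back."""
    connection = db.engine.connect()
    transaction = connection.begin()
    # bind a fresh scoped session to this connection so that commits inside
    # the test stay within the outer (non-ORM) transaction
    session = db.create_scoped_session(options={'bind': connection, 'binds': {}})
    db.session = session
    yield session
    session.remove()
    transaction.rollback()
    connection.close()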

Related

Intermittent authentication error when posting to a pubsub topic

We have a data pipeline built in Google Cloud Dataflow that consumes messages from a pubsub topic and streams them into BigQuery. In order to test that it works successfully, we have some tests that run in a CI pipeline; these tests post messages onto the pubsub topic and verify that the messages are written to BigQuery successfully.
This is the code that posts to the pubsub topic:
import json
import time

from google.cloud import pubsub_v1

def post_messages(project_id, topic_id, rows):
    futures = dict()
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(
        project_id, topic_id
    )

    def get_callback(f, data):
        def callback(f):
            try:
                futures.pop(data)
            except:
                print("Please handle {} for {}.".format(f.exception(), data))
        return callback

    for row in rows:
        # When you publish a message, the client returns a future.
        # Data must be a bytestring.
        # ...
        # construct a message in var json_data
        # ...
        message = json.dumps(json_data).encode("utf-8")
        future = publisher.publish(
            topic_path,
            message
        )
        futures_key = str(message)
        futures[futures_key] = future
        future.add_done_callback(get_callback(future, futures_key))

    # Wait for all the publish futures to resolve before exiting.
    while futures:
        time.sleep(1)
When we run this test in our CI pipeline, it has started failing intermittently with this error:
21:38:55: AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x7f5247407220>" raised exception!
Traceback (most recent call last):
File "/opt/conda/envs/py3/lib/python3.8/site-packages/grpc/_plugin_wrapping.py", line 89, in __call__
self._metadata_plugin(
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/transport/grpc.py", line 101, in __call__
callback(self._get_authorization_headers(context), None)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/transport/grpc.py", line 87, in _get_authorization_headers
self._credentials.before_request(
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/credentials.py", line 134, in before_request
self.apply(headers)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/credentials.py", line 110, in apply
_helpers.from_bytes(token or self.token)
File "/opt/conda/envs/py3/lib/python3.8/site-packages/google/auth/_helpers.py", line 130, in from_bytes
raise ValueError("{0!r} could not be converted to unicode".format(value))
ValueError: None could not be converted to unicode
Error: The operation was canceled.
Unfortunately this only fails in our CI pipeline, and even then it fails intermittently (only on a small percentage of all CI pipeline runs). If I run the same test locally it succeeds every time. When running in the CI pipeline the code authenticates as a service account, whereas when I run it locally it authenticates as me.
I know from the error message that it is failing on this code:
if isinstance(result, six.text_type):
    return result
else:
    raise ValueError("{0!r} could not be converted to unicode".format(value))
https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/google/auth/_helpers.py#L127-L130
which is in a python library from google that we install using pip.
Clearly the expression:
isinstance(result, six.text_type)
is evaluating to False. I put a breakpoint on that code when I ran it locally and discovered that under normal circumstances (i.e. when it works) the value of result is something like this:
That looks like some sort of auth token.
Given the error message:
ValueError: None could not be converted to unicode
it seems that whatever action the Google authentication libraries are undertaking, they are passing None through to the code shown above.
I am at the bounds of my knowledge here. Given that this only fails in a CI pipeline, I don't have the opportunity to put a breakpoint in my code and debug it. Given the call stack in the error message, this has something to do with authentication.
I'm hoping someone can advise on a course of action.
Can anyone explain a means by which I can discover why None is being passed through to the code that is raising an error?
We had the same error. We finally solved it by using a JSON Web Token for authentication, per Google's Quickstart. Like so:
import json

from google.cloud import pubsub_v1
from google.auth import jwt

def post_messages(credentials_path, topic, list_of_message_dicts):
    credentials_dict = json.load(open(credentials_path, 'r'))
    audience = "https://pubsub.googleapis.com/google.pubsub.v1.Publisher"
    credentials_ob = jwt.Credentials.from_service_account_info(
        credentials_dict, audience=audience
    )
    publisher = pubsub_v1.PublisherClient(credentials=credentials_ob)
    for message_dict in list_of_message_dicts:
        message = json.dumps(message_dict, default=str).encode("utf-8")
        future = publisher.publish(topic, message)
We also updated our environment but it didn't fix the ValueError until we changed to jwt. Here's the environment in any case:
google-api-core==2.4.0
google-api-python-client==2.36.0
google-auth==2.3.2
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.6
google-cloud-core==2.1.0
google-cloud-pubsub==2.9.0
Tried the jwt solution above and, though it solved the issue, it drastically degraded my write throughput.
Offering another workaround that solved this issue for me.
My GOOGLE_APPLICATION_CREDENTIALS env var was set to the location of my key file. Instead, unset that env variable and, at the start of your process, run:
gcloud auth activate-service-account {account_name} --key-file {location_of_key_file}
This lets the Google auth libraries bypass the key file and use the default service-account setup (which is now the original, intended service account). Works with normal throughput and zero errors. :)
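If you would rather stay in Python than shell out to gcloud, another option (an untested sketch, not from the original answers) is to hand the publisher explicit service-account credentials; key.json below is a hypothetical path to your key file:

from google.cloud import pubsub_v1
from google.oauth2 import service_account

# key.json is a hypothetical path to your service-account key file
credentials = service_account.Credentials.from_service_account_file("key.json")
publisher = pubsub_v1.PublisherClient(credentials=credentials)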

Multiple mongoDB related to same django rest framework project

We have one Django REST framework (DRF) project which should use multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we write to another DB, the connection is made but the data is stored in the DB that was connected first.
We changed the default DB and everything else we could think of, but nothing changed.
(Note: the solution should be suitable for use with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
While running connect() just assign an alias for each of your databases and then for each Document specify a db_alias parameter in meta that points to a specific database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
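With those aliases in place, each document class reads and writes its own database. A quick sketch of what that looks like in practice:

# saved to the database registered under the 'user-db' alias
User(name='alice').save()
# saved to the database registered under the 'book-db' alias
Book(name='MongoDB in Action').save()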
I guess I finally get what you need.
What you could do is write a really simple middleware that maps your url schema to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # collect all the mongoengine Documents in your project
        import models
        self.documents = [
            item for item in vars(models).values()
            if isinstance(item, type) and issubclass(item, Document)
        ]

    def __call__(self, request):
        # depending on the URL, switch documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the response to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying the Document classes globally here, so if one request has started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting, and the first request will start writing to the same database, which will break your database horribly.
The same Python interpreter is used by two request handlers if you're using multithreading, so with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues, you can use threading.local(). If you prefer a context-manager approach, there's also the contextvars module; a related sketch using mongoengine's own context manager follows below.
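As an illustration (not from the original answer), mongoengine also ships a switch_db context manager that scopes the alias to a block instead of mutating class state globally; the view function below is hypothetical:

from mongoengine.context_managers import switch_db
from models import User  # the User document from the example above

def create_user(request, name):
    # pick the alias per request instead of rewriting User._meta globally
    alias = 'db1' if request.path.startswith('/main/project1') else 'db2'
    with switch_db(User, alias) as UserInProjectDb:
        UserInProjectDb(name=name).save()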

Persistent connection in twisted

I'm new to Twisted and have one question: how can I organize a persistent connection in Twisted? I have a queue and check it every second; if it contains something, I send it to the client. I can't find anything better than calling dataReceived every second.
Here is the code of Protocol implementation:
class SyncProtocol(protocol.Protocol):
    # ... some code here

    def dataReceived(self, data):
        if self.orders_queue.has_new_orders():
            for order in self.orders_queue:
                self.transport.write(str(order))
        reactor.callLater(1, self.dataReceived, data) # 1 second delay
It works the way I need, but I'm sure that it is a very bad solution. How can I do this differently (flexibly and correctly)? Thanks.
P.S. - the main idea and algorithm:
1. Client connects to the server and waits.
2. Server checks for updates and pushes data to the client if anything changes.
3. Client does some operations and then waits for more data.
Without knowing how the snippet you provided links into your internet.XXXServer or reactor.listenXXX (or XXXXEndpoint) calls, it's hard to make heads or tails of it, but...
First off, in normal use, a Twisted protocol.Protocol's dataReceived would only be called by the framework itself. It would be linked to a client or server connection, directly or via a factory, and it would be called automatically as data comes in on the given connection. (The vast majority of Twisted protocols and interfaces, if not all, are interrupt-based, not polling/callLater; that's part of what makes Twisted so CPU-efficient.)
So if your shown code is actually linked into Twisted via a Server or listen or Endpoint to your clients, then I think you will find that very bad things happen if your clients ever send data (...because Twisted will call dataReceived for that, which, among other problems, would add extra reactor.callLater callbacks, and all sorts of chaos would ensue...)
If instead the code isn't linked into the Twisted connection framework, then you're attempting to reuse Twisted classes in a space they aren't designed for (...I guess this seems unlikely, because I don't know how non-connection code would learn of a transport, unless you're manually setting it...)
The way I've been building models like this is to make a completely separate class for the polling-based I/O; after I instantiate it, I push my client-list (server) factory into the polling instance (something like mypollingthing.servfact = myserverfactory), thereby giving my polling logic a way to call into the clients' .write (or, more likely, a def I built to abstract things to the correct level for my polling logic).
I tend to take the examples in Krondo's Twisted Introduction as one of the canonical examples of how to do Twisted (other than Twisted Matrix itself); the example in part 6, under "Client 3.0", has a PoetryClientFactory with an __init__ that sets a callback in the factory.
If I try to blend that with the Twisted Matrix chat example and a few other things, I get:
(You'll want to change sendToAll to whatever your self.orders_queue.has_new_orders() is about)
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class PollingIOThingy(object):
    def __init__(self):
        self.sendingcallback = None  # Note I'm pushing sendToAll into here in main
        self.iotries = 0

    def pollingtry(self):
        self.iotries += 1
        print "Polling runs: " + str(self.iotries)
        if self.sendingcallback:
            self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

class MyClientConnections(Protocol):
    def connectionMade(self):
        print "Got new client!"
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        print "Lost a client!"
        self.factory.clients.remove(self)

class MyServerFactory(ServerFactory):
    protocol = MyClientConnections

    def __init__(self):
        self.clients = []

    def sendToAll(self, message):
        for c in self.clients:
            c.transport.write(message)

def main():
    client_connection_factory = MyServerFactory()
    polling_stuff = PollingIOThingy()

    # the following line is what this example is all about:
    polling_stuff.sendingcallback = client_connection_factory.sendToAll
    # push the client connections send def into my polling class

    # if you want to run something every second (instead of 1 second after
    # the end of your last code run, which could vary) do:
    l = task.LoopingCall(polling_stuff.pollingtry)
    l.start(1.0)
    # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html

    reactor.listenTCP(5000, client_connection_factory)
    reactor.run()

if __name__ == '__main__':
    main()
To be fair, it might be better to inform PollingIOThingy of the callback by passing it as an arg to its __init__ (that is what is shown in Krondo's docs); a sketch of that variant follows below. For some reason I tend to miss connections like this when I read code and find class-cheating easier to see, but that may just be my personal brain damage.
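A minimal sketch of that constructor-injection variant, reusing the names from the example above:

class PollingIOThingy(object):
    def __init__(self, sendingcallback=None):
        # take the callback up front instead of assigning it after construction
        self.sendingcallback = sendingcallback
        self.iotries = 0

# ...and in main(), instead of assigning the attribute afterwards:
# polling_stuff = PollingIOThingy(client_connection_factory.sendToAll)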

Renaming an Amazon CloudWatch Alarm

I'm trying to organize a large number of CloudWatch alarms for maintainability, and the web console grays out the name field on an edit. Is there another method (preferably something scriptable) for updating the name of CloudWatch alarms? I would prefer a solution that does not require any programming beyond simple executable scripts.
Here's a script we use to do this for the time being:
import sys
import boto

def rename_alarm(alarm_name, new_alarm_name):
    conn = boto.connect_cloudwatch()

    def get_alarm():
        alarms = conn.describe_alarms(alarm_names=[alarm_name])
        if not alarms:
            raise Exception("Alarm '%s' not found" % alarm_name)
        return alarms[0]

    alarm = get_alarm()

    # work around boto comparison serialization issue
    # https://github.com/boto/boto/issues/1311
    alarm.comparison = alarm._cmp_map.get(alarm.comparison)

    alarm.name = new_alarm_name
    conn.update_alarm(alarm)
    # update actually creates a new alarm because the name has changed, so
    # we have to manually delete the old one
    get_alarm().delete()

if __name__ == '__main__':
    alarm_name, new_alarm_name = sys.argv[1:3]
    rename_alarm(alarm_name, new_alarm_name)
It assumes you're either on an ec2 instance with a role that allows this, or you've got a ~/.boto file with your credentials. It's easy enough to manually add yours.
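If you're on boto3 rather than legacy boto, a rough equivalent of the same create-then-delete dance might look like the following sketch (untested; the field whitelist covers common metric alarms and may need extending for yours):

import sys
import boto3

# describe_alarms fields that put_metric_alarm accepts as parameters
COPY_FIELDS = [
    'AlarmDescription', 'ActionsEnabled', 'OKActions', 'AlarmActions',
    'InsufficientDataActions', 'MetricName', 'Namespace', 'Statistic',
    'Dimensions', 'Period', 'EvaluationPeriods', 'Threshold',
    'ComparisonOperator', 'TreatMissingData',
]

def rename_alarm(alarm_name, new_alarm_name):
    cloudwatch = boto3.client('cloudwatch')
    alarms = cloudwatch.describe_alarms(AlarmNames=[alarm_name])['MetricAlarms']
    if not alarms:
        raise Exception("Alarm '%s' not found" % alarm_name)
    old = alarms[0]
    params = {key: old[key] for key in COPY_FIELDS if key in old}
    # a changed name creates a brand-new alarm, so delete the old one afterwards
    cloudwatch.put_metric_alarm(AlarmName=new_alarm_name, **params)
    cloudwatch.delete_alarms(AlarmNames=[alarm_name])

if __name__ == '__main__':
    rename_alarm(*sys.argv[1:3])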
Unfortunately it looks like this is not currently possible.
I looked around for the same solution, but it seems neither the console nor the CloudWatch API provides that feature.
Note: we can, however, copy the existing alarm with the same parameters and save it under a new name.

getting result from a function running "deferToThread"

I have recently started working with Twisted and am not very familiar with its functions. I have a problem related to the deferToThread method; here is my code using it:
from twisted.internet.threads import deferToThread
from twisted.internet import reactor

results = []

class Tool(object):
    def exectool(self, tool):
        # print "Test Class Exec tool running..........."
        exec tool
        return

    def getResult(self, tool):
        return results.append(deferToThread(self.exectool, tool))

to = Tool()
to.getResult(tools)

f = open(temp).read()
obj_tool = compile(f, 'a_filename', 'exec')
[<code object <module> at 0x8ce7020, file "a_filename", line 1>, <code object <module> at 0x8cd4e30, file "a_filename", line 2>]
I am passing the tools one by one to the getResult() method and it executes successfully and prints the results of the scripts written in the file objects.
I have to store the result of each tool's execution in some variable so that I can save it to a database. How do I achieve this? When I call re = to.getResult(tools) and print re, it prints None.
I have to store its results in a database; is there something I can do?
Thanks in advance.
There are two problems here.
First, deferToThread will not work if you never start the reactor. Hopefully this code snippet was actually extracted from a larger Twisted-using application where the reactor is running, so that won't be an actual problem for you. But you shouldn't expect this snippet to work unless you add a reactor.run() call to it.
Second, deferToThread returns a Deferred. The Deferred fires with the result of the callable you passed in. This is covered in the API documentation. Many APIs in Twisted return a Deferred, so you might want to read the documentation covering them. Once you understand how they work and how to use them, lots of things should be quite a bit easier.
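A minimal sketch of that callback pattern, assuming a hypothetical save_to_db function standing in for your real database write and a tool body that assigns its output to a result variable:

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def exectool(tool_source):
    # runs in a worker thread; the return value becomes the Deferred's result
    namespace = {}
    exec(tool_source, namespace)
    return namespace.get('result')

def save_to_db(result):
    # hypothetical stand-in for your real database write
    print('saving %r to the database' % (result,))

d = deferToThread(exectool, "result = 6 * 7")
d.addCallback(save_to_db)
d.addBoth(lambda _: reactor.stop())
reactor.run()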