How to mock a function call in a Flask-RESTful resource method - testing

I developed an API using Flask-RESTful. The API has a resource named 'Server', and this resource has a get method that handles requests to the '/server' URL.
Inside this method I call a method of another class, 'Connector', which fetches data from another service:
class Server(Resource):
    def get(self):
        ...
        status, body = connector.get_servers(page, size)  # call a method of another class
        ...
        return body, status
I want to test the API I developed. I wrote some tests:
from application import create_app
from unittest import TestCase


class TestServerResource(TestCase):
    def setUp(self):
        self.app = create_app()
        self.client = self.app.test_client

    def test_bad_url(self):
        res = self.client().get('/server')
        self.assertEqual(res.status_code, 400)

    # Test of the get method of the Server resource described above
    def test_pagination(self):
        res = self.client().get('/server?page=1&size=1')  # request to my API
        self.assertEqual(res.status_code, 200)
In 'test_pagination' I am testing the 'get' method of my resource, but that method itself calls a method of another class. So my question is: how can I mock the call to 'connector.get_servers()' in the test?
Thanks.

I have found a solution.
To mock a method call inside another method we can use the 'patch' decorator from unittest.mock.
For the example described above, this looks as follows:
from unittest.mock import patch

# Test of the get method of the Server resource described above
@patch('path_to_method_we_want_to_mock.method')
def test_pagination(self, mock):
    mock.return_value = <new value>  # set the value the mocked method returns
    res = self.client().get('/server?page=1&size=1')  # request to my API
    self.assertEqual(res.status_code, 200)
Now, inside get(), the call to the get_servers method will return mock.return_value.
Stacking several patch decorators is also possible (the mocks are passed to the test in bottom-up order):
@patch('application.servers_connector.ServersConnector.get_server_by_id')
@patch('application.rent_connector.RentConnector.get_rents_for_user')
def test_rent_for_user(self, rent_mock, server_mock):
    ...
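For instance, a complete mocked version of the pagination test might look like the sketch below. The patch target 'application.connector.Connector.get_servers' is an assumption; patch whatever path the Server resource actually uses to reach the connector. The return value mirrors the (status, body) tuple the resource unpacks.

from unittest import TestCase
from unittest.mock import patch

from application import create_app


class TestServerResource(TestCase):
    def setUp(self):
        self.app = create_app()
        self.client = self.app.test_client

    # 'application.connector.Connector.get_servers' is a hypothetical import path.
    @patch('application.connector.Connector.get_servers')
    def test_pagination(self, get_servers_mock):
        # The resource does: status, body = connector.get_servers(page, size)
        get_servers_mock.return_value = (200, {'servers': [{'id': 1}]})
        res = self.client().get('/server?page=1&size=1')
        self.assertEqual(res.status_code, 200)
        get_servers_mock.assert_called_once()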

Related

How to make Locust test my API endpoint served with FastAPI?

I have an API served with FastAPI working on:
http://127.0.0.1:8000/predictions
And I want to test it using Locust. My code:
from locust import HttpUser, TaskSet, task
import json


class UserBehavior(TaskSet):
    @task(1)
    def create_post(self):
        headers = {'content-type': 'application/json', 'Accept-Encoding': 'gzip'}
        self.client.post("/predictions",
                         data=json.dumps({
                             "text": "I am tired",
                         }),
                         headers=headers,
                         name="Create a new post")


class WebsiteUser(HttpUser):
    task = [UserBehavior]
I get this error message while Locust is running:
[2022-07-23 16:33:32,764] pop-os/ERROR/locust.user.task: No tasks defined on WebsiteUser. use the @task decorator or set the tasks property of the User (or mark it as abstract = True if you only intend to subclass it)
Traceback (most recent call last):
  File "/home/statspy/anaconda3/lib/python3.9/site-packages/locust/user/task.py", line 340, in run
    self.schedule_task(self.get_next_task())
  File "/home/statspy/anaconda3/lib/python3.9/site-packages/locust/user/task.py", line 472, in get_next_task
    raise Exception(
Exception: No tasks defined on WebsiteUser. use the @task decorator or set the tasks property of the User (or mark it as abstract = True if you only intend to subclass it)
How can I fix it?
Thanks
It's tasks=[UserBehavior], not task=[UserBehavior].
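For reference, a corrected WebsiteUser might look like this (a minimal sketch; the host value is an assumption based on the FastAPI address in the question and can also be passed with --host):

class WebsiteUser(HttpUser):
    # 'tasks' (plural) is the attribute Locust reads; a plain 'task' attribute
    # does nothing, which is why the "No tasks defined" error was raised.
    tasks = [UserBehavior]
    host = "http://127.0.0.1:8000"  # assumption: the FastAPI server from the question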

Trigger errback when process_exception() is called in Middleware

Using Scrapy I'm implementing a CrawlSpider which will scrape all kinds of websites, including some very slow ones that will eventually produce a timeout.
My problem is that if such a twisted.internet.error.TimeoutError occurs, I want to trigger the errback of my spider. I don't want to raise this exception, and I also don't want to return a dummy Response object, which might suggest that scraping was successful.
Note that I was already able to make all of this work, but only using a "dirty" workaround:
myspider.py (excerpt)
class MySpider(CrawlSpider):
    name = 'my-spider'

    rules = (
        Rule(
            link_extractor=LinkExtractor(unique=True),
            callback='_my_callback', follow=True
        ),
    )

    def parse_start_url(self, response):
        # (...)

    def errback(self, failure):
        log.warning('Failed scraping following link: {}'
                    .format(failure.request.url))
middlewares.py (excerpt)
from twisted.internet.error import DNSLookupError, TimeoutError

# (...)

class MyDownloaderMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        return None

    def process_response(self, request, response, spider):
        return response

    def process_exception(self, request, exception, spider):
        if (isinstance(exception, TimeoutError)
                or isinstance(exception, DNSLookupError)):
            # just 2 examples of errors I want to catch
            # set status=500 to enforce errback() call
            return Response(request.url, status=500)
Settings should be fine with my custom Middleware already enabled.
Now, as you can see, by using return Response(request.url, status=500) I can trigger my errback() function in MySpider as desired. However, the status code 500 is very misleading: it's not only incorrect, technically I never receive any response at all.
So my question is: how can I trigger my errback() function through DownloaderMiddleware.process_exception() in a clean way?
EDIT: I quickly figured out that for similar exceptions like DNSLookupError I want the same behaviour in place. I've updated the code snippets accordingly.
I didn't find it in the docs, but looking at the source I found that DownloaderMiddleware.process_exception() can return twisted.python.failure.Failure objects as well as Request or Response objects.
This means you can return a Failure object to be handled by the errback, by wrapping the exception in the Failure object.
This is cleaner than creating a fake Response object; see an example middleware implementation that does this here: https://github.com/miguelsimon/site2graph/blob/master/site2graph/middlewares.py
The core idea:
from twisted.python.failure import Failure


class MyDownloaderMiddleware:
    def process_exception(self, request, exception, spider):
        return Failure(exception)
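With that in place, the spider's errback receives the Failure like any other download failure. A minimal sketch of what the errback from the question could do with it (failure.check() is Twisted's way of testing the wrapped exception type; self.logger is the spider's built-in logger):

from twisted.internet.error import DNSLookupError, TimeoutError

def errback(self, failure):
    # Branch on the wrapped exception type; failure.request is the original Request.
    if failure.check(TimeoutError, DNSLookupError):
        self.logger.warning('Timed out or DNS failure: %s', failure.request.url)
    else:
        self.logger.warning('Failed scraping following link: %s', failure.request.url)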
The __init__ method of the Rule class accepts a process_request parameter that you can use to attach an errback to a request:
class MySpider(CrawlSpider):
    name = 'my-spider'

    rules = (
        Rule(
            # …
            process_request='process_request',
        ),
    )

    def process_request(self, request, response):
        return request.replace(errback=self.errback)

    def errback(self, failure):
        pass

Scrapy : How to write a UserAgentMiddleware?

I want to write a UserAgentMiddleware for Scrapy.
The docs say:
Middleware that allows spiders to override the default user agent.
In order for a spider to override the default user agent, its user_agent attribute must be set.
docs:
https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.downloadermiddlewares.useragent
But there is no example, and I have no idea how to write it.
Any suggestions?
You can look at the built-in one in your Scrapy installation path, e.g.:
/Users/tarun.lalwani/.virtualenvs/project/lib/python3.6/site-packages/scrapy/downloadermiddlewares/useragent.py
"""Set User-Agent header per spider or use a default value from settings"""
from scrapy import signals
class UserAgentMiddleware(object):
"""This middleware allows spiders to override the user_agent"""
def __init__(self, user_agent='Scrapy'):
self.user_agent = user_agent
#classmethod
def from_crawler(cls, crawler):
o = cls(crawler.settings['USER_AGENT'])
crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
return o
def spider_opened(self, spider):
self.user_agent = getattr(spider, 'user_agent', self.user_agent)
def process_request(self, request, spider):
if self.user_agent:
request.headers.setdefault(b'User-Agent', self.user_agent)
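As the quoted docs note, with this built-in middleware a spider only needs to define a user_agent attribute to override the default; a minimal sketch (the spider name and UA string are illustrative):

import scrapy


class MySpider(scrapy.Spider):
    name = 'my-spider'
    # Picked up by UserAgentMiddleware.spider_opened() shown above.
    user_agent = 'Mozilla/5.0 (compatible; MyCrawler/1.0)'

    def parse(self, response):
        pass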
For setting a random user agent, you can look at the example below:
https://github.com/alecxe/scrapy-fake-useragent/blob/master/scrapy_fake_useragent/middleware.py
First visit some website and get some of the newest user agents. Then, in your standard middleware, do something like this. This is the same place you would set up your own proxy settings: grab a random UA from the text file and put it in the headers. This is sloppy, just to show an example; you would want to import random at the top and also make sure to close useragents.txt when you are done with it. I would probably just load the user agents into a list at the top of the module.
class GdataDownloaderMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        user_agents = open('useragents.txt', 'r')
        user_agents = user_agents.readlines()
        import random
        user_agent = random.choice(user_agents)
        request.headers.setdefault(b'User-Agent', user_agent)

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
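A tidier variant along the lines the answer itself suggests (random imported at the top, and the user-agent file read once and closed) might look like the sketch below; useragents.txt, with one UA string per line, is an assumed local file:

import random

from scrapy import signals

# Load the pool once at import time; the file is closed immediately.
with open('useragents.txt', 'r') as f:
    USER_AGENTS = [line.strip() for line in f if line.strip()]


class RandomUserAgentMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Pick a random User-Agent for every outgoing request.
        request.headers.setdefault(b'User-Agent', random.choice(USER_AGENTS))
        return None

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)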

How to pass response to a spider without fetching a web page?

The Scrapy documentation specifically mentions that I should use downloader middleware if I want to pass a response to a spider without actually fetching the web page. However, I can't find any documentation or examples on how to achieve this functionality.
I am interested in passing only the URL to the request callback, populating an item's file_urls field with the URL (and certain permutations thereof), and using the FilesPipeline to handle the actual download.
How can I write a downloader middleware class that passes the URL to the spider while avoiding downloading the web page?
You can return a Response object from the downloader middleware's process_request() method. This method is called for every request your spider yields.
Something like:
from scrapy.http import Response


class NoDownloadMiddleware(object):
    def process_request(self, request, spider):
        # only process marked requests
        if not request.meta.get('only_download'):
            return

        # now make the Response object however you wish
        response = Response(request.url)
        return response
and in your spider:
def parse(self, response):
    yield Request(some_url, meta={'only_download': True})
and in your settings.py activate the middleware:
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.NoDownloadMiddleware': 543,
}
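Tying this back to the FilesPipeline use case from the question: the callback for the marked request can then build the item straight from response.url, since the body was never fetched. A minimal sketch (the callback name, dict item, and URL permutation are illustrative; the marked Request would point at this callback):

def parse_marked(self, response):
    # response.body is empty here; only response.url carries information.
    yield {
        'file_urls': [
            response.url,
            response.url + '?size=large',  # hypothetical permutation of the URL
        ],
    }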

'HttpResponseRedirect' object has no attribute 'client'

Django 1.9.6
I'd like to write a unit test that checks redirection.
Could you help me understand what I am doing wrong here?
Thank you in advance.
The test:
from django.test import TestCase
from django.core.urlresolvers import reverse
from django.http.request import HttpRequest
from django.contrib.auth.models import User


class GeneralTest(TestCase):
    def test_anonymous_user_redirected_to_login_page(self):
        user = User(username='anonymous', email='vvv@mail.ru', password='ttrrttrr')
        user.is_active = False
        request = HttpRequest()
        request.user = user
        hpv = HomePageView()
        response = hpv.get(request)
        self.assertRedirects(response, reverse("auth_login"))
The result:
ERROR: test_anonymous_user_redirected_to_login_page (general.tests.GeneralTest)
Traceback (most recent call last):
  File "/home/michael/workspace/photoarchive/photoarchive/general/tests.py", line 44, in test_anonymous_user_redirected_to_login_page
    self.assertRedirects(response, reverse("auth_login"))
  File "/home/michael/workspace/venvs/photoarchive/lib/python3.5/site-packages/django/test/testcases.py", line 326, in assertRedirects
    redirect_response = response.client.get(path, QueryDict(query),
AttributeError: 'HttpResponseRedirect' object has no attribute 'client'

Ran 3 tests in 0.953s
What pdb says:
-> self.assertRedirects(response, reverse("auth_login"))
(Pdb) response
<HttpResponseRedirect status_code=302, "text/html; charset=utf-8", url="/accounts/login/">
You need to add a client to the response object. See the updated code below.
from django.test import TestCase, Client
from django.core.urlresolvers import reverse
from django.http.request import HttpRequest
from django.contrib.auth.models import User


class GeneralTest(TestCase):
    def test_anonymous_user_redirected_to_login_page(self):
        user = User(username='anonymous', email='vvv@mail.ru', password='ttrrttrr')
        user.is_active = False
        request = HttpRequest()
        request.user = user
        hpv = HomePageView()
        response = hpv.get(request)
        response.client = Client()
        self.assertRedirects(response, reverse("auth_login"))
It looks like you are calling your view's get directly rather than using the built-in Client. When you use the test client, you get your client instance back on the response, presumably for cases such as this where you want to check or fetch a redirect.
One solution is to use the client to fetch the response from your view. Another is to stick a client on the response, as mentioned above.
A third option is to tell assertRedirects not to fetch the redirect; no client is needed if you don't ask the assertion to fetch the redirect target. That's done by adding fetch_redirect_response=False to your assertion.
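For example, the third option applied to the original test only changes the assertion (a minimal sketch; the rest of the test stays as in the question):

# No client is attached to the response, so tell assertRedirects
# only to check the redirect, not to fetch its target.
self.assertRedirects(response, reverse("auth_login"),
                     fetch_redirect_response=False)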