apollo-upload-client and graphene-django - file-upload

I have a question about using apollo-upload-client with graphene-django. I've discovered that apollo-upload-client adds the operations to formData, but graphene-django only tries to read the query parameter. Where and how should this be fixed?

If you're referring to the data that has a header like this (when viewing the HTTP request in Chrome's dev tools):
Content-Disposition: form-data; name="operations"
and data like
{"operationName":"MyMutation","variables":{"myData"....}, "query":"mutation MyMutation"...},
the graphene-python library interprets this and assembles it into a query for you, inserting the variables and removing the file data from the query. If you are using Django, you can find all of the uploaded files in info.context.FILES when writing a mutation.
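For illustration, reading the uploads inside a mutation could look like this (a minimal sketch against graphene 2.x; the UploadFile mutation name is made up):
import graphene

class UploadFile(graphene.Mutation):
    success = graphene.Boolean()

    def mutate(self, info):
        # graphene-django exposes the Django request as info.context,
        # so multipart uploads are available through its FILES mapping
        for field_name, uploaded in info.context.FILES.items():
            print(field_name, uploaded.name, uploaded.size)
        return UploadFile(success=True)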

Here's my solution to support the latest apollo-upload-client (8.1). I recently had to revisit my Django code when I upgraded from apollo-upload-client 5.x to 8.x; hope this helps.
Sorry, I'm using an older graphene-django, but hopefully you can update the mutation syntax to the latest.
Upload scalar type (passthrough, basically):
from graphene.types import Scalar

class Upload(Scalar):
    '''A file upload'''

    @staticmethod
    def serialize(value):
        raise Exception('File upload cannot be serialized')

    @staticmethod
    def parse_literal(node):
        raise Exception('No such thing as a file upload literal')

    @staticmethod
    def parse_value(value):
        return value
My upload mutation:
import graphene
from graphene import relay
from tempfile import NamedTemporaryFile

class UploadImage(relay.ClientIDMutation):
    class Input:
        image = graphene.Field(Upload, required=True)

    success = graphene.Field(graphene.Boolean)

    @classmethod
    def mutate_and_get_payload(cls, input, context, info):
        # input['image'] is the Django UploadedFile injected by the custom view below
        with NamedTemporaryFile(delete=False) as tmp:
            for chunk in input['image'].chunks():
                tmp.write(chunk)
            image_file = tmp.name
        # do something with image_file
        return UploadImage(success=True)
The heavy lifting happens in a custom GraphQL view. Basically it injects the file object into the appropriate places in the variables map.
import json
import logging

from django.http import HttpResponseBadRequest
from graphene_django.views import GraphQLView, HttpError

logger = logging.getLogger(__name__)

def maybe_int(s):
    try:
        return int(s)
    except ValueError:
        return s

class CustomGraphqlView(GraphQLView):
    def parse_request_json(self, json_string):
        try:
            request_json = json.loads(json_string)
            if self.batch:
                assert isinstance(request_json, list), (
                    'Batch requests should receive a list, but received {}.'.format(repr(request_json)))
                assert len(request_json) > 0, 'Received an empty list in the batch request.'
            else:
                assert isinstance(request_json, dict), 'The received data is not a valid JSON query.'
            return request_json
        except AssertionError as e:
            raise HttpError(HttpResponseBadRequest(str(e)))
        except BaseException:
            logger.exception('Invalid JSON')
            raise HttpError(HttpResponseBadRequest('POST body sent invalid JSON.'))

    def parse_body(self, request):
        content_type = self.get_content_type(request)
        if content_type == 'application/graphql':
            return {'query': request.body.decode()}
        elif content_type == 'application/json':
            return self.parse_request_json(request.body.decode('utf-8'))
        elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:
            operations_json = request.POST.get('operations')
            map_json = request.POST.get('map')
            if operations_json and map_json:
                operations = self.parse_request_json(operations_json)
                map = self.parse_request_json(map_json)
                for file_id, f in request.FILES.items():
                    # Each map entry is a list of dotted paths into `operations`;
                    # walk each path and drop the file object in at the end.
                    for name in map[file_id]:
                        segments = [maybe_int(s) for s in name.split('.')]
                        cur = operations
                        while len(segments) > 1:
                            cur = cur[segments.pop(0)]
                        cur[segments.pop(0)] = f
                logger.info('parse_body %s', operations)
                return operations
            else:
                return request.POST
        return {}
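For reference, the multipart body apollo-upload-client 8.x sends looks roughly like this (a sketch following the GraphQL multipart request spec; the operation name is made up):
operations: {"operationName": "UploadImage", "variables": {"input": {"image": null}}, "query": "mutation ..."}
map: {"0": ["variables.input.image"]}
0: <the file part>
The loop above splits each dotted path from "map" into segments ('variables.input.image' becomes ['variables', 'input', 'image']), walks that path through the operations dict, and overwrites the null placeholder with the matching entry from request.FILES.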


How to extract the [Documentation] text from Robot framework test case

I am trying to extract the content of the [Documentation] section as a string for comparison with other parts in a Python script.
I was told to use the Robot Framework API (https://robot-framework.readthedocs.io/en/stable/) to extract it, but I have no idea how.
However, I am required to work with version 3.1.2.
Example:
*** Test Cases ***
ATC Verify that Sensor Battery can enable and disable manufacturing mode
    [Documentation]    E1: This is the description of the test 1
    ...    E2: This is the description of the test 2
    [Tags]    E1    TRACE{Trace_of_E1}
    ...    E2    TRACE{Trace_of_E2}
Extract the string as
E1: This is the description of the test 1
E2: This is the description of the test 2
Have a look at these examples. I did something similar to generate test plan descriptions. I tried to adapt my code to your requirements, so this may work for you.
import os
import re
from robot.api.parsing import get_model, ModelVisitor, Token

class RobotParser(ModelVisitor):
    def __init__(self):
        # Accumulates the formatted documentation text
        self.text = ''

    def get_text(self):
        return self.text

    def visit_TestCase(self, node):
        # The matched `TestCase` node is a block with `header` and
        # `body` attributes. `header` is a statement with familiar
        # `get_token` and `get_value` methods for getting certain
        # tokens or their value.
        for keyword in node.body:
            # skip statements that carry no [Documentation] token
            if keyword.get_value(Token.DOCUMENTATION) is None:
                continue
            self.text += keyword.get_value(Token.ARGUMENT)

    def visit_Documentation(self, node):
        # The matched "Documentation" node with its value
        self.text += node.value + '\n'

    def visit_File(self, node):
        # Call `generic_visit` to visit also child nodes.
        return self.generic_visit(node)

if __name__ == "__main__":
    path = "../tests"
    for filename in os.listdir(path):
        if re.match(r".*\.robot", filename):
            model = get_model(os.path.join(path, filename))
            robot_parser = RobotParser()
            robot_parser.visit(model)
            text = robot_parser.get_text()
The code marked as the best answer didn't quite work for me and has a lot of redundancy, but it inspired me enough to get into the parsing and write it in a much more readable and efficient way that actually works as-is. You just have to have your own way of iterating through the filesystem and calling the get_robot_metadata(filepath) function.
from robot.api.parsing import get_model, ModelVisitor, Token

class RobotParser(ModelVisitor):
    def __init__(self):
        self.testcases = {}

    def visit_TestCase(self, node):
        testcasename = node.header.name
        self.testcases[testcasename] = {}
        for section in node.body:
            if section.get_value(Token.DOCUMENTATION) is not None:
                documentation = section.value
                self.testcases[testcasename]['Documentation'] = documentation
            elif section.get_value(Token.TAGS) is not None:
                tags = section.values
                self.testcases[testcasename]['Tags'] = tags

    def get_testcases(self):
        return self.testcases

def get_robot_metadata(filepath):
    if filepath.endswith('.robot'):
        robot_parser = RobotParser()
        model = get_model(filepath)
        robot_parser.visit(model)
        metadata = robot_parser.get_testcases()
        return metadata
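If you want a starting point for the filesystem part, something like this would drive it over a test tree (a sketch; the "tests" directory name is an assumption):
import os

for root, _dirs, files in os.walk("tests"):
    for name in files:
        if name.endswith(".robot"):
            # print the per-testcase metadata dict for each suite file
            print(get_robot_metadata(os.path.join(root, name)))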
This function will be able to extract the [Documentation] section from the testcase:
def documentation_extractor(testcase):
    documentation = []
    for setting in testcase.settings:
        if len(setting) > 2 and setting[1].lower() == "[documentation]":
            for doc in setting[2:]:
                if doc.startswith("#"):
                    # the start of a comment, so skip the rest of the line
                    break
                documentation.append(doc)
            break
    return "\n".join(documentation)

Karate: How can we retrieve the value from called feature file

I have two parameters in feature file A, and I pass those values to another feature file, B.
But I am unable to retrieve the values as expected in feature file B.
CODE:
Feature File A:
And def Response = response
And def token = response.metaData.paging.token
And def totalPages = response.metaData.paging.totalPages
* def xyz =
"""
function(times){
  for (var currentPage = 1; currentPage <= times; currentPage++) {
    karate.log('Run test round: ' + currentPage);
    karate.call('ABC.feature', { getToken: token, page: currentPage });
  }
  java.lang.Thread.sleep(1 * 1000);
}
"""
* call xyz totalPages
Feature File B:
* def token = '#(getToken)'
* def currentPage = '#(page)'
But the output was
#getToken
#page
What would be the best way to retrieve these values for further utilization?
Try this:
* def token = getToken
* def currentPage = page
Here's another thing: any variable defined in the calling feature will be visible, e.g. token, so most of the time you don't need to pass arguments at all:
* print token
* print totalPages
Please avoid JS for loops as far as possible: https://github.com/intuit/karate#loops - and actually you seem to have missed the data-driven testing approach that Karate recommends: https://github.com/intuit/karate#the-karate-way
If you are still stuck, please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

insert_many in pymongo not persisting

I'm having some issues with persisting documents with pymongo when using insert_many.
I'm handing over a list of dicts to insert_many and it works fine from inside the same script that does the inserting. Less so once the script has finished.
import numpy as np
from pymongo import MongoClient

def row_to_doc(row):
    rowdict = row.to_dict()
    for key in rowdict:
        val = rowdict[key]
        if type(val) == float or type(val) == np.float64:
            if np.isnan(val):
                # If we want a SQL style document collection
                rowdict[key] = None
                # If we want a NoSQL style document collection
                # del rowdict[key]
    return rowdict

def dataframe_to_collection(df):
    n = len(df)
    doc_list = []
    for k in range(n):
        doc_list.append(row_to_doc(df.iloc[k]))
    return doc_list

def get_mongodb_client(host="localhost", port=27017):
    return MongoClient(host, port)

def create_collection(client):
    db = client["material"]
    return db["master-data"]

def add_docs_to_mongo(collection, doc_list):
    collection.insert_many(doc_list)

def main():
    client = get_mongodb_client()
    csv_fname = "some_csv_fname.csv"
    df = get_clean_csv(csv_fname)
    doc_list = dataframe_to_collection(df)
    collection = create_collection(client)
    add_docs_to_mongo(collection, doc_list)
    test_doc = collection.find_one({"MATERIAL": "000000000000000001"})
When I open up another python REPL and start looking through the client.material.master_data collection with collection.find_one({"MATERIAL": "000000000000000001"}) or collection.count_documents({}) I get None for the find_one and 0 for the count_documents.
Is there a step where I need to call some method to persist the data to disk? db.collection.save() in the mongo client API sounds like what I need, but from what I have read it's just another way of inserting documents. Any help would be greatly appreciated.
The problem was that I was getting my collection via client.db_name.collection_name, and that wasn't the same collection I was creating in my code; client.db_name["collection-name"] solved my issue. Weird.
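To illustrate the mismatch (a minimal sketch): attribute access can't contain a hyphen, so the two spellings name two different collections:
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
written = client["material"]["master-data"]  # what the script wrote to
read = client.material.master_data           # names "master_data" instead
print(written == read)  # False: different collections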

How to pass the json list response of one feature file as parameter to another feature file

My requirement is to pass the response of the first feature file as input to the second feature file. The first feature file's response is a JSON list, so the expectation is that the second feature file should be called for each value of the JSON list.
Feature:
Scenario: identify the reference account
* def initTestData = read('../inputData.feature')
* def accountData = $initTestData.response
* print 'Account Details' + accountData // the output of this is a JSON list ["SB987658","SB984345"]
* def reqRes = karate.call('../Request.feature', { accountData : accountData })
In Request.feature file we are constructing the url dynamically
Given url BaseUrl + '/account/' + '#(accountId)' - here I am facing an issue: http://10.34.145.126/account/["SB987658","SB984345"]
My requirement is that Request.feature should be called for each value in the 'accountData' JSON list.
I have tried:
* def resAccountList = karate.map(accountData, function(x){accountId.add(x) })
* def testcaseDetails = call read('../requests/scenarios.feature') resAccountList.accountId
The result is the same: accountId gets replaced with ["SB987658","SB984345"].
I need to call Request.feature twice (http://10.34.145.126/account/SB987658 and http://10.34.145.126/account/SB984345) and use the response of each call in the subsequent feature file calls.
I think you have a mistake in your karate.map(). Look at the example below:
* def array = ['SB987658', 'SB984345']
* def data = karate.map(array, function(x){ return { value: x } })
* def result = call read('called.feature') data
And called.feature is:
Feature:
Scenario:
    Given url 'https://httpbin.org'
    And path 'anything', value
    When method get
    Then status 200
Which makes 2 requests:
https://httpbin.org/anything/SB987658
https://httpbin.org/anything/SB984345

A WSGI replacement for CGIXMLRPCRequestHandler?

It is well understood that forking a process for running Python, as CGI does, is slower than embedding Python, as WSGI does. I would like to implement an XML-RPC interface using the SimpleXMLRPCServer included in the standard Python library and I already have an implementation that works via CGI. I believe there should be a faster way. I'd like to try WSGI but first I need a request handler for WSGI and there does not appear to be one in SimpleXMLRPCServer already. Am I all wet or is there no equivalent of this in the standard library under Python 2.6, 2.7, 3.x?
Here is my initial implementation of a WSGI replacement for CGIXMLRPCRequestHandler:
import BaseHTTPServer
from xmlrpclib import SimpleXMLRPCDispatcher

class WSGIXMLRPCRequestHandler(SimpleXMLRPCDispatcher):
    """Simple handler for XML-RPC data passed through WSGI."""

    def __init__(self, allow_none=False, encoding=None):
        SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding)

    def __call__(self, environ, start_response):
        """Parse and handle a single XML-RPC request"""
        result = []
        method = environ['REQUEST_METHOD']
        headers = [('Content-type', 'text/html')]
        if method != 'POST':
            # Indicate an error because XML-RPC only uses the POST method.
            code = 400
            message, explain = BaseHTTPServer.BaseHTTPRequestHandler.responses[code]
            status = '%d %s' % (code, message)
            if method == 'HEAD':
                response = ''
            else:
                response = BaseHTTPServer.DEFAULT_ERROR_MESSAGE % {
                    'code': code, 'message': message, 'explain': explain}
        else:
            # Dispatch the XML-RPC request to the registered functions
            status = '200 OK'
            request = environ['wsgi.input'].read(int(environ['CONTENT_LENGTH']))
            response = self._marshaled_dispatch(request)
        length = len(response)
        if length > 0:
            result = [response]
            headers.append(('Content-length', str(length)))
        start_response(status, headers)
        return result
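A quick way to try it out (a minimal sketch using the standard library's wsgiref; on Python 3 the dispatcher import becomes xmlrpc.server.SimpleXMLRPCDispatcher):
from wsgiref.simple_server import make_server

handler = WSGIXMLRPCRequestHandler(allow_none=True)
handler.register_function(pow)  # expose any callable over XML-RPC
make_server('localhost', 8000, handler).serve_forever()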