xhtml2pdf in Django - issue with link_callback

I am trying to generate a PDF from an HTML template in Django, using pretty much the basic approach found on the web. The issue is that I can get it to render the content of my template, but not the images from my media directory. I always get the following error:
SuspiciousFileOperation at /manager/createpdf/
The joined path is located outside of the base path component
Since I do get some output, I assume that nothing is wrong with my view. Here is my render_to_pdf:
def render_to_pdf(template_src, context_dict):
    template = get_template(template_src)
    html = template.render(context_dict)
    result = BytesIO()
    pdf = pisa.pisaDocument(BytesIO(html.encode('UTF-8')), result, link_callback=link_callback)
    if not pdf.err:
        return HttpResponse(result.getvalue(), content_type='application/pdf')
    return None
and the link_callback:
def link_callback(uri, rel):
    result = finders.find(uri)
    if result:
        if not isinstance(result, (list, tuple)):
            result = [result]
        result = list(os.path.realpath(path) for path in result)
        path = result[0]
    else:
        sUrl = settings.STATIC_URL
        sRoot = settings.STATIC_ROOT
        mUrl = settings.MEDIA_URL
        mRoot = settings.MEDIA_ROOT
        if uri.startswith(mUrl):
            path = os.path.join(mRoot, uri.replace(mUrl, ""))
        elif uri.startswith(sUrl):
            path = os.path.join(sRoot, uri.replace(sUrl, ""))
        else:
            return uri
    # make sure that file exists
    if not os.path.isfile(path):
        raise Exception('media URI must start with %s or %s' % (sUrl, mUrl))
    return path
I am pretty sure that the link_callback isn't doing its job, but my knowledge is too limited to patch it. I also assume that the media directory is configured correctly, since I can access the media files from other views/templates.
Help is very much appreciated, since I have spent quite a few hours on this issue... A big thanks to everyone who is going to contribute here!
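For reference, the view that calls this helper follows the standard pattern; the view name, template path and context below are placeholders rather than my exact code:

from django.http import HttpResponse

def createpdf(request):
    # Illustrative context; the real view builds this from model data.
    context = {'title': 'Report'}
    pdf_response = render_to_pdf('manager/report.html', context)
    if pdf_response is None:
        return HttpResponse('PDF generation failed', status=500)
    return pdf_response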

OK, I found it! I checked the find method in finders.py, and it turns out that find only looks for files in the static directories and disregards the media directory. I simply deleted those lines from the callback and it works now. Here is the code:
def link_callback(uri, rel):
    sUrl = settings.STATIC_URL
    sRoot = settings.STATIC_ROOT
    mUrl = settings.MEDIA_URL
    mRoot = settings.MEDIA_ROOT
    if uri.startswith(mUrl):
        path = os.path.join(mRoot, uri.replace(mUrl, ""))
    elif uri.startswith(sUrl):
        path = os.path.join(sRoot, uri.replace(sUrl, ""))
    else:
        return uri
    if not os.path.isfile(path):
        raise Exception('media URI must start with %s or %s' % (sUrl, mUrl))
    return path
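For completeness, the callback relies on the usual static/media settings; a minimal sketch of the settings.py entries I am assuming here (the concrete paths are placeholders):

import os

# BASE_DIR is the project root defined at the top of the default settings.py
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')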

Related

Wait until the blob storage folder is created

I would like to download a picture into a blob folder, and before that I need to create the folder first. The code below is what I am doing.
The issue is that the folder needs time to be created: when execution reaches with open(abs_file_name, "wb") as f: it cannot find the folder.
I am wondering whether there is an 'await' I can use to know when the folder creation has completed, and only then do the write operation.
for index, row in data.iterrows():
    url = row['Creatives']
    file_name = url.split('/')[-1]
    r = requests.get(url)
    abs_file_name = lake_root + file_name
    dbutils.fs.mkdirs(abs_file_name)
    if r.status_code == 200:
        with open(abs_file_name, "wb") as f:
            f.write(r.content)
The final sub folder will not be created as expected when using dbutils.fs.mkdirs() on blob storage: because the full file path is passed in, the last path segment (the file name) is created as a directory, not as a file. Look at the following demonstration:
dbutils.fs.mkdirs('/mnt/repro/s1/s2/s3.csv')
When I then try to open this path as a file, the error says that it is a directory.
This is likely the issue with your code. So, try using the following code instead:
for index, row in data.iterrows():
    url = row['Creatives']
    file_name = url.split('/')[-1]
    r = requests.get(url)
    abs_file_name = lake_root + 'fail'  # creates the fake directory (to counter the problem we are facing above)
    dbutils.fs.mkdirs(abs_file_name)
    if r.status_code == 200:
        with open(lake_root + file_name, "wb") as f:
            f.write(r.content)
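Alternatively, you could create the target folder once before the loop instead of calling mkdirs with a dummy name on every iteration. A minimal sketch, under the same assumption as the code above that lake_root points to a mounted path you can open() directly:

import requests

dbutils.fs.mkdirs(lake_root)  # create the target folder once, up front
for index, row in data.iterrows():
    url = row['Creatives']
    file_name = url.split('/')[-1]
    r = requests.get(url)
    if r.status_code == 200:
        with open(lake_root + file_name, "wb") as f:
            f.write(r.content)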

Changes in the Lua language cause an error in an AI script

When I run the script in the game, I get an error message like this:
.\AI\haick.lua:104: bad argument #1 to 'find' (string expected, got nill)
local haick = {}
haick.type = type
haick.tostring = tostring
haick.require = require
haick.error = error
haick.getmetatable = getmetatable
haick.setmetatable = setmetatable
haick.ipairs = ipairs
haick.rawset = rawset
haick.pcall = pcall
haick.len = string.len
haick.sub = string.sub
haick.find = string.find
haick.seed = math.randomseed
haick.max = math.max
haick.abs = math.abs
haick.open = io.open
haick.rename = os.rename
haick.remove = os.remove
haick.date = os.date
haick.exit = os.exit
haick.time = GetTick
haick.actors = GetActors
haick.var = GetV
--> General > Seeding Random:
haick.seed(haick.time())
--> General > Finding Script Location:
local scriptLocation = haick.sub(_REQUIREDNAME, 1, haick.find(_REQUIREDNAME,'/[^\/:*?"<>|]+$'))
The last line above (line 104 in the file) causes the error and I don't know how to fix it.
There are links to .lua files below:
https://drive.google.com/file/d/1F90v-h4VjDb0rZUCUETY9684PPGw7IVG/view?usp=sharing
https://drive.google.com/file/d/1fi_wmM3rg7Ov33yM1uo7F_7b-bMPI-Ye/view?usp=sharing
Help, pls!
When you call a function in Lua, you are expected to pass valid arguments to it.
To use a variable, you must first define it; _REQUIREDNAME is not defined here, so the haick.lua file is incomplete. The fault lies with the author of the file.
Lua has a very useful reference manual you can consult if you need help.

nuitka fails to generate executable due to "The filename or extension is too long"

I've been trying to compile/generate a standalone executable (.exe) with nuitka but it fails every time with the message:
Nuitka:INFO:Total memory usage before running scons: 2.72 GB (2920177664 bytes):
scons: *** [main_executable.dist\main_executable.exe] The filename or extension is too long
I'm new to programming but I think I've tried just about everything. I moved my *.py files to the directory C:\main to shorten the path, to no avail. I've renamed the file so that it produces "main.exe" instead of "main_executable.exe", also to no avail.
My python is installed in here:
'C:\users\test\Anaconda3...'
I came across the function below to shorten the path, but I have no idea how to implement it (taken from http://code.activestate.com/recipes/286179-getshortpathname/).
Could you kindly help? Thanks.
def getShortPathName(filepath):
    "Converts the given path into 8.3 (DOS) form equivalent."
    import win32api, os
    if filepath[-1] == "\\":
        filepath = filepath[:-1]
    tokens = os.path.normpath(filepath).split("\\")
    if len(tokens) == 1:
        return filepath
    ShortPath = tokens[0]
    for token in tokens[1:]:
        PartPath = "\\".join([ShortPath, token])
        Found = win32api.FindFiles(PartPath)
        if Found == []:
            raise WindowsError('The system cannot find the path specified: "%s"' % PartPath)
        else:
            if Found[0][9] == "":
                ShortToken = token
            else:
                ShortToken = Found[0][9]
            ShortPath = ShortPath + "\\" + ShortToken
    return ShortPath
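For what it's worth, pywin32 also exposes the underlying Windows API call directly, which may be simpler than the recipe above; a minimal sketch (the path is just an example, and 8.3 short names must be enabled on the volume for this to return a shortened form):

import win32api

short = win32api.GetShortPathName(r'C:\users\test\Anaconda3')
print(short)  # e.g. something like C:\users\test\ANACON~1, depending on the volume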

Combining two TTS outputs into a single mp3 file is not working

I want to combine two requests to the Google Cloud text-to-speech API into a single mp3 output. The reason I need to combine two requests is that the output should contain two different languages.
The code below works fine for many language pair combinations, but unfortunately not for all. If I request e.g. a sentence in English and one in German and combine them, everything works. If I request one in English and one in Japanese, I can't combine the two files into a single output. The output only contains the first sentence and, instead of the second sentence, it outputs silence.
I have now tried multiple ways to combine the two outputs, but the result stays the same. The code below should show the issue.
Please run the code first with:
python synthesize_bug.py --t1 'Hallo' --code1 de-De --t2 'August' --code2 de-De
This works perfectly.
python synthesize_bug.py --t1 'Hallo' --code1 de-De --t2 'こんにちは' --code2 ja-JP
This doesn't work. The single files are OK, but the combined file contains silence instead of the Japanese part.
Also, if used with two Japanese sentences, everything works.
I already filed a bug report with Google with no response yet, but maybe it's just me doing something wrong here with encoding assumptions. I hope someone has an idea.
#!/usr/bin/env python
import argparse


# [START tts_synthesize_text_file]
def synthesize_text_file(text1, text2, code1, code2):
    """Synthesizes speech from the input file of text."""
    from apiclient.discovery import build
    import base64

    service = build('texttospeech', 'v1beta1')
    collection = service.text()

    data1 = {}
    data1['input'] = {}
    data1['input']['ssml'] = '<speak><break time="2s"/></speak>'
    data1['voice'] = {}
    data1['voice']['ssmlGender'] = 'FEMALE'
    data1['voice']['languageCode'] = code1
    data1['audioConfig'] = {}
    data1['audioConfig']['speakingRate'] = 0.8
    data1['audioConfig']['audioEncoding'] = 'MP3'
    request = collection.synthesize(body=data1)
    response = request.execute()
    audio_pause = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_pause = response['audioContent']

    ssmlLine = '<speak>' + text1 + '</speak>'
    data1 = {}
    data1['input'] = {}
    data1['input']['ssml'] = ssmlLine
    data1['voice'] = {}
    data1['voice']['ssmlGender'] = 'FEMALE'
    data1['voice']['languageCode'] = code1
    data1['audioConfig'] = {}
    data1['audioConfig']['speakingRate'] = 0.8
    data1['audioConfig']['audioEncoding'] = 'MP3'
    request = collection.synthesize(body=data1)
    response = request.execute()
    # The response's audio_content is binary.
    with open('output1.mp3', 'wb') as out:
        out.write(base64.b64decode(response['audioContent'].decode('UTF-8')))
        print('Audio content written to file "output1.mp3"')
    audio_text1 = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_text1 = response['audioContent']

    ssmlLine = '<speak>' + text2 + '</speak>'
    data2 = {}
    data2['input'] = {}
    data2['input']['ssml'] = ssmlLine
    data2['voice'] = {}
    data2['voice']['ssmlGender'] = 'MALE'
    data2['voice']['languageCode'] = code2  # 'ko-KR'
    data2['audioConfig'] = {}
    data2['audioConfig']['speakingRate'] = 0.8
    data2['audioConfig']['audioEncoding'] = 'MP3'
    request = collection.synthesize(body=data2)
    response = request.execute()
    # The response's audio_content is binary.
    with open('output2.mp3', 'wb') as out:
        out.write(base64.b64decode(response['audioContent'].decode('UTF-8')))
        print('Audio content written to file "output2.mp3"')
    audio_text2 = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_text2 = response['audioContent']

    result = audio_text1 + audio_pause + audio_text2
    with open('result.mp3', 'wb') as out:
        out.write(result)
        print('Audio content written to file "result.mp3"')

    raw_result = raw_text1 + raw_pause + raw_text2
    with open('raw_result.mp3', 'wb') as out:
        out.write(base64.b64decode(raw_result.decode('UTF-8')))
        print('Audio content written to file "raw_result.mp3"')
# [END tts_synthesize_text_file]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('--t1')
    parser.add_argument('--code1')
    parser.add_argument('--t2')
    parser.add_argument('--code2')
    args = parser.parse_args()
    synthesize_text_file(args.t1, args.t2, args.code1, args.code2)
You can find the answer here:
https://issuetracker.google.com/issues/120687867
Short answer: it's not clear why it is not working, but Google suggests a workaround: first write the files as .wav, combine them, and then re-encode the result to mp3.
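Along the lines of that workaround, here is a minimal sketch of combining the two clips with pydub. It assumes the two requests were made with audioEncoding set to LINEAR16 and saved as output1.wav and output2.wav (names are placeholders), and that ffmpeg is installed for the mp3 export:

from pydub import AudioSegment

part1 = AudioSegment.from_wav('output1.wav')
pause = AudioSegment.silent(duration=2000)  # 2 s of silence between the two languages
part2 = AudioSegment.from_wav('output2.wav')

combined = part1 + pause + part2
combined.export('result.mp3', format='mp3')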
I have managed to do this in NodeJS with just one function (I don't know how optimal it is, but at least it works). Maybe you can take inspiration from it.
I have used the memory-streams dependency from npm.
var streams = require('memory-streams');

function mergeAudios(audios) {
    var reader = new streams.ReadableStream();
    var writer = new streams.WritableStream();
    audios.forEach(element => {
        if (element instanceof streams.ReadableStream) {
            element.pipe(writer)
        } else {
            writer.write(element)
        }
    });
    reader.append(writer.toBuffer())
    return reader
}
The input parameter is a list that contains either ReadableStream objects or response.audioContent from the synthesizeSpeech operation. If an element is a ReadableStream, it uses the pipe operation; if it is audio content, it uses the write method. At the end, all content is appended into a single ReadableStream.

apollo-upload-client and graphene-django

I have a question about using apollo-upload-client and graphene-django. I've discovered that apollo-upload-client adds the operations to formData, but graphene-django only tries to read the query parameter. The question is: where and how should this be fixed?
If you're referring to the data that has a header like (when viewing the HTTP from Chrome tools):
Content-Disposition: form-data; name="operations"
and data like
{"operationName":"MyMutation","variables":{"myData"....}, "query":"mutation MyMutation"...},
the graphene-python library interprets this and assembles it into a query for you, inserting the variables and removing the file data from the query. If you are using Django, you can find all of the uploaded files in info.context.FILES when writing a mutation.
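As a rough sketch of that last point, something like the following works on newer graphene versions; the mutation name and argument are made up for illustration, and the exact syntax depends on your graphene/graphene-django versions:

import graphene

class UploadFile(graphene.Mutation):
    class Arguments:
        description = graphene.String(required=False)

    ok = graphene.Boolean()

    def mutate(self, info, description=None):
        # With graphene-django, info.context is the Django HttpRequest,
        # so multipart uploads sent by apollo-upload-client show up in info.context.FILES.
        for field_name, uploaded_file in info.context.FILES.items():
            print(field_name, uploaded_file.size)
        return UploadFile(ok=True)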
Here's my solution to support the latest apollo-upload-client (8.1). I recently had to revisit my Django code when I upgraded from apollo-upload-client 5.x to 8.x. Hope this helps.
Sorry I'm using an older graphene-django but hopefully you can update the mutation syntax to the latest.
Upload scalar type (passthrough, basically):
class Upload(Scalar):
    '''A file upload'''

    @staticmethod
    def serialize(value):
        raise Exception('File upload cannot be serialized')

    @staticmethod
    def parse_literal(node):
        raise Exception('No such thing as a file upload literal')

    @staticmethod
    def parse_value(value):
        return value
My upload mutation:
class UploadImage(relay.ClientIDMutation):
    class Input:
        image = graphene.Field(Upload, required=True)

    success = graphene.Field(graphene.Boolean)

    @classmethod
    def mutate_and_get_payload(cls, input, context, info):
        with NamedTemporaryFile(delete=False) as tmp:
            for chunk in input['image'].chunks():
                tmp.write(chunk)
            image_file = tmp.name
        # do something with image_file
        return UploadImage(success=True)
The heavy lifting happens in a custom GraphQL view. Basically it injects the file object into the appropriate places in the variables map.
def maybe_int(s):
    try:
        return int(s)
    except ValueError:
        return s


class CustomGraphqlView(GraphQLView):
    def parse_request_json(self, json_string):
        try:
            request_json = json.loads(json_string)
            if self.batch:
                assert isinstance(request_json, list), (
                    'Batch requests should receive a list, but received {}.'
                ).format(repr(request_json))
                assert len(request_json) > 0, (
                    'Received an empty list in the batch request.')
            else:
                assert isinstance(request_json, dict), (
                    'The received data is not a valid JSON query.')
            return request_json
        except AssertionError as e:
            raise HttpError(HttpResponseBadRequest(str(e)))
        except BaseException:
            logger.exception('Invalid JSON')
            raise HttpError(HttpResponseBadRequest('POST body sent invalid JSON.'))

    def parse_body(self, request):
        content_type = self.get_content_type(request)
        if content_type == 'application/graphql':
            return {'query': request.body.decode()}
        elif content_type == 'application/json':
            return self.parse_request_json(request.body.decode('utf-8'))
        elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:
            operations_json = request.POST.get('operations')
            map_json = request.POST.get('map')
            if operations_json and map_json:
                operations = self.parse_request_json(operations_json)
                map = self.parse_request_json(map_json)
                for file_id, f in request.FILES.items():
                    for name in map[file_id]:
                        segments = [maybe_int(s) for s in name.split('.')]
                        cur = operations
                        while len(segments) > 1:
                            cur = cur[segments.pop(0)]
                        cur[segments.pop(0)] = f
                logger.info('parse_body %s', operations)
                return operations
            else:
                return request.POST
        return {}
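The custom view then simply replaces the stock one in urls.py; a minimal sketch (the URL path and csrf handling are just placeholders for whatever your project already uses):

from django.urls import path
from django.views.decorators.csrf import csrf_exempt

urlpatterns = [
    path('graphql/', csrf_exempt(CustomGraphqlView.as_view(graphiql=True))),
]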