How to return in-memory PIL image from WSGI application - mod-wsgi

I've read a lot of posts like this one that detail how to dynamically return an image using WSGI. However, all the examples I've seen are opening an image in binary format, reading it and then returning that data (this works for me fine).
I'm stuck trying to achieve the same thing using an in-memory PIL image object. I don't want to save the image to a file since I already have an image in-memory.
Given this:
fd = open(aPath2Png, 'rb')
base = Image.open(fd)
... lots more image processing on base happens ...
I've tried this:
data = base.tostring()
response_headers = [('Content-type', 'image/png'), ('Content-length', len(data))]
start_response(status, response_headers)
return [data]
WSGI returns this to the client without complaint, but the client reports an error saying the returned image is invalid.
What other ways are there?

See Image.save(). It can take a file object, in which case you can write the image to a StringIO instance. Thus something like:
output = StringIO.StringIO()
base.save(output, format='PNG')
return [output.getvalue()]
You will need to check what values you can use for format.
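Putting the answer together, here is a minimal sketch of a complete WSGI app. This is an illustration, not the poster's code: the image is generated in place of the real processing, and io.BytesIO is the Python 3 counterpart of the StringIO module used above. Note also that WSGI header values must be strings, so the content length is stringified.

```python
import io
from PIL import Image

def application(environ, start_response):
    # Build an image in memory (stands in for the real processing on `base`)
    base = Image.new('RGB', (64, 64), color='red')

    # Serialize the PIL image to PNG bytes without touching the filesystem
    output = io.BytesIO()
    base.save(output, format='PNG')
    data = output.getvalue()

    # WSGI header values must be strings, so stringify the length
    response_headers = [('Content-Type', 'image/png'),
                        ('Content-Length', str(len(data)))]
    start_response('200 OK', response_headers)
    return [data]
```

The key difference from tostring() is that save() writes a complete, encoded PNG file (header, chunks, checksums) rather than raw pixel data.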

Related

Reading *.cdpg file with python without knowing structure

I am trying to use Python to read a .cdpg file. It was generated by LabVIEW code. I do not have access to any information about the structure of the file. Using another post I have had some success, but the numbers are not making sense. I do not know if my code is wrong or if my interpretation of the data is wrong.
The code I am using is:
import struct

with open(file, mode='rb') as f:  # b is important -> binary
    fileContent = f.read()

ints = struct.unpack("i" * ((len(fileContent) - 24) // 4), fileContent[20:-4])
print(ints)
The file is located here. Any guidance would be greatly appreciated.
Thank you,
T
According to the documentation here: https://www.ni.com/pl-pl/support/documentation/supplemental/12/logging-data-with-national-instruments-citadel.html

The .cdpg files contain trace data. Citadel stores data in a compressed format; therefore, you cannot read and extract data from these files directly. You must use the Citadel API in the DSC Module or the Historical Data Viewer to access trace data. Refer to the Citadel Operations section for more information about retrieving data from a Citadel database.
.cdpg is a closed format containing compressed data. You won't be able to interpret it properly without knowing the file format structure. You can read the raw binary content, and that is what your example Python code is actually doing.
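If you still want to poke at the raw bytes, a small exploratory sketch (a hypothetical helper, not part of the original code) can dump the head of the file as hex so you can look for magic numbers or readable strings. It will not decode Citadel's compressed payload:

```python
def hexdump_head(path, n=64):
    """Print the first n bytes of a file as hex and ASCII, 16 per row."""
    with open(path, 'rb') as f:
        data = f.read(n)
    for offset in range(0, len(data), 16):
        chunk = data[offset:offset + 16]
        hex_part = ' '.join('%02x' % b for b in chunk)
        ascii_part = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
        print('%08x  %-47s  %s' % (offset, hex_part, ascii_part))
```

This is how you would typically start reverse-engineering an unknown binary format: look for a recognizable signature in the first few bytes before guessing at struct layouts.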

OutOfMemory on custom extractor

I have stitched a lot of small XML files into one file, and then made a custom extractor to return rows with one byte array that corresponds to each file.
Run on remote/master:
Run it for one file (gzipped, 11 MB): it works fine.
Run it for more than one file: I get a System.OutOfMemoryException.
Run on local/master:
Run it for one or more files (gzipped, 500+ MB): works fine.
Extractor looks like this:
public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
{
    using (var stream = new StreamReader(input.BaseStream))
    {
        var xml = stream.ReadToEnd();
        // Clean stitched XML
        xml = UtilsXml.CleanXml(xml);
        // Get nodes - one for each stitched file
        var d = new XmlDocument();
        d.LoadXml(xml);
        var root = d.FirstChild;
        for (int i = 0; i < root.ChildNodes.Count; i++)
        {
            output.Set<object>(1, Encoding.ASCII.GetBytes(root.ChildNodes[i].OuterXml.ToString()));
            yield return output.AsReadOnly();
        }
        yield break;
    }
}
and error message looks like this:
==== Caught exception System.OutOfMemoryException
at System.Xml.XmlDocument.CreateTextNode(String text)
at System.Xml.XmlLoader.LoadAttributeNode()
at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
at System.Xml.XmlDocument.Load(XmlReader reader)
at System.Xml.XmlDocument.LoadXml(String xml)
at Microsoft.Analytics.Tools.Formats.Text.XmlByteArrayRowExtractor.<Extract>d__0.MoveNext()
at ScopeEngine.SqlIpExtractor<ScopeEngine::GZipInput,Extract_0_Data0>.GetNextRow(SqlIpExtractor<ScopeEngine::GZipInput\,Extract_0_Data0>* , Extract_0_Data0* output) in d:\data\ccs\jobs\bc367467-ef86-43d2-a937-46ba2d4cc524_v0\sqlmanaged.h:line 1924
So what am I doing wrong? And how do I debug this on remote?
Thanks!
Unfortunately local run does not enforce memory allocations, so you would have to check memory in local vertex debug yourself.
Looking at your code above, I see that you are loading XML documents into a DOM. Please note that an XML DOM can explode the data size from the string representation by up to a factor of 10 or more (I have seen 2 to 12 in my time as the resident SQL XML guru).
Each UDO today only gets 1/2 GB of RAM to play with. So what I assume is that your XML DOM document(s) start going beyond that.
The recommendation normally is that you use the XMLReader interface (there is a reader extractor in the samples on http://usql.io as well) and scan through the document(s) to find the information you are looking for.
If your documents are always small enough (e.g., <20MB), you may want to make sure that you release the memory of the other documents and operate on one document at a time.
We do have plans to allow you to annotate your UDO with memory needs, but that is still a bit out.
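The streaming approach recommended above can be illustrated with a pull parser. This sketch uses Python's xml.etree.iterparse purely to show the pattern (the C# XmlReader API works the same way: visit one node at a time instead of materializing the whole DOM); the function name and tag are hypothetical:

```python
import io
import xml.etree.ElementTree as ET

def iter_children(xml_bytes, tag):
    """Yield serialized child elements one at a time without building a full DOM."""
    for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=('end',)):
        if elem.tag == tag:
            yield ET.tostring(elem)
            elem.clear()  # release memory held by the processed subtree
```

Because each subtree is cleared after it is emitted, peak memory is bounded by the largest single child document rather than the whole stitched file.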

Using Leigh version of S3Wrapper.cfc Can't get past Init

I am new to S3 and need to use it for image storage. I found a half dozen versions of an S3 wrapper for CF, but it appears that the only one set up for AWS Signature v4 is the one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it in the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error :
So I changed the line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and changing things is a bad idea, since I have lurked here for years and know Leigh's code is solid.
Does anyone know if there are any examples of how to use this anywhere? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE:
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
s3 = createObject('component','com.S3v4').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
bucket = "imgbkt.domain.com";
obj = "fake.ping";
region = "s3-us-west-1";
test = s3.getObject(bucket,obj,region);
writeDump(test);
test2 = s3.getObjectLink(bucket,obj,region);
writeDump(test2);
writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj or region I get :
JIC I did go to AWS and get new keys:
Leigh if you are still around or anyone how has used one of the s3Wrappers any suggestions or guidance?
UPDATE #2:
Even after Alex's help I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never does download an object. I thought I would try the putObject method:
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there was any additional information, and I received this:
I did find this article https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html but it is pretty old, and since S3 specifically suggests using dots in bucket names I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't seem to figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns a HTTP URL for the file fake.ping that is found looking in the bucket imgbkt.domain.com of region s3-us-west-1. This link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests the URL using HTTP GET. The response is then saved to the directory of the S3v4.cfc with the filename fake.ping by default. Finally, the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely when the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary read the whole file into a buffer. Use fileOpen to stream the content.)
Does that help you?

Using a text file as Spark streaming source for testing purpose

I want to write a test for my Spark Streaming application that consumes a Flume source.
http://mkuthan.github.io/blog/2015/03/01/spark-unit-testing/ suggests using ManualClock, but for the moment reading a file and verifying outputs would be enough for me.
So I wish to use :
JavaStreamingContext streamingContext = ...
JavaDStream<String> stream = streamingContext.textFileStream(dataDirectory);
stream.print();
streamingContext.awaitTermination();
streamingContext.start();
Unfortunately it does not print anything.
I tried:
dataDirectory = "hdfs://node:port/absolute/path/on/hdfs/"
dataDirectory = "file://C:\\absolute\\path\\on\\windows\\"
adding the text file in the directory BEFORE the program begins
adding the text file in the directory WHILE the program run
Nothing works.
Any suggestion to read from text file?
Thanks,
Martin
The order of start and await is indeed inverted.
In addition to that, the easiest way to pass data to your Spark Streaming application for testing is a QueueDStream. It's a mutable queue of RDDs of arbitrary data. This means that you could create the data programmatically or load it from disk into an RDD and pass that to your Spark Streaming code.
E.g., to avoid the timing issues faced with the file consumer, you could try this:
import scala.collection.mutable.Queue
import org.apache.spark.rdd.RDD

val rdd = sparkContext.textFile(...)
val rddQueue: Queue[RDD[String]] = Queue()
rddQueue += rdd
val dstream = streamingContext.queueStream(rddQueue)
doMyStuffWithDstream(dstream)
streamingContext.start()
streamingContext.awaitTermination()
I am so stupid, I inverted the calls to start() and awaitTermination().
If you want to do the same, you should read from HDFS and add the file WHILE the program runs.

Encode PNG for delivery via JSON to Objective C/iOS device suddenly not working

SOLVED
This was an exercise in futility.
24 hours to discover it was a comma in the wrong place.
@sysMessage.messagePictureData = Base64.strict_encode64(open(imagestring).read),
@sysMessage.messagePuzzleNumber = "#{sendPiece}"
Two things concatenated and hours lost. Thanks to those who read this.
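The underlying Ruby pitfall is easy to reproduce: an assignment whose right-hand side ends in a comma swallows the next expression into an array. A minimal, hypothetical reproduction (this is exactly why the JSON contained the base64 string and "30" bundled together in brackets instead of a bare string):

```ruby
# The trailing comma makes the right-hand side an array literal,
# so `a` receives BOTH values while `b` still gets "second".
a = "first",
b = "second"

puts a.inspect  # => ["first", "second"]
puts b.inspect  # => "second"
```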
UPDATE
I now seem to have the files being read and encoded "properly", in that I'm seeing what I'm expecting in terms of encoded data. But I have no idea why the JSON is coming down with the data enclosed in square brackets and quotes (and with another value added!).
messagePictureData = "[\"iVBORw0KGgoAAAANSUhEUgAAANgAAADYCAYAAACJIC3tAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAMSZJREFUeNrsnQmcFOWZ/9/qqr67p+e+mJPhmINBjhkYAQFFUMEo4JGN2Rz6j8lqotnsJqtxs6sxnzWa7Gpi1jMbNcb7wFsEuQTkZm ... [LOTS OF DATA] ... JZlUmyRSYkV+5NtGf5B2KdORE1ZLMK1kCCkD5oF1I8Pl+gu22gftAT7u4/1+AiwEhfhIPFXJaMLpvZFUlRMzsUNWIyMqs/QcwcTDKlBdsNrcHOwSZ13bQd2rn49yfASGdVDzofwWIHPsXy+PDylRK1g5FIBBiJRICRSKRPyME0+klIJHIwEmlM6P8LMACzov3UuDGSEwAAAABJRU5ErkJggg==\", \"30\"]";
How do I get Objective C to just read the first part in the brackets? I'm drawing a complete blank!
ORIGINAL POST
I'm sure there are better ways, but I have a Rails Server which was encoding PNG image files from AWS and, using RABL, encoding them into JSON (with a lot of other data) which was being delivered to my iOS device and being saved as a photo in Core Data.
It was all working, but now it's not, and I am wondering if someone could take a look and see where it's going wrong. Sadly, I did a bundle update on my computer, which updated a LOT of gems and Rails itself. Sigh. I consequently have no idea whether the problem is with one of the gems or with my code (and I'm not a highly proficient debugger, to sort it out).
Rails Controller code:
The code looks to see if there is a photo held on AWS. If there is, it looks for the URL, or provides a placeholder image URL if there isn't an image. I then need it to read the file and store it in the database for delivery.
if (@photos.approvedMainPhoto == true)
  imagestring = eval "@photos.photoImage#{sendPiece}.url"
else
  imagestring = "#{::Rails.root}/public/images/placeholders/smallimageplaceholder.png"
end

@sysMessage = Message.new
...
@sysMessage.messagePictureData = File.open(imagestring, "r").read,
...
if (@sysMessage.save)
  [true, @sysMessage.id]
else
  [false, 0]
end
RABL Code
I've stripped some of the code out, but the logic is: if there is picture data (and there isn't always going to be a photo), encode what is held in the database using Base64.strict_encode64 for transport.
object false
node(:message) { @code }
node(:number) { @nummessages }

child @messages, :object_root => false do
  node(:messagePictureData, :if => lambda { |m| m.messageText == "" }) do |m|
    Base64.strict_encode64(m.messagePictureData)
  end
end
This seems to be delivering something. When I ask for the output in my Xcode console ... I get something similar to:
The JSON encoded data is saying:
messagePictureData = "WyJceDg5UE5HXHJcblx1MDAxQVxuXHUwMDAwXHUwMDAwXHUwMDAwXHJJSERSXHUwMDAwXHUwMDAwXHUwMDAwXHhEOFx1MDAwMFx1MDAwMFx1MDAwMFx4RDhcYlx1MDAwNlx1MDAwMFx1MDAwMFx1MDAwMFx4ODkgLVx4RURcdTAwMDBcdTAwMDBcdTAwMDBcdTAwMTl0RVh0U29mdHdhcmVcdTAwMDBBZG9iZSBJbWFnZVJlYWR5cVx4QzllPFx1MDAwMFx1MDAwMDEmSURBVHhceERBXHhFQ1x4OURcdFx4OUNcdTAwMTRceEU1XHg5OVx4RkZceERG6qq .... [LOTS MORE] ... HhDMWJcYT5ceEM1XHhGMlx4RjhceEYwXHhGMlx4OTVcdTAwMTJceEI1XHg4M1x4OTFIXHUwMDA0XHUwMDE4XHg4OURceDgwXHg5MUhceEE0T1x4QzhceEMxNFx4RkFJSCRyMFx1MDAxMmlMXHhFOFx4RkZcdjBcdTAwMDBceEIzXHhBMlx4RkTUuDFceDkyXHUwMDEzXHUwMDAwXHUwMDAwXHUwMDAwXHUwMDAwSUVORFx4QUVCYFx4ODIiLCAiMzAiXQ==";
When I put it through the following code in Objective C:
Objective C Code:
NSLog(@"I have the message Picture Data as: %@", message[@"messagePictureData"]);
NSData *photo = [[NSData alloc] initWithBase64EncodedString:message[@"messagePictureData"] options:NSDataBase64DecodingIgnoreUnknownCharacters];
NSLog(@"I have the photo data as: %@", photo);
I get the following output in the Console:
I have the photo data as: <5b225c78 3839504e 475c725c 6e5c7530 3031415c 6e5c7530 3030305c 75303030 305c7530 3030305c 72494844 ... [LOTS MORE] ... 3839445c 7838305c 78393148 5c784134 4f5c7843 385c7843 31345c78 46414948 2472305c 75303031 32694c5c 7845385c 7846465c 76305c75 30303030 5c784233 5c784132 5c784644 d4b8315c 7839325c 75303031 335c7530 3030305c 75303030 305c7530 3030305c 75303030 3049454e 445c7841 4542605c 78383222 2c202233 30225d>
It says that it's saving the image in Core Data. However, what I'm getting are transparent squares and not photos.
And, every so often, when it's trying to get a slice from AWS, I get the following log error ... which also means nothing to me!
WARN: No such file or directory @ rb_sysopen - https://XXXXXXXXXX.s3-ap-southeast-1.amazonaws.com/uploads/photo/photoImage20/77/photoImage20.jpg ... [lots more]
When I place the complete URL pointing to Amazon into Safari, it shows the photo perfectly. Sigh.
Any help would be gratefully received as, once again, I'm stumped.