I have a program which returns a stream of np.uint8 arrays. I would now like to broadcast these to a website hosted by that computer.
I planned to do this by adapting the code in this documentation, replacing the line camera.start_recording(output, format='mjpeg') with output.write(<numpy_array_but_jpeg>). The documentation for start_recording states that if the output object has a write() method, the data will be written to it in the requested format.
I can find lots of material online that explains how to save an np.uint8 array as a JPEG, but in my case I want to write that data to a buffer in memory; I don't want to have to save the image to a file and then read that file back into the buffer.
Unfortunately, changing the output format of the np.uint8 earlier in the stream is not an option.
Thanks for any assistance. For simplicity I have copied the important bits of the code below:
import io
import picamera
from threading import Condition

class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame: copy the existing buffer's content and notify all
            # clients that it's available
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
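The write() contract above can be exercised without a camera. Here is a minimal, hedged sketch (standard library only; the frame payloads are synthetic placeholders, not real JPEG data - a real producer would obtain them from an encoder such as cv2.imencode('.jpg', arr)[1].tobytes()):

```python
import io
from threading import Condition

# Minimal stand-in for the StreamingOutput above (assumption: each call to
# write() delivers one complete frame, and every JPEG frame starts with the
# start-of-image marker b'\xff\xd8').
class StreamingOutput:
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

output = StreamingOutput()
# Two synthetic "JPEG" frames; real code would encode numpy arrays instead.
output.write(b'\xff\xd8frame-one')
output.write(b'\xff\xd8frame-two')
print(output.frame)
```

Note that a frame only becomes visible in self.frame once the next frame's start-of-image marker arrives; that is exactly how this recipe delimits MJPEG frames.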
OpenCV has a function to do this:
retval, buf = cv.imencode(ext, img[, params])
It lets you write an array to a memory buffer.
This example shows a basic implementation of what I was talking about:

import cv2 as cv
import numpy as np

# Encode the image; imencode returns (retval, encoded_array)
img_encode = cv.imencode('.png', img)[1]
# Converting the encoded image into a numpy array
data_encode = np.array(img_encode)
# Converting the array to bytes
byte_encode = data_encode.tobytes()
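As a sanity check of the in-memory path, here is a small sketch (assuming numpy is available; the "encoded" array below is synthetic, standing in for real cv.imencode output, so no encoder is actually invoked):

```python
import io
import numpy as np

# Hypothetical 4x4 single-channel uint8 "image" standing in for a real frame.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Stand-in for the encoded buffer (imencode returns a 1-D uint8 array).
data_encode = img.reshape(-1)
byte_encode = data_encode.tobytes()

# Write the bytes into an in-memory buffer instead of a file on disk.
mem = io.BytesIO()
mem.write(byte_encode)

# The same bytes can be recovered from the buffer without touching disk.
restored = np.frombuffer(mem.getvalue(), dtype=np.uint8)
print(np.array_equal(restored, data_encode))
```

The round trip never creates a file, which is the whole point: the encoded bytes go straight from memory into whatever consumer (here a BytesIO, in the question a StreamingOutput) needs them.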
If I want to serialize an array in Godot, I can do this:
var a1 = [1 ,2 ,3]
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_var(a1, true)
file.close()
# load
file.open("a.sav", File.READ)
var a2 = file.get_var(true)
file.close()
print(a1)
print(a2)
output (it works as expected):
[1, 2, 3]
[1, 2, 3]
But if I want to serialize an object, like this class in A.gd:
class_name A
var v = 0
Same test, with an instance of A:
# instance
var a1 = A.new()
a1.v = 10
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_var(a1, true)
file.close()
# load
file.open("a.sav", File.READ)
var a2 = file.get_var(true)
file.close()
print(a1.v)
print(a2.v)
output:
10
error (on line print(a2.v)):
Invalid get index 'v' (on base: 'previously freed instance').
From the online docs:
void store_var(value: Variant, full_objects: bool = false)
Stores any Variant value in the file. If full_objects is true, encoding objects is allowed (and can potentially include code).
Variant get_var(allow_objects: bool = false) const
Returns the next Variant value from the file. If allow_objects is true, decoding objects is allowed.
Warning: Deserialized objects can contain code which gets executed. Do not use this option if the serialized object comes from untrusted sources to avoid potential security threats such as remote code execution.
Isn't it supposed to work with full_objects=true? Otherwise, what's the purpose of this parameter?
My classes contain many arrays of arrays and other data. I guess Godot handles this kind of basic serialization functionality (of course, devs will often have to save complex data at some point), so maybe I'm just not doing what I'm supposed to do.
Any idea?
For full_objects to work, your custom type must extend Object (if you don't specify what your class extends, it extends Reference, which in turn extends Object). The serialization will then be based on exported variables (or whatever you declare in _get_property_list). By the way, this can, and in your case likely does, serialize the whole script of your custom type. You can verify this by looking at the saved file.
However, full_objects is not how you serialize a type that extends Resource. Resource serialization works with ResourceSaver and ResourceLoader, and also with load and preload. And yes, this is how you would store or load scenes and scripts (and textures, and meshes, and so on…).
I believe the simpler solution for your code is to use the functions str2var and var2str. These will save you a lot of headache:
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_pascal_string(var2str(a1))
file.close()
# load
file.open("a.sav", File.READ)
var a2 = str2var(file.get_pascal_string())
file.close()
print(a1.v)
print(a2.v)
That solution will work regardless of what it is you are storing.
Perhaps this is a solution (I haven't tested it):
# load
file.open("a.sav", File.READ)
var a2 = A.new()
a2 = file.get_var(true)
file.close()
print(a1.v)
print(a2.v)
Is it possible just to find out the locations of PDF pages in a byte array?
At the moment I parse the full PDF in order to get the page bytes:
public static List<byte[]> splitPdf(byte[] pdfDocument) throws Exception {
    InputStream inputStream = new ByteArrayInputStream(pdfDocument);
    PDDocument document = PDDocument.load(inputStream);
    Splitter splitter = new Splitter();
    List<PDDocument> pdDocs = splitter.split(document);
    inputStream.close();
    List<byte[]> pages = pdDocs.stream()
            .map(PDFUtils::getResult)
            .collect(Collectors.toList());
    return pages;
}

private static byte[] getResult(PDDocument pd) {
    try {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        pd.save(byteArrayOutputStream);
        return byteArrayOutputStream.toByteArray();
    } catch (IOException e) {
        // pd.save throws a checked IOException, which a method reference can't propagate
        throw new UncheckedIOException(e);
    }
}
My code works very well, but I create an additional List<byte[]> to hold the page bytes. I would like to have just the byte locations - if I knew the byte indexes of a page (page start location, page end location), I could extract it from the main byte array.
So maybe I can find this information in the PDF header or somewhere similar...
Right now I'm trying to optimize memory, because I parse hundreds of documents in parallel, so I don't want to create duplicate arrays.
If I know byte indexes of page (page start location, page end location) I'll extract this from main byte array.
As @Amedee already hinted at in a comment, there is not simply a section of the pdf for each page respectively.
A pdf is constructed from multiple objects (content streams, font resources, image resources,...) and two pages may use the same objects (e.g. use the same fonts or images). Furthermore, a pdf may contain unused objects.
So already the sum of the sizes of your partial pdfs may be smaller than, greater than, or even equal to the size of the full pdf.
Is it possible to create a stream from a com.fasterxml.jackson.databind.node.ArrayNode?
I tried:
ArrayNode files = (ArrayNode) json.get("files");
Stream<JsonNode> stream = Stream.of(files);
But that actually gives a stream of one element: the initial ArrayNode object itself. The correct result should be a Stream<JsonNode> of the array's elements - can I achieve that?
ArrayNode implements Iterable. Iterable has a spliterator() method. You can create a sequential Stream from a Spliterator using
ArrayNode arrayNode = (ArrayNode) json.get("xyz");
Stream<JsonNode> stream = StreamSupport.stream(arrayNode.spliterator(), false);
The ArrayNode class provides random access: you can get its size() and fetch any element by index (using get(index)). That is all you need to create a good stream:
Stream<JsonNode> nodes = IntStream.range(0, files.size()).mapToObj(files::get);
Note that this solution is better than using the default spliterator (as suggested by other answerers), as it can split well and reports the size properly. Even if you don't care about parallel processing, some operations like toArray() will work more efficiently, as knowing the size in advance helps to allocate an array of the proper size.
ArrayNode#elements returns an Iterator over its elements; you can use that to create a Stream (by leveraging StreamSupport). StreamSupport requires a Spliterator, and to create a Spliterator from an Iterator you can use the Spliterators class.
ArrayNode files = (ArrayNode) json.get("files");
Stream<JsonNode> elementStream = StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(files.elements(), Spliterator.ORDERED),
        false);
cyclops-streams has a StreamUtils class with a static method that makes this a bit cleaner (I am the author).
ArrayNode files = (ArrayNode) json.get("files");
Stream<JsonNode> elementStream = StreamUtils.stream(files.elements());
Taking into account @JB Nizet's answer that ArrayNode is an Iterable, with StreamUtils you can pass in the ArrayNode and get the Stream back directly:
Stream<JsonNode> elementStream = StreamUtils.stream((ArrayNode) json.get("files"));
I have some code that reads as follows:
int SaveTemplateToFile(char* Name, FTR_DATA Template)
{
    //NSLog(@"trying to save template to file");
    FILE *fp;
    fp = fopen(Name, "w+b");
    if (fp == NULL) return FALSE;
    int Result = fwrite(Template.pData, 1, Template.dwSize, fp) == Template.dwSize ? TRUE : FALSE;
    fclose(fp);
    return Result;
}
I understand that this will write out the data retrieved from Template.pData into a file named whatever is stored in the Name variable.
Task/Question:
I am simply trying to store this data in a variable so that I can send it to my webserver database and store it in a blob for retrieval at a later time. This will also let me get rid of the fwrite function, which I won't need since I'm storing everything on the webserver instead of storing it locally.
I am currently having trouble reading this data: I am getting a crash when trying to output this data array. The data pointer in the structure uses the type DGTVOID, where DGTVOID is defined as typedef void DGTVOID.
How can I correctly read the contents of the template? I figured that if I understood what datatype it is, I would be able to retrieve the data correctly.
Update 1
Thanks to Paulw11, I am able to access a very small portion of the data using %s instead of %@, which originally led to a crash. What is printed now is a few funky upside-down question marks.
Is there a way to output the contents of this data stream from Template.pData without having to first save the data to disk as a file?
I think the first thing you should do is convert your buffer to an NSData instance -
NSData *template = [NSData dataWithBytes:Template.pData length:Template.dwSize];
Once you have that then you can Base64 encode the data for transmission over a web request -
NSString *templateStr = [template base64EncodedStringWithOptions:0];
If you are targeting a version earlier than iOS7 then you can use the deprecated method
NSString *templateStr = [template base64Encoding];
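For readers outside the Objective-C world, the same encode-for-transport round trip can be sketched in Python (the template bytes below are hypothetical placeholders, not real template data):

```python
import base64

# Hypothetical template bytes standing in for Template.pData.
template_bytes = bytes([0xFF, 0x00, 0x7F, 0x10])

# Encode for transmission in a web request (text-safe)...
encoded = base64.b64encode(template_bytes).decode('ascii')

# ...and decode on the server to recover the exact original bytes
# before storing them in the blob column.
decoded = base64.b64decode(encoded)
print(decoded == template_bytes)
```

Base64 is the usual choice here because raw binary cannot travel safely inside a text field of an HTTP request, and the decode side recovers the original bytes exactly.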
I am using the following piece of code to load images as thumbnails into a FlowLayoutPanel control. Unfortunately I get an OutOfMemory exception.
As you may have already guessed, the memory leak occurs at the line
Pedit.Image = System.Drawing.Image.FromStream(fs)
So how could I optimize the following code?
Private Sub LoadImagesCommon(ByVal FlowPanel As FlowLayoutPanel, ByVal fi As FileInfo)
    Pedit = New DevExpress.XtraEditors.PictureEdit
    Pedit.Width = txtIconsWidth.EditValue
    Pedit.Height = Pedit.Width / (4 / 3)

    Dim fs As System.IO.FileStream
    fs = New System.IO.FileStream(fi.FullName, IO.FileMode.Open, IO.FileAccess.Read)
    Pedit.Image = System.Drawing.Image.FromStream(fs)
    fs.Close()
    fs.Dispose()

    Pedit.Properties.SizeMode = DevExpress.XtraEditors.Controls.PictureSizeMode.Zoom

    If FlowPanel Is flowR Then
        AddHandler Pedit.MouseClick, AddressOf Pedit_MouseClick
        AddHandler Pedit.MouseEnter, AddressOf Pedit_MouseEnter
        AddHandler Pedit.MouseLeave, AddressOf Pedit_MouseLeave
    End If

    FlowPanel.Controls.Add(Pedit)
End Sub
Update: The problem occurs while loading a number of images (3264x2448 px at 300 dpi - each image is about 3 MB).
Documentation for Image.FromFile (which is related to your FromStream) says that it will throw OutOfMemoryException if the file is not a valid image format or if GDI+ doesn't support the pixel format. Is it possible you're trying to load an unsupported image type?
Also, documentation for Image.FromStream says that you have to keep the stream open for the lifetime of the image, so even if your code loaded the image you'd probably get an error because you're closing the file while the image is still active. See http://msdn.microsoft.com/en-us/library/93z9ee4x.aspx.
Couple of thoughts:
First off, as Jim has stated, when using Image.FromStream the stream should remain open for the lifetime of the Image, as remarked on the MSDN page. As such, I would suggest copying the contents of the file to a MemoryStream and using the latter to create the Image instance, so you can release the file handle as soon as possible.
Secondly, the images you're using are rather big (uncompressed, as they would exist in memory, Width x Height x BytesPerPixel). Assuming the context you use them in might allow for them to be smaller, consider resizing them, and potentially caching the resized versions somewhere for later use.
Lastly, don't forget to Dispose the image and the Stream when they are no longer needed.
You can solve this in a few steps:
- To get free from the file dependency, you have to copy the images by actually drawing each one to a new Bitmap; you can't just copy the reference.
- Since you want thumbnails, and your source bitmaps are rather large, combine this with shrinking the images.
I had the same problem. Jim Mischel's answer led me to discover that loading an innocent .txt file was the culprit. Here's my method in case anyone is interested:
/// <summary>
/// Loads every image from the folder specified as param.
/// </summary>
/// <param name="pDirectory">Path to the directory from which you want to load images.
/// NOTE: this method will throw exceptions if the argument causes
/// <code>Directory.GetFiles(path)</code> to throw an exception.</param>
/// <returns>An ImageList; if no files are found, it'll be empty (not null).</returns>
public static ImageList InitImageListFromDirectory(string pDirectory)
{
    ImageList imageList = new ImageList();
    foreach (string f in System.IO.Directory.GetFiles(pDirectory))
    {
        try
        {
            Image img = Image.FromFile(f);
            imageList.Images.Add(img);
        }
        catch
        {
            // OutOfMemoryException is thrown by Image.FromFile if you pass in a non-image file.
        }
    }
    return imageList;
}