How to load a CanvasSvgDocument from a file? (C++/WinRT)

In a C++/WinRT project I have a large number of small svg resources to be loaded from file. Since it would be slow to reload them all from disk at each CreateResources event from the CanvasVirtualControl, I have loaded them in advance and stored the data for each in an array. When CreateResources happens, my intent is to load a CanvasSvgDocument for each of these by using the CanvasSvgDocument method LoadFromXml(System.String). However, if I create an svg document using the resourceCreator, I get an invalid-argument crash when calling LoadFromXml(). The resourceCreator argument looks right (VS preview 6 now allows me to see local variables!) and the xml data string argument looks like valid svg data, so my best guess about the crash is that the data string is in the wrong format. The file data is UTF-8. If I convert that to a std::wstring, as I must for the LoadFromXml argument, can it still be understood as byte data?
For example, I create the std::wstring this way, given a pointer to unsigned char file data and its length in bytes:
m_data_string = std::wstring(data, data + dataLength);
When CreateResources is triggered, that data string is referenced this way:
m_svg = CanvasSvgDocument(resourceCreator);
m_svg.LoadFromXml(resourceCreator, m_data_string);
But LoadFromXml crashes with that invalid parameter error. I see that the length of the data string is correct, but of course that is the number of characters, not the actual size of the data. Could there be a conflict between the UTF-8 attribute in the svg and the fact that it is now recorded as 16-bit characters? If so, how would one load an xml document from such data?
[Update] Following the suggestion that I use winrt::to_hstring, I read the unsigned char data into a std::string,
std::string cstring = std::string("");
cstring.assign(data, data + dataLength);
Then I convert that:
m_data_string = winrt::to_hstring(cstring);
And finally try to load an svg as before:
m_svg.LoadFromXml(resourceCreator, m_data_string);
And it crashes as before. I notice that in neither case did the converted string appear in the debugger as gibberish - in both cases it read as the expected svg data. But if this hstring is wide chars, wouldn't that be a conflict with the attribute in the svg that identifies it as UTF-8?
[Update] I'm starting to wonder if anyone has ever used CanvasSvgDocument.Draw() to draw an svg loaded from a file. The files now load without crashing, without any change to their internal encoding reference. But - they won't draw. These files - 239 of them - are UTF-8, svg 1.1, and they display nicely if opened in Edge or any browser. But if I load the file data into an hstring, create a CanvasSvgDocument, and then use CanvasSvgDocument.LoadFromXml to load them, they do not draw when called by CanvasSvgDocument's draw method. Other drawing of shapes, etc. works fine during the drawing session. Here is what could be a hint: if I call GetXml() on one of these svgs after it is loaded, what is returned is just this:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"></svg>
That is, the drawing information is not there. Or is this the full extent of what GetXml() is meant to return? That wouldn't seem useful. So perhaps CanvasSvgDocument.LoadFromXml(ResourceCreator, String) doesn't actually work yet?
So I'm back to asking again: is there a way to load a functional CanvasSvgDocument from file data?

My first answer here was wrong: the fault in my code above is that LoadFromXml() is a static method and, as someone pointed out to me elsewhere, I was discarding the returned result. It should be theSvg = CanvasSvgDocument::LoadFromXml(resourceCreator, xmlString).
Having corrected that, I'm back to the problem of loading UTF-8 data through a method whose argument is a wide-character string. Changing the internal encoding reference to UTF-16 doesn't help after all. Loading the svg with CanvasSvgDocument::LoadAsync(filestream) works, but if I want to load these without re-accessing the disk I will need to find a way to make a RandomAccessStream from a buffer of bytes and then use LoadAsync. I think. Unless there is some other way to make LoadFromXml() work - at present it fails with an invalid argument error.
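Something along these lines looks like it should do that - a sketch only, which I have not verified, assuming the raw UTF-8 file bytes are in data / dataLength as above, that the Win2D C++/WinRT headers are available, and that the helper name is arbitrary:

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Storage.Streams.h>
#include <winrt/Microsoft.Graphics.Canvas.h>
#include <winrt/Microsoft.Graphics.Canvas.Svg.h>

using namespace winrt;
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::Storage::Streams;
using namespace winrt::Microsoft::Graphics::Canvas;
using namespace winrt::Microsoft::Graphics::Canvas::Svg;

// Wrap in-memory UTF-8 svg bytes in a stream and let LoadAsync parse them,
// so the disk is never touched. The byte buffer must stay alive until the
// coroutine completes.
IAsyncOperation<CanvasSvgDocument> LoadSvgFromBytesAsync(
    ICanvasResourceCreator resourceCreator,
    uint8_t const* data, uint32_t dataLength)
{
    InMemoryRandomAccessStream stream;
    DataWriter writer(stream.GetOutputStreamAt(0));
    writer.WriteBytes(array_view<uint8_t const>(data, data + dataLength));
    co_await writer.StoreAsync();
    writer.DetachStream();
    stream.Seek(0);

    // LoadAsync reads the bytes exactly as they were in the file, so the
    // svg's own UTF-8 declaration stays consistent with the actual encoding.
    co_return co_await CanvasSvgDocument::LoadAsync(resourceCreator, stream);
}

Each stored buffer could then be turned into a CanvasSvgDocument during CreateResources by co_awaiting this helper.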

Related

Removing first element of data array before passing it to Blob constructor breaks the output video

I am working on a retroactive screen capture application in JS using mediaDevices, MediaRecorder, and the Blob constructor APIs. I want to store screen output, discard stale data past a certain threshold, and then let the user capture and output the remaining data as video. To generate the files I pass an array (chunks) of stream data to the Blob constructor. The problem is that I want to remove a certain number of leading elements from the array, but when I remove the first element of the array the output videos are broken.
For starters, the app can pass data into the chunks array, make a blob, then output a video successfully. If I call chunks.shift(), the array -> blob -> mp4 chain no longer produces a functional video. Below is the function I wanted to use to trim stale data during an ongoing "monitoring" period; when toggled to "recording" instead of "monitoring", all data would be retained. But calling shift in this way breaks things.
function handleDataAvailable(e) {
  if (monitoring) {
    // only do the shifting when we are monitoring, not recording
    if (chunks.length > stalenessThreshold) {
      console.log("we are trimming data");
      chunks.shift();
    }
  }
  chunks.push(e.data);
}
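For context, the recorder is wired up roughly like this (simplified; the exact constraints, mime type, and timeslice shown here are illustrative rather than my real values):

async function startMonitoring() {
  // Each timeslice produces one dataavailable event, i.e. one element in chunks.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = handleDataAvailable;
  recorder.start(1000); // deliver a chunk roughly every 1000 ms
}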
I have tried putting the shift call elsewhere, including a single call rather than multiple calls, and the same issue results. In fact, if I add the first chunk back to the front of the array, it works again, as below:
let first = chunks.shift()
chunks.shift()
chunks.pop()
chunks.unshift(first)
So I can remove elements from the middle and end of the chunks array as long as I add that first element back in.
I've also tried passing a slice of the chunks array to the Blob constructor - same result. I'm curious whether I could trim the resulting blob instead, but I'm not well versed in slicing/working with blobs directly, and it also seems less performant than just trimming the input array. The Blob API call looks like this (I've also tried webm):
const blob = new Blob(chunks, { 'type' : 'video/mp4'})
I would like to understand this at a conceptual level; I've been reading docs and I can't make sense of it. Why is this first chunk essential? It seems the chunks array should be an array of blobs or blob-like data that could be rearranged any old way. I would also like to find a solution for my application, so that I can monitor a set timeframe of recent screen activity. First Stack Overflow question, hopefully it meets guidelines - thanks for reading, happy to provide more context and snippets.

What structure should I follow to log exceptions to a text file in JSON format? (Objective-C)

I am trying to log exceptions to a text file in JSON format. The whole file is a single JSON object (an array of a custom model class).
It works fine the first time, but on each subsequent write I have to read the file, add the new object to the array, delete the previous file, and save it again - obviously not a good way to log errors.
Problems
If many errors get logged at the same point in time and each one reads the file, appends to the array, and writes it back, some errors are certain to be lost.
It consumes and wastes too much CPU and RAM.
Please suggest a way to append new objects to the existing file without overwriting it.
Many thanks for any help you may offer.
Per Apple Documentation, you can open a file (output stream) in append mode.
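For example, such a stream can be opened in append mode like this (assuming logFilePath holds the path to your log file):
NSOutputStream *outStream = [NSOutputStream outputStreamToFileAtPath:logFilePath append:YES];
[outStream open];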
Given you hold a reference to the file output stream outStream, you can use the method below to append data:
[NSJSONSerialization writeJSONObject:myNewObject toStream:outStream options:NSJSONWritingPrettyPrinted error:&error];
However, I would personally use the approach you are already taking - read the data into a mutable object, modify it, and then use NSJSONSerialization to convert it back to data again. Finally, save that data to disk, replacing the original. This keeps the JSON structure intact, whereas appending objects to a file that is itself one JSON array does not.

VTK: The data array in the element may be too short

I'm trying to visualize some data in the vtr format. For this purpose I've created a couple of npy files with this library, then converted these files with PyEVTK into the vtr format (as in the lowlevel.py example). But when I try to visualize this data with ParaView, an error appears:
ERROR: In /var/tmp/portage/sci-visualization/paraview-4.0.1-r1/work/ParaView-v4.0.1-source/VTK/IO/XML/vtkXMLDataReader.cxx, line 510
vtkXMLRectilinearGridReader (0x36bb080): Cannot read point data array "Pressure" from PointData in piece 0. The data array in the element may be too short.
Can anybody explain what exactly this error message means, and what's wrong with my visualization data?
Solved:
I made a stupid mistake - the data size declared in the header was different from the actual data size, and that was the cause of the error.
This error may be coming from the XML header declaration, which may not contain everything that is needed. You may be missing the header_type attribute, which declares the integer type used for the size prefix written before each block of data.
<VTKFile type="UnstructuredGrid" version="0.1" byte_order="BigEndian" header_type="UInt64">

VB.NET: Insert an image file into a PictureBox and store it in a database (.mdf)

I have a table which has the image data type in one of its columns.
I tried this code;
first, is this the right code to insert one row into the table?
Me.DatasiswaTableAdapter.Insert(NISTextBox.Text, NamaTextBox.Text, KelasTextBox.Text, JurusanTextBox.Text, Jenis_KelaminComboBox.Text, Tanggal_LahirDateTimePicker.ToString, AlamatRichTextBox.Text, FotoPictureBox.Image)
If yes, then I need to know why the image type is changed to bytes, and how to insert the image from the PictureBox if what the code expects is a byte array.
If no, please let me know the right code.
Sorry for my bad English :)
Thanks
The net-informations link you posted should be very helpful if you study it.
It means: a) the column you want to store the image in must be defined as image in SQL Server (if you are using a different DB such as Access, depending on the version you might need to define it as Object or something similar); b) in the example, the image is converted to a stream (using Image.Save) and then the stream to a byte array for storing in the DB, as sketched below.
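For example, the conversion might look something like this (a sketch; it assumes you want to store the image as PNG - use whatever format suits your data):
Dim imageBytes As Byte()
Using ms As New System.IO.MemoryStream()
    ' Save the PictureBox image into the memory stream, then copy it out as a byte array.
    FotoPictureBox.Image.Save(ms, System.Drawing.Imaging.ImageFormat.Png)
    imageBytes = ms.ToArray()
End Using
' Pass imageBytes (rather than FotoPictureBox.Image) to the table adapter's Insert call.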
You may want to store a bit more information about the image. Getting it back to an image is just the reverse (DB -> byte array -> stream -> image), but you are not going to know whether it was JPG, PNG, TIFF, etc. If you think you will ever want to recreate it as a disk file in its original format, store the MIME type as well.

Issues getting all data from an image file using Lua io.read('*a')

I'm trying to get all the data from an image file (jpg/jpeg/gif/png/bmp, etc.) using Lua's io.read() function, but I'm not having much luck, as it only seems to read a small piece of the data.
As a side note all plain text files are being read just fine, so I'm assuming that the problem is with character encoding or some such thing.
Example:
local data
local fileHandle
fileHandle = io.open('pic.jpg')
data = fileHandle:read('*a')
print(data)
If you're on Windows, open the file as binary: io.open('pic.jpg', 'rb').
It is also a good idea to wrap io.open() in assert() to catch errors (or handle them in some other way, of course).
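A minimal version of the suggested fix (assuming the same pic.jpg as above) might be:
local fileHandle = assert(io.open('pic.jpg', 'rb'))
local data = fileHandle:read('*a')
fileHandle:close()
print(#data) -- number of bytes read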