Removing the first element of a data array before passing it to the Blob constructor breaks the output video

I am working on a retroactive screen-capture application in JavaScript using the mediaDevices, MediaRecorder, and Blob constructor APIs. I want to buffer screen output, discard stale data past a certain threshold, and then let the user capture and output the remaining data as video. To generate the files, I pass an array (chunks) of stream data to the Blob constructor. The problem is that I want to remove a certain number of leading elements from the array, but when I remove the first element, the output videos are broken.
For starters, the app can push data into the chunks array, make a blob, and then output a video successfully. If I call chunks.shift(), the array -> blob -> mp4 chain no longer produces a functional video. Below is the function I wanted to use to trim stale data during an ongoing "monitoring" period; when toggled from "monitoring" to "recording", all data would be retained. But calling shift in this way breaks things.
function handleDataAvailable(e) {
  if (monitoring) {
    // only do the shifting when we are monitoring, not recording
    if (chunks.length > stalenessThreshold) {
      console.log("we are trimming data");
      chunks.shift();
    }
  }
  chunks.push(e.data);
}
I have tried putting the shift call elsewhere, including making a single call rather than multiple calls, and the same issue results. In fact, if I add the first chunk back to the front of the array, it works again, as below:
let first = chunks.shift(); // remove and save the very first chunk
chunks.shift();             // drop another chunk from the front
chunks.pop();               // drop a chunk from the end
chunks.unshift(first);      // put the original first chunk back
So I can remove elements from the middle and end of the chunks array as long as I add that first element back in.
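In the meantime, the workaround I'm sketching is to always retain chunk 0 and trim from index 1 onward. This is only a sketch built on the observation above; it assumes (as the experiment suggests) that the first chunk carries something the container format needs and must never be dropped:

function handleDataAvailable(e) {
  chunks.push(e.data);
  // Trim while monitoring, but never remove chunks[0]:
  // the output only stays playable when the first chunk survives.
  if (monitoring && chunks.length > stalenessThreshold) {
    chunks.splice(1, 1); // drop the oldest chunk after the first one
  }
}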
I've also tried passing a slice of the chunks array to the Blob constructor, with the same result. I'm curious whether I could trim the resulting blob instead, but I'm not well versed in slicing or working with blobs directly, and it also seems less performant than just trimming the input array. The Blob API call looks like this (I've also tried with webm):
const blob = new Blob(chunks, { type: 'video/mp4' });
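From the docs, trimming the blob itself would look something like the line below (blob.slice(start, end, contentType) takes byte offsets; byteOffset here is a made-up value), though I suspect slicing at an arbitrary byte offset would cut through the container structure the same way dropping the first chunk does:

const trimmed = blob.slice(byteOffset, blob.size, 'video/mp4'); // returns a new Blob over that byte range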
I would like to understand this at a conceptual level; I've been reading docs and I can't make sense of it. Why is this first chunk essential? It seems like the chunks array should be an array of blobs or blob-like data that could be rearranged any old way? I would also like to find a solution for my application, so that I can monitor a set timeframe of recent screen activity. First Stack Overflow question, hopefully it meets guidelines. Thanks for reading; happy to provide more context and snippets.

Related

LabVIEW - How to clear an array after each iteration in a for loop

I'm trying to clear an array after each iteration of a for loop in LabVIEW, but the way I've implemented it, the values don't go directly to what I want; instead, they change along with previous values in other parts of the array.
It isn't shown, but this code is inside of a for-loop that iterates through another numeric array.
I know that if I get the array to clear properly after each loop iteration, this should work. How do I do that? I'm a beginner at LabVIEW but have been coding for a while - help is appreciated!
[screenshot: LabVIEW block diagram - "labview add to array"]
It looks as if you're not quite used to how LabVIEW passes data around yet. There's no need to use lots of value property nodes for the same control or indicator within one structure; if you want to use the same data in more than one place, just branch the wire. Perhaps you're thinking that a LabVIEW control or indicator is equivalent to a variable in text languages, and you need to use a property node to get or set it. Instead, think of the wire as the variable. If you want to pass the output of one operation to the input of another, just wire the output to the input.
The indicators with terminals inside your loop will be updated with new values every loop iteration, and the code inside the loop should execute faster than a human can read those values, so once the loop has finished all the outputs except the final values will be lost. Is that what you intended, or do you want to accumulate or store them in some way?
I can see that in each loop iteration you're reading two values from a config file, and the section is specified by the string value of one element of the numeric array Array. You're displaying the two values in the indicators PICKERING and SUBUNIT. If you can describe in words (or pseudocode, or a text language you're used to) what manipulation of data you're actually trying to do in the rest of this code, we may be able to make more specific suggestions.
First of all, I'm assuming that the desired order of operations is the following:
Putting the value of Pickering into Array 2
Extracting from Array 2 the values to put in Pickering 1 and Pickering 2
Putting Array 2 back to its original value
If this is the case, with your current code you can't be sure that operation 1 will be executed before operation 2. In fact, the order of these operations can't be pre-determined. You must force the dataflow, for example by creating a sequence structure: put the code related to operation 1 in the first frame, then the code related to operation 2 in the second.
Then, to put Array 2 back to its original value, I would add a third frame, where you force an empty array into the Value property node of Array 2 (the same node you use for PICKERING, but as an input rather than an output).
The sequence structure has to be inside the for loop.
I have never used the Reinit to Default property node, so I can't help you with that.
Unfortunately, I can't run LabVIEW on this PC, but I hope my explanation was clear enough; if not, tell me and I will try to be more specific.

How to load CanvasSvgDocument from file?

In a C++/winrt project I have a large number of small svg resources to be loaded from file. Since it would be slow to reload them all from disk at each CreateResources event from the CanvasVirtualControl, I have loaded them in advance and stored the data for each in an array. When CreateResources happens, my intent is to load a CanvasSvgDocument for each of these using the CanvasSvgDocument method LoadFromXml(System.string). However, if I create an svgDocument using the resourceCreator, I get an invalid argument crash when calling LoadFromXml(). The resourceCreator argument looks right (VS preview 6 now allows me to see local variables!) and the xml data string argument looks like valid svg data, so my best guess about the crash is that the data string is the wrong format. The file data is UTF-8. If I convert that to a std::wstring, as I must for the LoadFromXml argument, can it still be understood as byte data?
For example, I create the std::wstring this way, given a pointer to unsigned char file data and its length in bytes:
m_data_string = std::wstring(data, data + dataLength);
When CreateResources is triggered, that data string is referenced this way:
m_svg = CanvasSvgDocument(resourceCreator);
m_svg.LoadFromXml(resourceCreator, m_data_string);
But LoadFromXml crashes with that invalid parameter error. I see that the length of the data string is correct, but of course that is the number of characters, not the actual size of the data. Could there be a conflict between the UTF-8 attribute in the svg and the fact that it is now recorded as 16-bit characters? If so, how would one load an xml document from such data?
[Update] Following the suggestion that I use winrt::to_hstring, I read the unsigned char data into a std::string:
std::string cstring = std::string("");
cstring.assign(data, data + dataLength);
Then I convert that:
m_data_string = winrt::to_hstring(cstring);
And finally try to load an svg as before:
m_svg.LoadFromXml(resourceCreator, m_data_string);
And it crashes as before. I notice in the debugger that the converted string in neither case appeared to be gibberish; in both cases it read as the expected svg data. But if this hstring is wide chars, wouldn't that be a conflict with the attribute in the svg that identifies it as UTF-8?
[Update] I'm starting to wonder if anyone has ever used CanvasSvgDocument.Draw() to draw an svg loaded from a file. The files are now loading without crashing, without any change to their internal encoding reference. But they won't draw. These files - 239 of them - are UTF-8, svg 1.1, and they display nicely if opened in Edge or any browser. But if I load the file data into an hstring, create a CanvasSvgDocument, and then use CanvasSvgDocument.LoadFromXml to load them, they do not draw when called by CanvasSvgDocument's draw method. Other drawing of shapes, etc. works fine during the drawing session. Here is what could be a hint: if I call GetXml() on one of these svgs after it is loaded, what is returned is just this:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"></svg>
That is, the drawing information is not there. Or is this the full extent of what GetXml() is meant to return? That wouldn't seem useful. So perhaps CanvasSvgDocument.LoadFromXml(ResourceCreator, String) doesn't actually work yet?
So I'm back to asking again: is there a way to load a functional CanvasSvgDocument from file data?
My first answer here was wrong: the fault in my code above is that LoadFromXml() is a static method and, as someone pointed out to me elsewhere, I was discarding the returned result. It should be theSvg = CanvasSvgDocument::LoadFromXml(string).
Having corrected that, I'm back to the problem of loading UTF-8 data into a method whose argument is a wide-character string. Changing the internal reference to UTF-16 doesn't help after all. Loading the svg with CanvasSvgDocument::LoadAsync(filestream) works, but if I want to load these without re-accessing the disk, I will need to find a way to make a RandomAccessFileStream from a buffer of bytes and then use LoadAsync. I think. Unless there is some other way to make LoadFromXml() work - at present it fails with an invalid argument error.
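Something like this sketch is what I have in mind, assuming Win2D's static CanvasSvgDocument::LoadAsync(resourceCreator, stream) overload accepts an in-memory stream. LoadSvgFromBytes is my own illustrative name; data and dataLength are the raw UTF-8 file bytes already held in memory:

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Storage.Streams.h>
#include <winrt/Microsoft.Graphics.Canvas.h>
#include <winrt/Microsoft.Graphics.Canvas.Svg.h>

using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::Storage::Streams;
using namespace Microsoft::Graphics::Canvas;
using namespace Microsoft::Graphics::Canvas::Svg;

// Wrap bytes already read from disk in an in-memory stream, then let
// LoadAsync do the UTF-8 decoding instead of converting to wide chars.
IAsyncOperation<CanvasSvgDocument> LoadSvgFromBytes(
    ICanvasResourceCreator resourceCreator,
    uint8_t const* data, uint32_t dataLength)
{
    InMemoryRandomAccessStream stream;
    DataWriter writer{ stream };
    writer.WriteBytes({ data, data + dataLength }); // raw UTF-8 bytes, untouched
    co_await writer.StoreAsync();
    writer.DetachStream();
    stream.Seek(0); // rewind so LoadAsync reads from the beginning
    co_return co_await CanvasSvgDocument::LoadAsync(resourceCreator, stream);
}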

What structure should I follow to log exceptions in a text file in JSON format? Objective-C

I am trying to log exceptions in a text file in JSON format. The whole file is like a JSON object (an array of customeModle class instances).
It works fine the first time, but the next time I go to log into the file, I have to read it, add the new object into the array, delete the previous contents, and save it again - and obviously that is not a good way to log errors.
Problems
Suppose many errors get logged at a single point in time, and all of them are reading and appending the array and then writing it back to the log file; some errors are sure to be lost.
It consumes and wastes too much CPU and RAM.
Please suggest a way to append new objects to the existing file without overwriting it.
Many thanks for any help you may offer.
Per Apple Documentation, you can open a file (output stream) in append mode.
Given that you hold a reference to the file output stream outStream, you can use the method below to append data:
[NSJSONSerialization writeJSONObject:myNewObject toStream:outStream options:1 error:&error];
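Fleshed out, that might look like the following sketch. logPath is an assumed path variable, and myNewObject must be a top-level array or dictionary (NSJSONSerialization requires that); note each call appends a separate JSON document rather than extending one array:

#import <Foundation/Foundation.h>

NSOutputStream *outStream =
    [NSOutputStream outputStreamToFileAtPath:logPath append:YES];
[outStream open];

NSError *error = nil;
[NSJSONSerialization writeJSONObject:myNewObject
                            toStream:outStream
                             options:NSJSONWritingPrettyPrinted
                               error:&error];
[outStream close];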
However, I would personally use the option you are already using: read the data into a mutable object, modify it, and then use NSJSONSerialization to convert it back to data again. Finally, save that data to disk, replacing the original. This keeps the JSON structure intact.

JSON issue with line returns in SQL data

I have an issue when I try to parse my JSON. I create my JSON "by hand" in PHP, like this:
$outp = '{"records":['.$outp.']}';
I create it this way so I can take fields from my database and show them on the page. The thing is, in my database I have a field "description" where people can give a description of something. So some people enter line returns, like this for example:
Interphone
Equipe:
Canape-lit
Autre:
Local
And when I try to parse my JSON, there is an error because of these line returns: "SyntaxError: Unexpected token".
Here's an example of my JSON :
{"records":[{"Parking":"Aucun","Description":"Interphone
Equipé :
Canapé-lit
","Chauffage":"Fioul"}]}
Can someone help me please ?
You've really dug yourself into a very bad hole here.
The problem
The problem you're running into is that newline characters (line feed and carriage return) are not valid inside a JSON string. They must be escaped as \n and \r. You can see the full JSON standard here.
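For illustration, here is the record from your example as a serializer would emit it, with the line returns properly escaped (a sketch; the field values are taken from your question):

{"records":[{"Parking":"Aucun","Description":"Interphone\nEquipé :\nCanapé-lit\n","Chauffage":"Fioul"}]}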
You need to do two things.
Fix your code
In spite of the fact that the JSON standard is comparatively simple, you should not create your JSON by hand. You already know why. You have to handle several edge cases and the like. Your users could enter anything on the page, and you need to make sure that it gets properly encoded no matter what.
You need to use a JSON serialization tool. json_encode is built in as of PHP 5.2. If you can't use it for any reason, find an existing, widely used (and therefore heavily tested) third-party library with a JSON serializer.
If you're asking, "Why can't I create my own serializer?", you could, in theory. Realistically, there is no point. Yours won't be better than existing ones. It will be much more likely to have bugs and to perform worse than something a lot of people have used in production. It will also take much longer to create and test than using an existing one.
If you need this data in code after you pull it back out of the database, then you need a JSON deserializer. json_decode should also be fine, but again, if you can't use it, look for a widely used third party library.
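As a minimal sketch, assuming $rows holds the associative arrays you fetched from the database, the round trip looks like this:

// Serialize: json_encode escapes newlines, quotes, etc. for you.
$outp = json_encode(array("records" => $rows));

// Deserialize: true makes json_decode return associative arrays.
$decoded = json_decode($outp, true);
$records = $decoded["records"];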
Fix your data
If you haven't hit production yet, you have really dodged a bullet here, and you can skip this whole section. If you have gone to production and you have data from users, you've got a major problem.
Even after you fix your code, you still have bad data in your production database that won't parse correctly, and you have to do something to make it usable. Unfortunately, it is impossible to automatically recover the original data for every possible case, because users might have entered the very characters/substrings you added to the data to turn it into "JSON"; for example, they might have entered a comma-separated list of quoted words: "dog","cat","pig", and "cow". That is an intractable problem, since you know for a fact you didn't properly serialize all your incoming input; there's no way to tell the difference between text your code generated and text the user entered. You're going to have to settle for a best effort: try to throw errors when you can't figure it out in code, accept that it might mess up a user's data in some special cases, and fix some things manually.
Start by discussing this with your manager, team lead, or whoever you answer to. Assuming that you can't lose the data, this is the soundest process to follow for creating a fix for your data:
Create a database dump of your production data.
Import that dump into a development database.
Develop and test your method of repairing this data against the development database from the last step.
Ensure you have a recovery plan for deployments gone wrong. Test this plan in your testing environment.
Once you've gone through your typical release process, it's time to release the fixed code and the data update together.
Take the website offline.
Back up the database.
Update the website with the new code.
Implement your data fix.
Verify that it worked.
Bring the site online.
If your data fix doesn't work (possibly because you didn't think of an edge case or something), then you have a nice back up you can restore and you can cancel the release. Then go back to step 1.
As for how you can fix the data, I don't recommend queries here. I recommend a little script tool. It would have to load the data from the database, pull the string apart, try to identify all the pieces, build up an object from those pieces, and finally serialize them to JSON correctly, and put them back into the database.
Here's an example function of how you might go about pulling the string apart:
const ELEMENT_SEPARATOR = '","';
const PAIR_SEPARATOR = '":"';

function recover_object_from_malformed_json($malformed_json, $known_keys) {
    $tempData = substr($malformed_json, 14); // Removes {"records":[{" prefix
    $tempData = substr($tempData, 0, -4);    // Removes "}]} suffix
    $tempData = explode(ELEMENT_SEPARATOR, $tempData); // Split into what we think are pairs
    $data = array();
    $lastKey = NULL;
    foreach ($tempData as $i) {
        $explodedI = explode(PAIR_SEPARATOR, $i, 2); // Split what we think is a key/value into key and value
        if (in_array($explodedI[0], $known_keys)) { // Check if it's actually a key
            // It's a key
            $lastKey = $explodedI[0];
            if (array_key_exists($lastKey, $data)) {
                throw new RuntimeException('Duplicate key: ' . $lastKey);
            }
            // Assign the value to the key
            $data[$lastKey] = $explodedI[1];
        }
        else {
            // This isn't a key/value pair, near as we can tell,
            // so it must actually be part of the last value,
            // and the user actually entered the delimiter as part of the value.
            if (is_null($lastKey)) {
                // This one is REALLY messed up
                throw new RuntimeException('Does not begin with a known key');
            }
            $data[$lastKey] .= ELEMENT_SEPARATOR;
            $data[$lastKey] .= $i;
        }
    }
    return $data;
}
Note that I'm assuming that your "list" is a single element. This gets much harder and much messier if you have more than one. You'll also need to know ahead of time what keys you expect to have. The bottom line is that you have to undo whatever your code did to create the "JSON", and you have to do everything you can to try to not mess up a user's data.
You would use it something like this:
$knownKeys = ["Parking", "Description", "Chauffage"];
// Fetch your rows and loop over them
foreach ($dbRows as $row) {
    try {
        $dataFromDb = $row->myData; // or however you would pull out this string
        $recoveredData = recover_object_from_malformed_json($dataFromDb, $knownKeys);
        // Save it back to the DB
        $row->myData = json_encode($recoveredData);
        // Make sure to commit here.
    }
    catch (Exception $e) {
        // Log the row's ID, the content that couldn't be fixed, and the exception
        // Make sure to roll back here
    }
}
(Forgive me if the database stuff looks really wonky. I don't do PHP, so I have no idea how that code should look. Hopefully, you can at least get the concept.)
Why I don't recommend trying to parse your data as JSON to recover it.
The bottom line is that your data in the database is not JSON. If you try to parse it as such, all the other edge cases you didn't handle properly will get screwed up in the process. You'll see bad things like:
\\ becomes \
\j becomes j
\t becomes a tab character
In the end, it will just mess up your data even more.
Conclusion
This is a huge mess, and you should never try to convert something into a standard format without using a properly built, well tested serializer. Fixing the data is going to be hard, and it's going to take time. I also seriously doubt you have a lot of background in text processing techniques, and lacking that knowledge is going to make this harder. You can get some good info on text processing by studying how compilers are made. Good luck.

SBJSON append new data into existing JSON file without parsing it first

I am making an app that lets the user draw on the screen in different colors and brush sizes. I store the info about each drawn path in a JSON file once it has been drawn, to keep it out of memory. Right now I have it parse all existing paths, add the new one, and write it all back out again. I want it to simply append the new data to the JSON file without having to read it in and parse it first; that way, only one path is ever in memory at a time.
I am using SBJSON; the JSONWriter has a few append functions, but I think you need to have the JSON string to append to first, not the file, meaning I would have to read in the file anyway. Is there a way to do this without reading in the file at all? I know exactly how the data is structured.
It's possible, but you have to cheat a little. You can just create a stand-alone JSON document per path, and append that to the file. So you'll have something like this in your file:
{"name":"path1", "from": [0,3], "to":[3, 9]}
{"name":"path2", "from": [0,3], "to":[3, 9]}
{"name":"path3", "from": [0,3], "to":[3, 9]}
Note that this is not ONE JSON document but THREE. Handily, however, SBJsonStreamParser supports reading multiple JSON documents in one go. Set the supportMultipleDocuments property and plug it into an SBJsonStreamParserAdapter, and off you go. This also has the benefit that, if you have many, many paths in your file, you can start drawing before you've finished reading the whole file (because you get a callback for each path).
You can see some information on the use case here.
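A rough sketch of both halves of this approach, assuming SBJson 3.x (SBJsonWriter for writing; SBJsonStreamParser plus SBJsonStreamParserAdapter for reading); pathDict, logPath, and the delegate wiring are illustrative names, not from the question:

// Appending: serialize ONE path as a stand-alone JSON document
// and add it to the end of the file, no parsing required.
SBJsonWriter *writer = [[SBJsonWriter alloc] init];
NSString *json = [writer stringWithObject:pathDict];
NSFileHandle *fh = [NSFileHandle fileHandleForWritingAtPath:logPath];
[fh seekToEndOfFile];
[fh writeData:[[json stringByAppendingString:@"\n"]
                  dataUsingEncoding:NSUTF8StringEncoding]];
[fh closeFile];

// Reading back: parse the multi-document file in one go.
SBJsonStreamParserAdapter *adapter = [[SBJsonStreamParserAdapter alloc] init];
adapter.delegate = self; // gets a callback per document (i.e. per path)
SBJsonStreamParser *parser = [[SBJsonStreamParser alloc] init];
parser.delegate = adapter;
parser.supportMultipleDocuments = YES;
[parser parse:[NSData dataWithContentsOfFile:logPath]];

// Elsewhere in the delegate class:
// - (void)parser:(SBJsonStreamParser *)parser foundObject:(NSDictionary *)dict {
//     // one drawn path per callback
// }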
I'm pretty sure it's not possible. What I ended up doing was reading in the JSON file as a string; then, instead of wasting memory converting all of that into dictionaries and arrays, I just looked for an instance of part of the string where I wanted to insert data (e.g., I wanted to insert something before the string "], "texts"" showed up), inserted it there, and wrote it back out to file.
As far as I can tell, this is the best solution.