If I want to serialize an array in Godot, I can do this:
var a1 = [1, 2, 3]
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_var(a1, true)
file.close()
# load
file.open("a.sav", File.READ)
var a2 = file.get_var(true)
file.close()
print(a1)
print(a2)
output (it works as expected):
[1, 2, 3]
[1, 2, 3]
But if I want to serialize an object, like this class in A.gd:
class_name A
var v = 0
Same test, with an instance of A:
# instance
var a1 = A.new()
a1.v = 10
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_var(a1, true)
file.close()
# load
file.open("a.sav", File.READ)
var a2 = file.get_var(true)
file.close()
print(a1.v)
print(a2.v)
output:
10
error (on line print(a2.v)):
Invalid get index 'v' (on base: 'previously freed instance').
From the online docs:
void store_var(value: Variant, full_objects: bool = false)
Stores any Variant value in the file. If full_objects is true, encoding objects is allowed (and can potentially include code).
Variant get_var(allow_objects: bool = false) const
Returns the next Variant value from the file. If allow_objects is true, decoding objects is allowed.
Warning: Deserialized objects can contain code which gets executed. Do not use this option if the serialized object comes from untrusted sources to avoid potential security threats such as remote code execution.
Isn't it supposed to work with full_objects=true? Otherwise, what's the purpose of this parameter?
My classes contain many arrays of arrays and other stuff. I assume Godot handles this kind of basic serialization functionality (of course, devs often have to save complex data at some point), so maybe I'm just not doing what I'm supposed to do.
Any idea?
For full_objects to work, your custom type must extend Object (if you don't specify what your class extends, it extends Reference). The serialization is then based on exported variables (or whatever you declare in _get_property_list). By the way, this can serialize the whole script of your custom type, and in your case it likely does; you can verify by looking at the saved file.
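For illustration, a sketch (untested) of how A.gd might be adjusted along those lines, assuming Godot 3.x:
# A.gd
extends Object # extend Object explicitly instead of the default Reference
class_name A
export var v = 0 # exported so it is included when the object is encoded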
Thus, full_objects is not the way to go to serialize a type that extends Resource. Resource serialization works with ResourceSaver and ResourceLoader instead, and also with load and preload. And yes, this is how you would store or load scenes and scripts (and textures, meshes, and so on…).
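For reference, a hedged sketch (untested) of that Resource-based route, assuming A.gd is rewritten to extend Resource and export v:
# save
var a1 = A.new()
a1.v = 10
ResourceSaver.save("user://a.tres", a1)
# load
var a2 = load("user://a.tres")
print(a2.v)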
I believe the simpler solution for your code is to use the functions str2var and var2str. These will save you a lot of headache:
# save
var file = File.new()
file.open("a.sav", File.WRITE)
file.store_pascal_string(var2str(a1))
file.close()
# load
file.open("a.sav", File.READ)
var a2 = str2var(file.get_pascal_string())
file.close()
print(a1.v)
print(a2.v)
That solution will work regardless of what it is you are storing.
Perhaps this is a solution (I haven't tested it):
# load
file.open("a.sav", File.READ)
var a2 = A.new()
a2 = file.get_var(true)
file.close()
print(a1.v)
print(a2.v)
I'm having problems with my code. I am currently trying to create a new directory and also store a text file within the folder I have created. I looked at a couple of examples, but they only focus on one specific thing, like how to create a file or a folder, never how to use both together. How can I achieve this? I keep hitting exceptions when I try different methods. Thanks!
val newFile : Int = 1
val fileString = "nameData"
//so we are creating variable to store the directory information
val folderDir = File("G:\\Random Projects\\JVM\\database\\Collection 1")
//we use that variable to create a File class which will create a folder called nameData
//this will also be stored in another variable called f
val f = File(folderDir, "nameData")
//this will create the actual folder based on the variable information
f.mkdir()
//creating file
try {
    val fo = FileWriter(fileString, true)
    fo.write(a)
    fo.close()
} catch (ex: Exception) {
    println("Something Went Wrong When Creating File!!")
}
The problem is that you probably don't have the whole folder structure created; that's why you usually use the mkdirs function (note the s at the end). You can then use the writeBytes function to write the content:
val fileString = "nameData"
val folderDir = File("myfolder")
val f = File(folderDir, "nameData")
f.parentFile.mkdirs()
f.writeBytes(fileString.toByteArray())
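If you want to stay closer to the original absolute-path layout, here is a hedged sketch (untested) that creates the whole folder chain and then writes a text file inside it (the file name names.txt and the content are placeholders):
import java.io.File

fun main() {
    val folderDir = File("G:\\Random Projects\\JVM\\database\\Collection 1\\nameData")
    folderDir.mkdirs()  // creates every missing folder in the chain
    File(folderDir, "names.txt").writeText("some content")  // writes a text file inside it
}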
I have a program which returns a stream of np.uint8 arrays. I would now like to broadcast these to a website hosted by that computer.
I planned to do this by adapting the code in this documentation, replacing the line camera.start_recording(output, format='mjpeg') with output.write(<numpy_array_but_jpeg>). The documentation for start_recording states that if a write() method exists, it will write the data in the requested format to that object.
I can find lots of material online that explains how to save an np.uint8 array as a JPEG, but in my case I want to write that data to a buffer in memory; I don't want to have to save the image to a file and then read that file back into the buffer.
Unfortunately, changing the output format of the np.uint8 earlier in the stream is not an option.
Thanks for any assistance. For simplicity, I have copied the important bits of the code below:
import io
import picamera
from threading import Condition

class StreamingOutput(object):
    def __init__(self):
        self.frame = None
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame: copy the existing buffer's content and notify all
            # clients that it's available
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
OpenCV has functions to do this:
retval, buf = cv.imencode(ext, img[, params])
lets you encode an array into a memory buffer.
This example shows a basic implementation of what I was talking about:
img_encode = cv.imencode('.png', img)[1]
# Converting the image into numpy array
data_encode = np.array(img_encode)
# Converting the array to bytes.
byte_encode = data_encode.tobytes()
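Tying this back to the StreamingOutput class above, a hedged sketch (untested) of how the encoded JPEG bytes could be handed to its write() method; frames and output are placeholder names for your own frame source and the StreamingOutput instance:
import cv2 as cv

def push_frames(frames, output):
    # Encode each np.uint8 frame (e.g. an HxWx3 BGR array) as JPEG in memory
    # and pass the raw bytes to StreamingOutput.write(); JPEG data starts with
    # b'\xff\xd8', which is exactly what write() checks for.
    for frame in frames:
        ok, jpeg = cv.imencode('.jpg', frame)
        if ok:
            output.write(jpeg.tobytes())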
I am using Microsoft.Bond to serialize a class object which works perfectly fine. However, when I try to serialize a simple System.String object, the CompactBinaryWriter writes almost nothing to the output buffer. I am using this code:
string v = "test data";
var outputBuffer = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(outputBuffer);
Serialize.To(writer, v);
var output = outputBuffer.Data;
output in this case is a one-element array: {0}, irrespective of the value of v. Can someone point out why this doesn't work?
Bond requires a top-level Bond struct to perform serialization/deserialization.
If only one value needs to be passed/returned, the type bond.Box<T> can be used to quickly wrap a value in a Bond struct. (There's nothing special about bond.Box<T>, except that it ships with Bond.)
Try this:
Serialize.To(writer, Bond.Box.Create(v));
You'll need to deserialize into a bond.Box<string>.
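For the read side, a hedged sketch (untested) of deserializing it back, assuming the generated Box<T> exposes its payload as a value field:
var input = new InputBuffer(outputBuffer.Data);
var reader = new CompactBinaryReader<InputBuffer>(input);
var box = Deserialize<Bond.Box<string>>.From(reader);
var roundTripped = box.value; // assumption: the boxed payload is named "value"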
There's an open issue about having better behavior in cases like this.
I'm creating an image upload API that takes files with POST requests. Here's the code:
def upload = Action(parse.temporaryFile) { request =>
  val file = request.body.file
  Ok(file.getName + " is uploaded!")
}
file.getName returns something like: requestBody4386210151720036351asTemporaryFile
The question is how I can get the original filename instead of this temporary name. I checked the headers; there is nothing in them. I guess I could ask the client to pass the filename in a header, but should the original filename be included somewhere in the request?
All the parse.temporaryFile body parser does is store the raw bytes from the body as a local temporary file on the server. This has no semantics in terms of "file upload" as it's normally understood. For that, you need to either ensure that all the other info is sent as query params, or (more typically) handle a multipart/form-data request, which is the standard way browsers send files (along with other form data).
For this, you can use the parse.multipartFormData body parser like so, assuming the form was submitted with a file field with name "image":
def upload = Action(parse.multipartFormData) { request =>
  request.body.file("image").map { file =>
    Ok(s"File uploaded: ${file.filename}")
  }.getOrElse {
    BadRequest("File is missing")
  }
}
Relevant documentation.
It is not sent by default; you will need to send it specifically from the browser. For example, for an input tag, the files property will contain an array of the selected files, with files[0].name containing the name of the first (or only) file. (There are possibly other properties besides name, but they may differ per browser and I haven't played with them.) Use a change event to store the filename somewhere so that your controller can retrieve it. For example, I have some jQuery CoffeeScript like:
$("#imageFile").change ->
fileName=$("#imageFile").val()
$("#imageName").val(fileName)
The value property also contains a version of the file name, but with the path included (which is supposed to be something like "C:\fakepath" for security reasons, unless the site is a "trusted" site, AFAIK).
(More info and examples abound, W3 Schools, SO: Get Filename with JQuery, SO: Resolve path name and SO: Pass filename for example.)
As an example, this will print the original filename to the console and return it in the view.
def upload = Action(parse.multipartFormData(handleFilePartAsFile)) { implicit request =>
  val fileOption = request.body.file("filename").map {
    case FilePart(key, filename, contentType, file) =>
      print(filename)
      filename
  }
  Ok(s"filename = ${fileOption}")
}

/**
 * Type of multipart file handler to be used by the body parser
 */
type FilePartHandler[A] = FileInfo => Accumulator[ByteString, FilePart[A]]

/**
 * A FilePartHandler which returns a File, rather than Play's TemporaryFile class.
 */
private def handleFilePartAsFile: FilePartHandler[File] = {
  case FileInfo(partName, filename, contentType) =>
    val attr = PosixFilePermissions.asFileAttribute(util.EnumSet.of(OWNER_READ, OWNER_WRITE))
    val path: Path = Files.createTempFile("multipartBody", "tempFile", attr)
    val file = path.toFile
    val fileSink: Sink[ByteString, Future[IOResult]] = FileIO.toPath(file.toPath())
    val accumulator: Accumulator[ByteString, IOResult] = Accumulator(fileSink)
    accumulator.map {
      case IOResult(count, status) =>
        FilePart(partName, filename, contentType, file)
    }(play.api.libs.concurrent.Execution.defaultContext)
}
I am using the Google diff-match-patch Java library to create a patch between two JSON strings and store the patch in a database.
diff_match_patch dmp = new diff_match_patch();
LinkedList<Patch> diffs = dmp.patch_make(latestString, originalString);
String patch = dmp.patch_toText(diffs); // Store patch to DB
Now is there any way to use this patch to re-create the originalString by passing the latestString?
I googled this and found a very old comment on the Google diff-match-patch Wiki saying:
Unpatching can be done by just looping through the diff, swapping
DIFF_INSERT with DIFF_DELETE, then applying the patch.
But I did not find any useful code that demonstrates this. How could I achieve this with my existing code? Any pointers or code references would be appreciated.
Edit:
The problem I am facing is that in the front end I show a revisions module listing all the transactions of a particular fragment (for example, an employee's details), such as which user updated which details, etc. I recreate the fragment JSON by reverse-applying each patch to get the data for each transaction and show it as a table (using http://marianoguerra.github.io/json.human.js/). But some of the recreated JSON is not valid JSON and I am getting a JSON.parse error.
I was looking to do something similar (in C#) and what is working for me with a relatively simple object is the patch_apply method. This use case seems somewhat missing from the documentation, so I'm answering here. Code is C# but the API is cross language:
static void Main(string[] args)
{
    var dmp = new diff_match_patch();
    string v1 = "My Json Object";
    string v2 = "My Mutated Json Object";
    var v2ToV1Patch = dmp.patch_make(v2, v1);
    var v2ToV1PatchText = dmp.patch_toText(v2ToV1Patch); // Persist text to db
    string v3 = "Latest version of JSON object";
    var v3ToV2Patch = dmp.patch_make(v3, v2);
    var v3ToV2PatchTxt = dmp.patch_toText(v3ToV2Patch); // Persist text to db
    // Time to re-hydrate the objects
    var altV3ToV2Patch = dmp.patch_fromText(v3ToV2PatchTxt);
    var altV2 = dmp.patch_apply(altV3ToV2Patch, v3)[0].ToString(); // patch_apply returns an array; element 0 is the patched text
    var altV2ToV1Patch = dmp.patch_fromText(v2ToV1PatchText);
    var altV1 = dmp.patch_apply(altV2ToV1Patch, altV2)[0].ToString();
}
I am attempting to retrofit this as an audit log, where previously the entire JSON object was saved. As the audited objects have become more complex the storage requirements have increased dramatically. I haven't yet applied this to the complex large objects, but it is possible to check if the patch was successful by checking the second object in the array returned by the patch_apply method. This is an array of boolean values, all of which should be true if the patch worked correctly. You could write some code to check this, which would help check if the object can be successfully re-hydrated from the JSON rather than just getting a parsing error. My prototype C# method looks like this:
private static bool ValidatePatch(object[] patchResult, out string patchedString)
{
    patchedString = patchResult[0] as string;
    var successArray = patchResult[1] as bool[];
    foreach (var b in successArray)
    {
        if (!b)
            return false;
    }
    return true;
}
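Since the original question is in Java, here is a hedged sketch (untested) of the same re-hydration step with the Java API; patch_apply returns an Object[] whose first element is the patched text and whose second is a boolean[] of per-patch success flags:
diff_match_patch dmp = new diff_match_patch();
// patchText is the patch string previously stored in the DB
LinkedList<Patch> patches = new LinkedList<>(dmp.patch_fromText(patchText));
Object[] result = dmp.patch_apply(patches, latestString);
String originalString = (String) result[0]; // the re-created original
boolean[] applied = (boolean[]) result[1];  // every entry should be true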