I am making an app that lets the user draw on the screen in different colors and brush sizes. Once a path has been drawn, I store its data in a JSON file to keep it out of memory. Right now the app parses all existing paths, adds the new one, and writes everything back out again. I want it to simply append the new data to the JSON file without having to read and parse the file first, so that only one path is ever in memory at a time.
I am using SBJSON. The JSONWriter has a few append functions, but I think you need the JSON string to append to first, not the file, which means I would have to read in the file anyway. Is there a way to do this without reading in the file at all? I know exactly how the data is structured.
It's possible, but you have to cheat a little. You can create a stand-alone JSON document per path and append each one to the file. You'll end up with something like this in your file:
{"name":"path1", "from": [0,3], "to":[3, 9]}
{"name":"path2", "from": [0,3], "to":[3, 9]}
{"name":"path3", "from": [0,3], "to":[3, 9]}
Note that this is not ONE JSON document but THREE. Handily, however, SBJsonStreamParser supports reading multiple JSON documents in one go. Set the supportMultipleDocuments property, plug it into an SBJsonStreamParserAdapter, and off you go. This also has the benefit that, if you have many, many paths in your file, you can start drawing before you've finished reading the whole file, because you get a callback for each path.
You can see some information on the use case here.
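The same idea is easy to sketch in Python (a minimal sketch, not the SBJSON API; the function and file names here are made up for illustration): each path is written as one stand-alone JSON document per line, so appending never requires reading the file first.

```python
import json

def append_path(record, filename="paths.json"):
    # Append one stand-alone JSON document per line; the existing
    # contents are never read or parsed.
    with open(filename, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_paths(filename="paths.json"):
    # Reading back mirrors supportMultipleDocuments: each line is
    # parsed as its own JSON document.
    with open(filename) as f:
        return [json.loads(line) for line in f if line.strip()]
```

On iOS the equivalent would be opening a file handle in append mode and writing each serialized path followed by a newline.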
I'm pretty sure it's not possible. What I ended up doing was reading the JSON file in as a string; then, instead of wasting memory converting it all into dictionaries and arrays, I searched for a marker substring (for example, I wanted to insert something just before the point where "], "texts"" appeared), inserted the new data there, and wrote the string back out to the file.
As far as I can tell this is the best solution.
I am using Serilog in an ASP.NET Core application, with a JSON formatter creating a daily log file (rolling interval set to "Day").
When I look at my file, each event is itself valid JSON, but the file as a whole is not, which makes viewing it in something like Code Beautify impractical.
Is there a way to tell Serilog to add a comma between the events so that the file will be valid JSON?
I think you are incorrect in saying that your log file would be valid JSON if commas separated each line. A valid JSON document either starts with { and describes an object, or starts with [ and describes an array. Either way, you would have to close the document, and only then would it be valid JSON. Now to the million-dollar question: how would you know when to close the document, even if you wrote your own text formatter?
I think you should treat each log event as a valid JSON object and use a tool that supports that format.
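If you still need to feed the log to a viewer that insists on one valid JSON document, a small conversion step works. Here is a hedged sketch in Python (the file names are hypothetical; this is not a Serilog feature): it wraps the newline-delimited events into a single JSON array.

```python
import json

def ndjson_to_array(in_path, out_path):
    # Each line of the log is one JSON event; wrap them all in a
    # single JSON array so generic viewers can open the result.
    with open(in_path) as f:
        events = [json.loads(line) for line in f if line.strip()]
    with open(out_path, "w") as f:
        json.dump(events, f, indent=2)
```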
I am trying to log exceptions to a text file in JSON format. The whole file is one JSON object (an array of a custom model class).
It works fine the first time, but on every subsequent write I have to read the file, add the new object to the array, delete the previous contents, and save it again, which is obviously not a good way to log errors.
Problems
Suppose many errors are being logged at a single point in time, and each writer is reading the file, appending to the array, and writing it back; many errors are sure to be lost.
It consumes and wastes too much CPU and RAM.
Please suggest a way to append new objects to the existing file without rewriting the whole thing.
Many thanks for any help you can offer.
Per Apple's documentation, you can open a file (output stream) in append mode.
Given that you hold a reference to the file output stream outStream, you can use the method below to append data:
[NSJSONSerialization writeJSONObject:myNewObject toStream:outStream options:1 error:&error]
However, I would personally use the approach you are already taking: read the data into a mutable object, modify it, then use NSJSONSerialization to convert it back to data again. Finally, save that data to disk, replacing the original. This keeps the JSON structure intact.
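In outline, that read-modify-write cycle looks like this (a Python sketch of the same idea, with a hypothetical file name, not the Objective-C code itself):

```python
import json

def append_error(record, filename="errors.json"):
    # Read the existing array, append the new record, write it back.
    # Slower than raw appending, but the file stays a single valid
    # JSON document at all times.
    try:
        with open(filename) as f:
            errors = json.load(f)
    except FileNotFoundError:
        errors = []
    errors.append(record)
    with open(filename, "w") as f:
        json.dump(errors, f)
```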
I have an NSData that was created using NSKeyedArchiver. Is there a way to iterate over all the values inside it? It must somehow be possible to get all the keys that were stored in it, since +[NSKeyedUnarchiver unarchiveObjectWithData:] does exactly that.
Thanks
An NSKeyedArchiver file "simply" is a property list. You would need to figure out the structure of that plist, though.
I found the source code of Cocotron very helpful when I tried to decode some NSKeyedArchiver data: http://code.google.com/p/cocotron/source/browse/Foundation/NSKeyedArchiving/NSKeyedUnarchiver.m (Look at line 39 (initForReadingWithData:), which is called by unarchiveObjectWithData: (line #164)).
Maybe you can find out more about the archived objects that way.
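Since a keyed archive is a property list, you can peek at its raw structure with any plist reader. Here is a sketch in Python using plistlib (the $-prefixed keys are the ones commonly seen in NSKeyedArchiver output, and inspect_keyed_archive is a made-up helper name):

```python
import plistlib

def inspect_keyed_archive(data):
    # Decode the raw plist. NSKeyedArchiver output typically has the
    # top-level keys $archiver, $version, $top and $objects, where
    # $objects is the flat table that the keyed references point into.
    archive = plistlib.loads(data)
    return sorted(archive.keys())
```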
I'm a noob with pygame right now, and I was wondering how to load a text file and strip it into a single line. I believe I would need to use .rstrip('\n') on the variable holding the opened text file. But then, how do I turn this into a list? If I intentionally used double colons (::) to separate the relevant pieces of information in the text file, how do I make a list in which each index holds the contents between two sets of ::? The purpose is to create save files for a menu GUI when the program closes, so is there a simpler way to save and reload the contents of variables from one run of the program to the next?
>>> "foo::bar::baz".split("::")
['foo', 'bar', 'baz']
If you just want to save structured data, however, you might want to look at either the pickle or json libraries. Both of them give ways to dump Python objects to files and then load them back out again.
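For the save-file use case, json is usually the simpler route; a minimal sketch (the file name and state contents are arbitrary examples):

```python
import json

def save_state(state, filename="save.json"):
    # Dump the game state (dicts, lists, strings, numbers) to disk
    # when the menu GUI closes.
    with open(filename, "w") as f:
        json.dump(state, f)

def load_state(filename="save.json"):
    # Restore the same state on the next run of the program.
    with open(filename) as f:
        return json.load(f)
```

This avoids inventing a custom ::-delimited format and handles nested data for free.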
I started working on a new home project: I need to index specific file names together with their paths.
The program will index files on my local hard disk without dealing with the contents of the files (so I am assuming/hoping it will be a simple implementation).
First, the user enters a list of file extensions to be indexed (during setup).
Then the program runs and builds the data structure holding the path for each matching file.
Retrieving data from my data structure would look like this:
path of the file on my HDD=function(filename entered by user)
I thought about it for quite a while and wrote a design for the data structure. Here is my suggestion (design illustration):
I'll use an array with a hash function mapping each extension to a cell (each cell represents the first letter of the file extension). Inside each cell there is a list of the extensions starting with that letter. Each node in the list holds a red-black tree for searching by filename; once the filename is found, the program retrieves the file's path stored in the tree node.
Oh, by the way, I usually program in C (low level) or C++.
I think you are making this way too elaborate and complicated. If locating a MyFileTree based on extension is what you want, then just use a SortedDictionary&lt;string, MyFileTree&gt;, where string is your extension, and you'll get an O(log n) retrieval mechanism out of the box.
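The same point in a Python sketch (all names here are hypothetical): a built-in dictionary already gives you extension-plus-filename lookup, replacing the hand-rolled hash array, linked lists, and red-black trees entirely.

```python
def build_index(paths, extensions):
    # Map extension -> {filename: full path}. The language's built-in
    # dictionary does the hashing and collision handling for you.
    index = {ext: {} for ext in extensions}
    for path in paths:
        name = path.rsplit("/", 1)[-1]
        if "." in name:
            ext = name.rsplit(".", 1)[-1]
            if ext in index:
                index[ext][name] = path
    return index

def lookup(index, filename):
    # path of the file = function(filename entered by user)
    ext = filename.rsplit(".", 1)[-1]
    return index.get(ext, {}).get(filename)
```

In C++ the analogous shortcut would be a std::map or std::unordered_map instead of the custom structure.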