Realm Objective-C - really huge DB file size (64 GB) - objective-c

We recently planned to switch from SQLite to Realm in our macOS and iOS apps due to a database file corruption issue with SQLite, so we started with the macOS app first. All code changes went smoothly and the app started working fine.
Background about the app and DB usage: the app uses the DB very heavily, performing many reads and writes per minute and saving large XMLs to it. Each minute it writes/updates around 10-12 records (at most) with XML and reads 25-30 records as well. After each read it deletes the data along with the XML from the database, and my expectation was that once data is deleted it should free up space and reduce the file size, but the file appears to grow continuously.
To test the new DB changes we kept the app running for 3-4 days; the DB file size grew to 64.42 GB and the app started slowing down. Please refer to the attached screenshot.
To debug further, I started the app with a fresh DB file. Its size was 4 KB, but within 5 minutes it grew to 295 KB and never shrank, even though records were continuously added and deleted.
To clarify further, the app uses NSThreads to perform various operations, and those threads read and write data to the DB, with proper begin/commit transactions. I also read the 'Large File Size' FAQ at https://realm.io/docs/java/latest/#faq and tried to find compactRealm, but I can't find it in Objective-C.
Can anybody please advise?
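For reference, Realm does not shrink its file when records are deleted: the free space is kept for reuse, so the file reflects its high-water mark (plus any old versions pinned by long-running threads). The Cocoa SDK's compaction hooks are shouldCompactOnLaunch on RLMRealmConfiguration (in newer releases) and writeCopyToURL:encryptionKey:error:, which writes a compacted copy. A minimal sketch, assuming a recent Realm Cocoa version; the 100 MB / 50% thresholds are arbitrary examples:

    #import <Realm/Realm.h>

    RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];
    // Compact on launch when the file is over 100 MB and less than half
    // of it is live data (both thresholds are arbitrary examples).
    config.shouldCompactOnLaunch = ^BOOL(NSUInteger totalBytes, NSUInteger bytesUsed) {
        return totalBytes > 100 * 1024 * 1024 &&
               (double)bytesUsed / (double)totalBytes < 0.5;
    };
    NSError *error = nil;
    RLMRealm *realm = [RLMRealm realmWithConfiguration:config error:&error];

Note that compaction can only happen while no other thread has the Realm open, which is exactly why versions pinned by long-lived background threads defeat it.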
Update - I give up on Realm
After 15 days of effort, I have finally stopped using Realm and started fixing/working around the DB file corruption issue with SQLite. The huge Realm DB file issue was fixed by changing the threading code, but then I started getting a "Too many open files" error after running the app for 7-8 hours.
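For anyone hitting the same growth problem: the threading fix amounts to not keeping a long-lived Realm instance alive on a background thread, since each such instance pins the file version it was opened at. A rough sketch of the pattern, assuming a periodic NSThread worker (workerLoop, self.running, and the loop body are hypothetical):

    - (void)workerLoop {
        while (self.running) {
            @autoreleasepool {
                // Open a fresh, thread-local Realm on every pass instead of
                // caching one; releasing it at the end of the pool lets Realm
                // reclaim the old file versions this thread was pinning.
                RLMRealm *realm = [RLMRealm defaultRealm];
                [realm transactionWithBlock:^{
                    // ... write/update the per-minute records here ...
                }];
            }
            [NSThread sleepForTimeInterval:60];
        }
    }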
I debugged for a whole week and made all possible changes, and at some point everything looked good, as Xcode was not showing any open files. But then I started getting the "Too many open files" crash again, debugged with Instruments, and found a large number of open files for the Realm database, lock, commit, and cv files.
I am sure there are no leaks in the app, and Xcode also does not show those open files under Disk usage. I decided to invoke the lsof command in code before and after Realm calls; most of the time the open file count does not increase, but sometimes, in between, it does. In my app it went from 120 open files to 550 in around 6 hours. Xcode's Disk usage looks fine, but Instruments shows the open files.
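For reference, the lsof check described above can look roughly like this on macOS (NSTask invocation; filtering on the ".realm" substring is an assumption about which descriptors to count):

    #import <Foundation/Foundation.h>
    #include <unistd.h>

    NSTask *task = [[NSTask alloc] init];
    task.launchPath = @"/usr/sbin/lsof";
    task.arguments = @[ @"-p", [NSString stringWithFormat:@"%d", getpid()] ];
    NSPipe *pipe = [NSPipe pipe];
    task.standardOutput = pipe;
    [task launch];
    NSData *data = [[pipe fileHandleForReading] readDataToEndOfFile];
    [task waitUntilExit];
    NSString *output = [[NSString alloc] initWithData:data
                                             encoding:NSUTF8StringEncoding];
    // Count descriptors that point at Realm files (.realm, .realm.lock, ...).
    NSUInteger realmCount = 0;
    for (NSString *line in [output componentsSeparatedByString:@"\n"]) {
        if ([line containsString:@".realm"]) { realmCount++; }
    }
    NSLog(@"Open Realm-related descriptors: %lu", (unsigned long)realmCount);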
No good support from the Realm team: I sent them an email and got just one response. I made many changes to the code following their suggestions, and it doesn't work at all, so I gave up on it. I think Realm is good for small apps only.

Related

Should I use AsyncStorage for large amounts of data?

We want to implement an offline mode for our react-native application. We will be working with a fairly large amount of data (approx. 40-50 MB), an array of approx. 16,000 objects.
As far as I know, there are two ways to save this data.
Using AsyncStorage - Android has a limit of 6 MB, but I've read somewhere that it can be increased.
Using a JSON file - downloading the data as a JSON file using react-native-background-downloader, then using react-native-fs to save it and load it when the user has no internet connection.
Personally I think the second option is better, even though it requires permission to access file storage.
Am I missing any other factors to consider? Are there any other options for offline access?
In the end I opted for the JSON file, because of the size limit on Android. On application load I take this data and load it into a variable in the MobX store, which then behaves like any other variable.
I was afraid that mobile phones would have problems sorting across the 16,000 objects in the array, but there have been no reports of this going wrong so far. (In production for 4-5 months now.)
So basically, when you hit "enable offline mode", I ask for the file storage permission and download the file using react-native-fs.
Then on the next startup of the application I just read the data from the JSON file.

A persistent simple data storage for Node.JS app implementation?

I'm planning to launch a simple Node.JS utility and push it to Heroku. It's a fire-and-forget solution that will probably sleep for about 90% of the time. Unfortunately, it seems that I need persistent data storage for my purposes (Heroku apps get rebooted daily, and storing everything in RAM is unrealistic), and I don't know which way to look:
Most SQL hostings are paid, free for a limited time only, or require constant refreshing (like freemysqlhosting).
Storing stuff in plain .txt format is seemingly hard to implement; besides, git always overwrites the contents of a tracked .txt file, and leaving it untracked disposes of it on Heroku and leads to an ENOENT "No such file" error. Yeah, I tried.
So, the question is: how do I implement a simple, built-in solution for storing data? Are there any typical solutions for this? It's going to be equivalent to just one SQL table.
As you can see, you can answer this on many levels: maybe suggest a free deploy-and-forget SQL hosting (it obviously has to support external connections), maybe tell me how to keep a file tracked in git without replacing all of its content on every commit, maybe suggest a module to install. I hope this is not too broad.

Does Windows Search (Win 2008 R2) / Indexing Service (Win 2003) have any impact on the Directory.GetFiles(searchPattern, SearchOption.AllDirectories) method?

We are having a strange issue with the Directory.GetFiles method searching for a Word document on a UNC folder share (NTFS disk) on a Win 2008 R2 VM server. The share contains over 10K files in the parent folder and 75K files in a subdirectory.
It all worked fine on Win 2003 Server. After migrating to Win 2008 R2 Server, the WinForms application freezes on this method, taking almost 13 minutes to open a single file from a client machine connected to the file share over a VPN with 1 Mbps of download bandwidth (not throughput).
After some search and research, we realized the Windows Search service was not turned on, so we started the service and indexed the share. We saw a performance improvement: the time taken to open a file using the GetFiles method came down from 13 minutes to 3 minutes.
But this is not consistent. During the day, when available bandwidth is much lower than 1 Mbps (say 0.5 Mbps), the time to open the document is again between 8 and 12 minutes.
At this point we are not sure which one is causing the problem.
Solutions that are not possible for us:
1) Creating multiple directories and organizing the files.
2) Increasing bandwidth.
3) Using a direct file path instead of Directory.GetFiles/EnumerateFiles.
Any help is highly appreciated. Thanks!
Oh yeah, good stuff. You will notice that even if the service is off, running it twice (within a short time of each other) will be much faster the second time. Actually, here is a good one for you: run it twice, letting the first one run for a minute. The second one will catch up to the first almost immediately, and then they will both be at the same spot for the rest of the run (if what I said makes sense).
Here is what is happening: GetFiles() and GetDirectories() do use the indexing service. Also, if your indexing service is off, that just means it will not gather data about the files automatically; when you access the files (Windows Explorer / GetFiles), it will index them then, so if you ask for the same thing within a set amount of time, it won't have to query the hard drive's table of contents again.
As for it running faster and slower while the indexing service is on: Windows knows it cannot keep track of every file on the computer, so after a set amount of time a file is considered stale, and when you next ask about it the indexing service has to make an IO call to refresh the index database.
This wiki talks about it, just a little. Not very thorough.

SQLite database image getting malformed at build time in Xcode

I'm working on an iOS 6 app and am using a SQLite database to store data. At startup the app does a SELECT on the database and displays the results on the first screen.
However, I've started to get a "Database disk image is malformed" error when trying to run the SELECT.
The strange thing is that I can use SQLite browser, http://sqlitebrowser.sourceforge.net/, to run the SELECT on the database in the project folder. But if I try to open the database after it has been copied to the simulator folder, /Users//Library/ApplicationSupport/iPhone simulator/6.0/Applications/..., then I get the "disk image is malformed" error.
The database is not being accessed by a background thread, nor am I using breakpoints to halt execution, as is suggested as a cause here: sqlite database disk image malformed on iPhone SDK.
No more than one query is executed on the database at once.
All hints, tips and possible solutions are appreciated.
I found out that this had something to do with bundling a rather large database file with the app (larger than 3 GB). I fixed the problem by downloading the data in-app over Wi-Fi instead, which made the database error go away.
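A minimal sketch of that in-app download approach (NSURLSession; the URL and file name are placeholders):

    // Download the seed database on first launch instead of bundling it,
    // so the 3 GB file never goes through Xcode's resource-copy step.
    NSURL *remoteURL = [NSURL URLWithString:@"https://example.com/seed.sqlite"]; // placeholder
    NSURLSessionDownloadTask *task = [[NSURLSession sharedSession]
        downloadTaskWithURL:remoteURL
          completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
              if (error != nil) { return; }
              NSFileManager *fm = [NSFileManager defaultManager];
              // Move the temporary download into Application Support.
              NSURL *support = [fm URLForDirectory:NSApplicationSupportDirectory
                                          inDomain:NSUserDomainMask
                                 appropriateForURL:nil
                                            create:YES
                                             error:NULL];
              NSURL *dest = [support URLByAppendingPathComponent:@"seed.sqlite"];
              [fm moveItemAtURL:location toURL:dest error:NULL];
          }];
    [task resume];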

sqlite unclosed read transaction

I'm using sqlite3 in an iOS application and I've encountered a very strange issue multiple times.
I'm using WAL, all of my writes happen on a managed thread that allows only one operation at a time, and my reads use a different database handle; everything normally works fine. The issue I'm seeing is that sometimes my read handle gets into a weird state where it won't read committed data. It's as if it has an uncommitted read transaction...
I can write to the database successfully, and I've exported the file to my computer, where I can see the newly written results just fine. However, my reads seem to access the database at an older point in time... it's like they're stuck. If I close the application and reopen it, the reads are fine and see the newly committed data, but I'm wondering how my application gets stuck in this state.
Any help would be appreciated. Thanks in advance.
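One way a WAL reader gets pinned to an old snapshot like this is a SELECT statement that was stepped but never reset or finalized: the statement keeps an implicit read transaction open on that handle. A minimal sketch of the failure mode and its fix, using the plain sqlite3 C API (the messages table is hypothetical):

    #import <sqlite3.h>

    static void readRows(sqlite3 *readDB) {
        sqlite3_stmt *stmt = NULL;
        if (sqlite3_prepare_v2(readDB, "SELECT id, body FROM messages;",
                               -1, &stmt, NULL) != SQLITE_OK) {
            return;
        }
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            // ... consume the row ...
        }
        // Without sqlite3_reset()/sqlite3_finalize() here, the statement keeps
        // an implicit read transaction open, and in WAL mode this handle will
        // keep seeing its old snapshot instead of newly committed writes.
        sqlite3_finalize(stmt);
    }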
I ran into the exact same issue, but on Debian Lenny (kernel 2.6.33) with SQLite3 3.7.7.1.
It turned out that some processes were hanging on to stale DB file descriptors after I deleted and recreated a database file.
I fixed it by making sure all processes that used the DB were killed before recreating it. Getting rid of the old processes released the stale file handles as well.