I'm using sqlite3 in an iOS application and I've encountered a very strange issue multiple times.
I'm using WAL. All of my writes happen on a managed thread that only allows one operation at a time, my reads use a different database handle, and everything normally works fine. The issue I'm seeing is that sometimes my read handle gets into a weird state where it won't read committed data. It's as if it's holding an uncommitted read transaction...
I can write successfully to the database, and I've exported my results to my computer, where I can see the newly written data just fine. However, my reads seem to access the database at an older point in time... it's like they're stuck. If I close the application and reopen it, they're fine and read the newly committed data, but I'm wondering how my application gets stuck in this state.
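Here's a minimal sketch of the setup, plus the only mechanism I can think of that matches the symptom: a read statement that never runs to completion and is never reset, which would hold an implicit read transaction open and pin the old WAL snapshot. (Swift over the sqlite3 C API; table and column names are made up.)

import SQLite3

var writeDB: OpaquePointer?   // all writes funnel through one serial queue
var readDB: OpaquePointer?    // separate handle used only for reads

sqlite3_open_v2("app.db", &writeDB,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, nil)
sqlite3_exec(writeDB, "PRAGMA journal_mode=WAL;", nil, nil, nil)
sqlite3_open_v2("app.db", &readDB,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, nil)

var stmt: OpaquePointer?
sqlite3_prepare_v2(readDB, "SELECT id FROM items;", -1, &stmt, nil)
if sqlite3_step(stmt) == SQLITE_ROW {
    print(sqlite3_column_int64(stmt, 0))
}
// The statement above has not run to completion. Until it is reset or
// finalized, readDB holds an implicit read transaction and keeps seeing
// the WAL snapshot from when it first stepped; commits made on writeDB
// stay invisible. An explicit BEGIN on readDB that is never ended would
// behave the same way.
sqlite3_reset(stmt)   // or sqlite3_finalize(stmt)

// Quick probe for the stuck state:
// sqlite3_get_autocommit(readDB) == 0 means a transaction is still open.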
ANY help would be appreciated. Thanks in advance.
I ran into the exact same issue but on Debian Lenny (kernel 2.6.33) w/ SQLite3 3.7.7.1.
It turned out that some processes were hanging on to stale DB file descriptors after I deleted and recreated a database file.
I fixed it by making sure all processes that used the DB were killed before recreating it.
Getting rid of the old processes released the stale file handles as well.
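You can spot this situation on Linux with lsof; files that have been deleted but are still held open are listed with a link count of 0:

lsof +L1

Any process that shows up there holding the old DB file is a candidate for the restart treatment described above.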
I'm running a process that updates flags in a SQL Server database table. Essentially, the graph reads a .csv file, then uses the variables in the update statement. The universal reader is completing but the DBOutputTable component is hanging and won't complete. The funny thing is that earlier in the graph there's another DBOutputTable component that does almost the exact same thing and finishes successfully. Does anyone know what the issue could be?
I've restarted the services and the server itself. This process typically completes without issue but it just started hanging a few days ago.
I would guess a missing or insufficient index on that table. That would only start to manifest after a while, once the data set grows larger.
Double-check that the previous DBOutputTable is updating the same fields.
We recently planned to switch from SQLite to Realm in our macOS and iOS apps due to a DB file corruption issue with SQLite, so we started with the macOS app first. All the coding changes went smoothly and the app started working fine.
Background about the app and its DB usage: the app uses the DB very heavily, performing a large number of reads and writes every minute and saving big XMLs to it. Each minute it writes/updates around 10-12 records (at most) with XML and reads 25-30 records too. After each read it deletes the data, along with the XML, from the database. My expectation was that once data is deleted it should free up space and reduce the file size, but the file looks like it is growing continuously.
To test the new DB changes we kept the app running for 3-4 days; the DB file size grew to 64.42 GB and the app started getting slow. Please refer to the attached screenshot.
To debug further, I started the app with a new DB file whose size was 4 KB, but within 5 minutes it grew to 295 KB and never shrank, even as records were continuously added and deleted.
To further clarify, the app uses NSThreads to perform various operations, and those threads write and read data to the DB, with proper begin/commit transactions. I also read the 'Large File Size' entry at https://realm.io/docs/java/latest/#faq and tried to find compactRealm, but I can't find it in Objective-C.
Can anybody please advise?
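For reference, compactRealm is the Realm Java API. On the Cocoa side the closest equivalents I'm aware of are the shouldCompactOnLaunch block on the Realm configuration (available in later Realm Cocoa releases) and writing a compacted copy with writeCopy(toFile:). A sketch in Swift, with purely illustrative thresholds:

import RealmSwift

// Compact on launch when the file exceeds 100 MB and is less than half used.
var config = Realm.Configuration()
config.shouldCompactOnLaunch = { totalBytes, usedBytes in
    let hundredMB = 100 * 1024 * 1024
    return totalBytes > hundredMB &&
           Double(usedBytes) / Double(totalBytes) < 0.5
}

do {
    let realm = try Realm(configuration: config)
    // Alternative: write a compacted copy while no other Realm instances
    // are open, then swap the files (compactedURL is a placeholder):
    // try realm.writeCopy(toFile: compactedURL)
    _ = realm
} catch {
    print("Failed to open Realm: \(error)")
}

Note that the compaction block only runs the first time a Realm is opened in a given process.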
Update - I give up on Realm
After 15 days of effort, I have finally stopped using Realm and am going back to fixing/working around the DB file corruption issue with SQLite. The huge Realm DB file issue was fixed by changing the threading code, but then I started getting a 'Too many open files' error after running the app for 7-8 hours.
I debugged for a whole week and made every change I could think of; at some point everything looked good, as Xcode was not showing any open files. But then the 'Too many open files' crash came back. I debugged with Instruments and found a large number of open files for the Realm database and its lock, commit, and cv files.
I am sure there are no leaks in the app, and Xcode does not show those open files under Disk usage either. I decided to invoke the lsof command in code before and after Realm calls; most of the time the open file count doesn't increase, but sometimes in between it does. In my app it went from 120 open files to 550 in around 6 hours. Everything looks fine in Xcode's Disk usage, but Instruments shows the open files.
No good support from the Realm team either: I sent them an email and got just one response. I made many changes to the code following their suggestions, but it didn't work at all, so I gave up. I think it's good for small apps only.
I have been working on building a new database. I began by building the structure within the database it is replacing, populating it as I created each set of tables. Once I had made additions I would drop what had been created, execute the code to build the structure again, and run a separate file to insert the data. I repeated this until the structure and content were complete, to ensure each stage was as I intended.
The insert file is approximately 30 MB with 500,000 lines of SQL (I appreciate this is not the best way to do this, but for various reasons I cannot use alternative options). The final insert completed and took approximately 30 minutes.
A new database was created for me; the structure executed successfully but the data would not insert. I received the first error message shown below. I have looked into this and it appears I need to use the sqlcmd utility to get around it, although I find that odd, as the insert worked in the other database, which is on the same server and has the same autogrow settings.
However, when I attempted to save the file after this error I received the second error message seen below. When I selected OK it took me to my file directory as it would if I selected Save As, I tried saving in a variety of places but received the same error.
I attempted to copy the code into Notepad to save my changes, but the code would not copy to the clipboard. I accepted that I would lose my changes and rebooted my system. If I reopen the file and attempt to save it, I receive the second error message again.
Does anyone have an explanation for this behaviour?
Hm. This looks more like an issue with SSMS and not the SQL Server DB/engine.
If you've been doing this a few times, possibly Management Studio ran out of RAM?
Have you tried breaking the INSERT into batches/smaller files?
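If you do take the script out of SSMS, the sqlcmd route mentioned above would look something like this (server and database names are placeholders):

sqlcmd -S <server> -d <database> -i insert.sql

sqlcmd processes the script batch by batch instead of loading the whole 30 MB file into an editor buffer, which sidesteps the out-of-memory behaviour in Management Studio.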
I run a service that creates many SQLite3 databases and later removes them again; they live for about a day, maybe. They all have the same schema and start empty.
I use this to create a new blank SQLITE3 database:
sqlite3 newDatabase.db < myschema.sql
The myschema.sql file contains three table schemas or so, nothing fancy, no data. When I execute the above command on my rather fast, dedicated Linux server, it takes up to 5 minutes to complete. There are processes running in the background, like a couple of PHP scripts using CPU time, but everything else is fast, like other commands or inserting data into the DB later on. It's just the creation that takes forever.
This is so weird; I have absolutely no idea what's wrong here. So my only resort right now is to create a blank.db once and just make a fresh copy of that, rather than importing from an SQL schema.
Any idea what I messed up? I initially thought the noatime settings on Linux were messing with it, but no, disabling them didn't change anything.
Willing to provide any configuration/data you need.
Edit:
This is what strace hangs at:
12:29:45.460852 fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=1073741824, len=1}) = 0
12:29:45.460965 fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=1073741826, len=510}) = 0
12:29:45.461079 lseek(4, 512, SEEK_SET) = 512
12:29:45.461550 read(4, "", 8) = 0
12:29:45.462639 fdatasync(4
One possibility is to use the strace command to see what happens:
strace -f -s 1000 -tt sqlite3 newDatabase.db < myschema.sql
If it hangs somewhere, you will see where.
Is your schema huge?
NOTE
If you suspect the disk I/O is too high, try the command iotop -oPa; you will see who's making the mess on your system.
Alright I think I figured out the issue.
The server was running a couple of PHP scripts in the background that seemed reasonably well-behaved CPU-wise; the load often spiked to 100%, but almost all other commands worked fine, except installing things through apt-get and creating new SQLite3 databases from a schema.
What probably caused the problem was heavy disk access (I/O operations). I re-installed APC, upgraded to a new version, and figured out it was disabled for CLI (which is the default); I enabled it since I have long-running scripts, and also added a few usleep(100) calls here and there.
I stopped every single PHP command and killed basically every program that wasn't required. I checked system usage through MySQL Workbench; it still seemed very high until I realized this is an average value. If you wait another 10 minutes it averages out, which in my case was close to 0% load. Perfect.
Then I restarted the scripts, and things seem to be staying under control now.
I tried the SQLITE3 command mentioned above and it worked instantly, as expected.
So, a simple cause: not just high CPU load, but heavy I/O (disk access).
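One more mitigation worth noting for anyone who hits this: every statement run outside a transaction gets its own journal sync (the fdatasync the strace above was stuck in), so wrapping the schema in a single transaction cuts the number of syncs, assuming myschema.sql doesn't already manage its own transactions:

(echo "BEGIN;"; cat myschema.sql; echo "COMMIT;") | sqlite3 newDatabase.db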
My application keeps crashing on the first or second run with the error "abc.sqlite is corrupted. SQLite error code:11, 'database disk image is malformed', NSSQLiteErrorDomain=11".
I am unable to track it down. Can anyone please help?
Thanks.
(Taken from one of the comments above)
The app was crashing because it was loading on a different thread: the app tried to retrieve data before the database was even installed.
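A minimal sketch of the fix that description implies, assuming a seed database shipped in the app bundle (file names are illustrative): install the database before any thread is allowed to read.

import Foundation

// Copy the seed database out of the bundle exactly once, before any reads.
func installDatabaseIfNeeded() throws -> URL {
    let fm = FileManager.default
    let docs = try fm.url(for: .documentDirectory, in: .userDomainMask,
                          appropriateFor: nil, create: true)
    let dest = docs.appendingPathComponent("abc.sqlite")
    if !fm.fileExists(atPath: dest.path),
       let seed = Bundle.main.url(forResource: "abc", withExtension: "sqlite") {
        try fm.copyItem(at: seed, to: dest)
    }
    return dest
}
// Only once this has returned should any connection to dest be opened.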
This is not the whole story. I ran into the same issue today on my iPhone; I hooked it up to my MacBook and used Xcode to bring the DB from the iPhone to the MacBook. I opened it in DB Browser for SQLite and ran PRAGMA integrity_check, which showed an error on one of the pages with code 11. Luckily my table only has 10 records. The weird thing was that when I ran "select * from tableA", only 3 records came back. I was able to fix the database accidentally by renumbering some record IDs; when I saved the changes, the missing records showed up mysteriously... while the corrupted record disappeared.
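For anyone who lands here: the same integrity check can be run from code at startup, so corruption gets caught before the app trips over it. A minimal sketch over the sqlite3 C API in Swift (the path is whatever your app uses):

import SQLite3

// Returns the result of PRAGMA integrity_check, or nil if the DB can't
// even be opened/queried. "ok" means the file is healthy.
func integrityCheck(atPath path: String) -> String? {
    var db: OpaquePointer?
    guard sqlite3_open_v2(path, &db, SQLITE_OPEN_READONLY, nil) == SQLITE_OK else {
        return nil
    }
    defer { sqlite3_close(db) }

    var stmt: OpaquePointer?
    guard sqlite3_prepare_v2(db, "PRAGMA integrity_check;", -1, &stmt, nil) == SQLITE_OK else {
        return nil
    }
    defer { sqlite3_finalize(stmt) }

    if sqlite3_step(stmt) == SQLITE_ROW, let text = sqlite3_column_text(stmt, 0) {
        return String(cString: text)
    }
    return nil
}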