I run a service that creates many SQLite3 databases and later removes them again; they live for about a day, maybe. They all have the same schema and start empty.
I use this to create a new blank SQLITE3 database:
sqlite3 newDatabase.db < myschema.sql
The myschema.sql file contains three table schemas or so, nothing fancy, no data. When I execute the above command on my rather fast, dedicated Linux server, it takes up to 5 minutes to complete. There are processes running in the background, like a couple of PHP scripts using CPU time, but everything else is fast, like other commands or inserting data into the DB later on. It's just the creation that takes forever.
This is so weird; I have absolutely no idea what's wrong here. So my only resort right now is to create a blank.db once and just make a fresh copy from that, rather than importing from a SQL schema.
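For reference, that template-copy workaround is just two shell commands (using the file names from above; blank.db is the pre-built template):

# build the empty template once
sqlite3 blank.db < myschema.sql
# then, for each new database, copy the template instead of re-importing the schema
cp blank.db newDatabase.db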
Any idea what I messed up? I initially thought the noatime setting on Linux was messing with it, but no, disabling it didn't change anything.
Willing to provide any configuration/data you need.
Edit:
This is what strace hangs at:
12:29:45.460852 fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=1073741824, len=1}) = 0
12:29:45.460965 fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=1073741826, len=510}) = 0
12:29:45.461079 lseek(4, 512, SEEK_SET) = 512
12:29:45.461550 read(4, "", 8) = 0
12:29:45.462639 fdatasync(4
One possibility is to use the strace command to see what happens:
strace -f -s 1000 -tt sqlite3 newDatabase.db < myschema.sql
If it hangs somewhere, you will see where.
Is your schema HUGE?
NOTE
If you suspect disk I/O is too high, try the command iotop -oPa; you will see who is making the mess on your system.
Alright I think I figured out the issue.
The server was running a couple of PHP scripts in the background that seemed to behave reasonably in terms of CPU load; it often spiked at 100%, but most other commands still worked fine, except installing things through apt-get and creating new SQLite3 databases from a schema.
What probably caused the problem was the heavy disk access (I/O operations). I re-installed APC and upgraded to a new version, figured out it was disabled for CLI (which is the default) but enabled it since I have long-running scripts, and also added a few usleep(100) calls here and there.
I stopped every single PHP command and basically killed every program that wasn't required. I checked system usage through MySQL Workbench; it still seemed very high, until I realized that this is an average value. If you wait another 10 minutes it averages out, which in my case was close to 0% load. Perfect.
Then I restarted the scripts, and it seems to be keeping things under control now.
I tried the SQLITE3 command mentioned above and it worked instantly, as expected.
So, the simple cause: not only high CPU load, but also heavy I/O (disk access).
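As an aside, here is an untested sketch of another way to cut the I/O during creation: by default sqlite3 treats each statement as its own transaction, so every CREATE TABLE gets its own sync to disk. Wrapping the whole import in one transaction reduces that to a single commit (this assumes myschema.sql doesn't already contain BEGIN/COMMIT of its own):

# wrap the schema import in a single transaction to avoid one fdatasync per statement
( echo 'BEGIN;'; cat myschema.sql; echo 'COMMIT;' ) | sqlite3 newDatabase.db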
Related
I am running SQLite 3.7.9 on a Ubuntu virtual machine (VirtualBox). I have a db called "frequency" on which I am trying to create a view:
CREATE VIEW myview AS SELECT * FROM frequency;
On running this I get the error "Error: database is locked". (The actual view I am trying to create is more complex, but even this simple one won't work.)
From what I have read online, this error usually occurs due to either 1) concurrent access, which I do not think is happening here as the db file is on the virtual machine, or 2) running a CREATE/DROP command while a SELECT command has not finished; I do not think I am doing this either, since my query is so basic.
I have also verified from ls -l that I have read-write permissions for both the file and the directory holding it.
Any help would be appreciated.
Try listing all the processes you have running and make sure you don't have two database processes running at once. Although my conditions were different (I was running a Windows VM), this fixed it for me.
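On Linux you can do that check against the database file itself; a quick sketch (assuming the file is named frequency.db, adjust the name/path to yours):

# show which processes currently have the database file open
lsof frequency.db
# or, if lsof isn't installed
fuser -v frequency.db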
So I am trying to keep my Node server on an embedded computer running while it is out in the field. This led me to leveraging inittab's respawn action. Here is the entry I added to inittab:
node:5:respawn:node /path/to/node/files &
I know for a fact that when I start up this node application from the command line, it does not reach the bottom of the main body and console.log "done" until a good 2-3 seconds after I issue the command.
So I feel like in that 2-3 second window the OS just keeps firing off respawns of the node app. In fact, I see in the error logs that the kernel ends up killing off a bunch of node processes because it's running out of memory and such, plus I do get the "'node' process respawning too fast, will suspend for 5 minutes" message too.
I tried wrapping this in a script; that didn't work. I know I can use crontab, but that's only every minute... Am I doing something wrong, or should I take a different approach altogether?
Any and all advice is welcome!
TIA
Surely too late for you, but in case someone else finds such a problem: try removing the & from the command invocation.
What happens is that when the command goes to the background (thanks to the &), the parent (init) sees that it exited and respawns it. The result: a storm of new instantiations of your command.
Worse, you mention embedded, so I guess you are using BusyBox, whose init won't rate-limit the respawning as other implementations would. So the respawning will only end when the system is out of memory.
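In other words, the entry from the question becomes (the same line, just without the trailing &):

node:5:respawn:node /path/to/node/files

That way init stays the parent of a process that is actually still running, and it only respawns it when the process really exits.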
inittab is overkill for this. I found out that what I need is a process monitor. I found one that is lightweight and effective, with some good reports of working great out in the field: http://en.wikipedia.org/wiki/Process_control_daemon
Using this would entail configuring this daemon to start and monitor your Node.js application for you.
That is a solution that works from the OS side.
Another way to do it is as follows. If you are trying to keep Node.js running like I was, there are several modules written specifically to keep other Node.js apps running. To mention a couple, there are forever and respawn. I chose to use respawn.
This method entails starting one app written in Node.js that uses the respawn module to start and monitor the actual Node.js app you were interested in keeping running anyway.
Of course the downside of this is that if the Node.js engine (V8) goes down altogether, then both your monitoring and monitored processes will go down with it :-(. But it's better than nothing!
PCD would be the ideal option. It would probably only go down if the OS goes down, and if the OS goes down then hopefully one has a watchdog in place to reboot the device/hardware.
Niko
I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
I've been over every single possible thing I could think of:
set_time_limit() (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point that the script is stopped is not consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test, open up the command line and run the code in there. The script should run indefinitely.
Have you considered just running the script from the command line, eg:
php script.php
and have the script flush out a message every so often to show that it's still running:
<?php
while (true) {
    doWork();                 // your actual long-running work
    echo "still alive...";    // heartbeat so you can see it is still going
    flush();
}
In such cases, I turn on all the development settings in php.ini (on a development server, of course). This displays many more messages, including deprecation warnings.
In my experience of debugging long-running PHP scripts, the most common cause was a memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...).
I think what you need to find out is the exact time at which it stops (you can record an initial time and keep dumping out the current time minus the initial one). There is something on the server side that is stopping the script. Also, consider doing an ini_get() to make sure the execution time is actually 0. If you want, set the time limit to 30 and then, on EVERY iteration of the loop, set it to 30 again; every time you call set_time_limit() the counter resets, and this might allow you to bypass the actual limits. If this still isn't working, there is something on 1and1's servers that might be killing the script.
Also, did you try ignore_user_abort()?
I appreciate everyone's comments. Especially James Hartig's, you were very helpful and sent me on the right path.
I still don't know what the problem was. I got it to run on the server over SSH, just by using the exec() command as well as ignore_user_abort(). But it would still time out.
So, I just had to break it into small pieces that will run for only about 2 minutes each, and use session variables/arrays to store where I left off.
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
I think this is caused by some process monitor killing off "zombie processes" in order to allow resources for other users.
Run the exec using "2>&1" to log everything, including stderr.
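For example, the wrapper line in script.sh from the output below can be redirected like this (the log file name is just a placeholder):

php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php >> myscript.log 2>&1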
In my output I managed to catch this:
...
script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace, which is excellent BTW, but I think most shared hosting providers impose this kind of resource/process control mechanism just to keep things sane.
I just executed installutil on a DLL in which custom performance counters are installed. I installed 2 categories, but then realized I had an issue with the first category, so I deleted that category; before deleting it I ran an ASP.NET app against it to make sure it was working.
The issue is that after deleting the category and then recreating it, the application is logging to the custom perfmon counter but the values never get updated.
The second custom category works fine and its counter is getting populated. I can see both categories within perfmon, but I noticed that the first category's counters never get updated when running an ASP.NET app against it.
Has anyone run into this issue? Do I need to delete the existing instance? I'm trying to avoid a reboot of the machine.
Depending on how you install the counters (assuming a transacted installation, let's say), perf counters can get "orphaned".
IMHO this is because perf counters seem to get installed in the registry and "elsewhere" <-- I'm still trying to find out where else perf counter info gets stored.
In some cases, the registry keys get built appropriately and so the counters register as expected, but the OS's "elsewhere" location is not properly built out. It's almost like there is a perf counter cache somewhere. (Comments, anyone?)
So, in summary: after installation, run lodctr /R from the command line with the appropriate permissions, and this "seems" to solve the issue for most installations. I would be interested to see what others say about this, as the generally available documentation (i.e. MS) SUCKS beyond belief on this topic...
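For reference, that repair step is just this, run from an elevated command prompt (the typeperf line is only a quick sanity check that the category is registered; "MyCategory" is a placeholder for your category name):

rem rebuild the performance counter registry settings from their backing store
lodctr /R
rem list installed counters and look for the custom category
typeperf -q | findstr /i "MyCategory"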
grrr.
I'd like to have a job that runs nightly, or even just once a week, that generates a script of our dev databases. They tend to be tinkered with, and developers have a habit of making changes without scripting them, or documenting them.
I'd like to create a job that will essentially mimic what happens when I right-click and do Tasks > Generate Scripts. It would mean that in the event of Something Bad Happening, we're able to rebuild the structure (the content is 'generatable'), and be back up and running without having to restore from backups that may have been lost at the same time as Something Bad Happening.
I've read about sqlpubwiz, but I couldn't find it on the dev machine, only on my local machine, where I've only got the client tools installed. Am I going down the right route?
I'd suggest a different approach that has worked well for me. Run a nightly job that drops the development databases, restores them from a known configuration, and then applies all the change scripts that have been committed to source control.
Advantages of this approach:
Your change scripts are tested every night
There are no unscripted database changes
Developers quickly learn to create change scripts and commit them to source control
When I've taken this approach I've used the latest production backup as the restore source. This introduces some uncertainty, because data changes in production can cause unexpected things to happen, but it works well if you need to respond rapidly to production issues.
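A minimal sketch of what that nightly job could look like as a batch file driven by sqlcmd (server, database, and paths are placeholders, and it assumes the change scripts sort into the order they should be applied):

rem overwrite the dev database with the known-good backup
sqlcmd -S DEVSERVER -Q "RESTORE DATABASE DevDb FROM DISK = N'D:\Backups\DevDb.bak' WITH REPLACE"
rem apply every change script committed to source control, in name order
for %%f in (C:\Source\ChangeScripts\*.sql) do sqlcmd -S DEVSERVER -d DevDb -b -i "%%f"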
Database Publishing Wizard, which can be run from the command line.
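If I remember the syntax correctly (please double-check against the tool's own help; the database name and output path here are placeholders), a scripted run looks something like:

sqlpubwiz script -d DevDb C:\Scripts\DevDb.sql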
ApexSQL Script - use the command-line version - simply check the generated scripts into version control or whatever.