rufus scheduler exception - ruby-on-rails-3

This is driving me a bit nuts. I have rufus-scheduler doing some scheduling to call a rules engine (ruleby), so most of my work runs inside the engine, which itself runs inside the scheduler. As a result, when I get an error the information is a bit limited.
Fast forward: I'm still working on my code, but now I get this exception:
'undefined method `+' for nil:NilClass'
It wasn't happening before, and I'm not sure exactly when it started, or whether it was something I changed in the code or some of the events that come in via HTTP push. I comment out the code I think is causing it and it stops happening; I put the code back in and it still doesn't happen; I leave it for a while and it starts happening again. If I run the engine manually outside the scheduler (just once instead of every x many minutes), it doesn't happen.
Put it back on the scheduler to run a few times and it starts happening again. I would google the above error, but Google doesn't love the + in the search. Does anyone have any ideas where to direct me for this? It's clearly something that happens while the rules engine is running, but it ran happily for weeks before I got back to trying to finish it off. My best guess is that, while the engine runs, events are passed into it one at a time and something is missing that wasn't before.
I really want to know what the + method it refers to is, could be, or is supposed to be.
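For what it's worth, wrapping the scheduled block in begin/rescue should at least surface the full backtrace, which shows exactly which + call received nil. A minimal sketch using rufus-scheduler's block API; run_rules_engine is a hypothetical stand-in for the ruleby call, and the interval is illustrative:

    scheduler = Rufus::Scheduler.start_new

    scheduler.every '10m' do
      begin
        run_rules_engine # hypothetical: whatever kicks off the ruleby engine
      rescue => e
        Rails.logger.error("#{e.class}: #{e.message}")
        Rails.logger.error(e.backtrace.join("\n"))
        raise
      end
    end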

Related

Lack of Log Entry for Unhandled Error in Server Side SuiteScript 2.x

I suppose this is more of a curiosity than an actual issue, but I thought I'd ask about it anyway. There are times when an uncaught error occurs in a server-side NetSuite script using SuiteScript 2.0/2.1 (2.x), but instead of a "SYSTEM" script log entry, there's nothing. It gives the appearance of a script just stopping for no reason. Now, I know this can easily be avoided by wrapping everything in a try-catch block, but that's not what I'm trying to discuss here.
Does anyone have any insight into why a script would just stop without any SYSTEM error logging? It's something I find interesting, given that with the 1.0 API uncaught errors would always get logged. And it's not guaranteed that an uncaught error won't be logged as a SYSTEM entry. It seems more common with map/reduce scripts, but unless my memory is not serving me correctly, I believe I have seen it happen with Suitelets and user event scripts, too.
Just thought that I'd pose the question here to see if there was anyone who might know a little something about it.
This is actually covered in the system help for Map/Reduce scripts. They do fail silently. I've not seen this in any other script type.

Intermittent errors with new code release to existing Mojolicious app

I'm having a problem with an existing Mojolicious app. I have added some new routes, views, controllers, and models, and am returning database results to the view using the Rose::DB::Object ORM.
I updated the production version today with code that had been working great under morbo. But under the apache2/Plack/PSGI mod_perl configuration, the new models only return query results about 1 time in 5, sometimes 1 in 10.
I've eliminated a number of variables; for example, I can query the database directly and get my results without a problem, and the older models and their queries always work.
It appears that only the new functionality is intermittent. I have narrowed requests to only one server and have restarted Apache, but I am now at the point where I don't understand why the issue persists.
I think this is some kind of wonky mod_perl behavior, but I don't know why an Apache restart doesn't fix it.
Any help or ideas would be awesome.
I did finally resolve this, and it turned out to be something simple: I was missing the use statement for my controller in my main application script, the one where I set up the routes. I'm not sure I understand why it worked intermittently in production and all the time in development, but once I added use TheApp::Controller::Tags; to the main app .pm, it worked consistently.
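A sketch of what the fixed main application module might look like; the package, controller, and route names are hypothetical:

    package TheApp;
    use Mojo::Base 'Mojolicious';

    use TheApp::Controller::Tags;  # the use statement that was missing

    sub startup {
        my $self = shift;
        my $r = $self->routes;
        $r->get('/tags')->to('tags#list');  # hypothetical new route
    }

    1;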
In retrospect it feels like I should have figured it out sooner, but the fact that it worked part of the time led me down the wrong path. Hopefully this will help someone else sometime.

inittab respawn of Node.js too fast

So I am trying to keep my Node server on an embedded computer running when it is out in the field. This led me to leveraging inittab's respawn action. Here is the entry I added to inittab:

    node:5:respawn:node /path/to/node/files &
I know for a fact that when I start this node application from the command line, it does not reach the bottom of the main body and console.log "done" until a good 2-3 seconds after I issue the command.
So I feel like in that 2-3 second window the OS just keeps firing off respawns of the node app. In fact, I see in the error logs that the kernel ends up killing off a bunch of node processes because it is running out of memory, and I also get the message that the 'node' process is respawning too fast and will be suspended for 5 minutes.
I tried wrapping this in a script; that didn't work. I know I can use crontab, but that only runs every minute. Am I doing something wrong, or should I take a different approach altogether?
Any and all advice is welcome!
TIA
Surely too late for you, but in case someone else runs into this problem: try removing the & from the command invocation.
What happens is that when the command goes to the background (thanks to the &), the parent (init) sees that it exited and respawns it. The result: a storm of new instantiations of your command.
Worse, you mention embedded, so I guess you are using BusyBox, whose init won't rate-limit the respawning as other implementations would. So the respawning will only end when the system is out of memory.
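For reference, the corrected entry would simply drop the trailing &:

    node:5:respawn:node /path/to/node/files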
inittab is overkill for this. I found out that what I need is a process monitor. I found one that is lightweight and effective, with some good reports of working well out in the field: http://en.wikipedia.org/wiki/Process_control_daemon
Using it would entail configuring the daemon to start and monitor your Node.js application for you.
That is a solution that works from the OS side.
Another way to do it is as follows. If you are trying to keep Node.js running like I was, there are several modules written to keep other Node.js apps running; to mention a couple, there are forever and respawn. I chose respawn.
This method entails starting one app written in Node.js that uses the respawn module to start and monitor the actual Node.js app you wanted to keep running in the first place; a sketch follows.
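A minimal sketch using the npm respawn module; the path and options are illustrative assumptions:

    // monitor.js - keeps the real app alive via the respawn module
    var respawn = require('respawn');

    var monitor = respawn(['node', '/path/to/node/files/app.js'], {
      maxRestarts: -1, // restart indefinitely
      sleep: 1000      // wait 1s between restarts to avoid a respawn storm
    });

    monitor.on('exit', function (code, signal) {
      console.log('app exited:', code, signal);
    });

    monitor.start();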
Of course, the downside of this is that if the Node.js engine (V8) goes down altogether, then both your monitoring and monitored processes go down with it :-(. But it's better than nothing!
PCD would be the ideal option. It would probably go down only if the OS goes down, and if the OS goes down then hopefully one has a watchdog in place to reboot the device/hardware.
Niko

SqlDependency: Query Notifications not being received properly

We are using SqlDependency for Query Notifications in our program. Our web server hosts the database from which the query notifications are sent, and the application runs on our office PC. It works fine and I receive instant notifications for any changes in the table. But for the last few days I sometimes get an error: the application does not receive any notifications even though there are changes. The problem occurs at random times. I also confirmed the internet connection with a continuous ping to the server, and it was fine. Reopening my application solves the problem temporarily, but I want to know what the reason for this problem might be and how I can troubleshoot it.
I don't think anyone can guess what is wrong with your deployment without any info. My advice is to read The Mysterious Notification, Troubleshooting Query Notifications, and Troubleshooting Dialogs. With a better understanding of how it works and what to look for, perhaps you can diagnose the issue.
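For reference, a minimal sketch of the usual SqlDependency pattern; the connection string, table, and column names are illustrative. One thing worth ruling out while troubleshooting: a query notification fires only once per subscription, so the dependency must be re-created inside the OnChange handler, and the query must meet the notification rules (explicit column list, two-part table name, no SELECT *):

    void Subscribe(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // e.Type, e.Info and e.Source say why the notification fired;
                // log them when debugging missed notifications
                Subscribe(connectionString); // re-subscribe: each subscription fires once
                // ... reload the data here ...
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader()) { /* consume the results */ }
        }
    }
    // once at application startup: SqlDependency.Start(connectionString);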

PHP script stops running arbitrarily with no errors

I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
I've been over every single possible thing I could think of:
set_time_limit (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point at which the script stops is not consistent: sometimes it stops at 15 minutes, sometimes at 22.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test, open up the command line and run the code there. The script should run indefinitely.
Have you considered just running the script from the command line, e.g.:

    php script.php

and having the script flush out a message every so often to show that it's still running:

    <?php
    while (true) {
        doWork();             // placeholder for the real work
        echo "still alive...";
        flush();
    }
In such cases, I turn on all the development settings in php.ini (on a development server, of course). This displays many more messages, including deprecation warnings.
In my experience debugging long-running PHP scripts, the most common cause was memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...).
I think what you need to find out is the exact time at which it stops (you can record an initial time and keep dumping out the current time minus the initial one). There is something on the server side that is stopping the script. Also, consider calling ini_get to make sure the execution time limit is actually 0. If you want, set the time limit to 30 and then reset it to 30 on EVERY loop iteration; every time you call set_time_limit, the counter resets, and this might allow you to bypass the actual limit. If this still isn't working, there may be something on 1and1's servers that kills the script.
Also, did you try ignore_user_abort?
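A sketch combining both suggestions; doWork() is a placeholder for the real work:

    <?php
    ignore_user_abort(true); // keep running even if the browser disconnects
    $start = time();

    while (true) {
        set_time_limit(30);  // resets the execution-time counter each iteration
        doWork();            // placeholder for the real work
        echo 'elapsed: ' . (time() - $start) . "s\n";
        flush();
    }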
I appreciate everyone's comments, especially James Hartig's; you were very helpful and sent me down the right path.
I still don't know what the problem was. I got it to run on the server over SSH, using the exec() command as well as ignore_user_abort(), but it would still time out.
So I just had to break it into small pieces that each run for only about 2 minutes, and use session variables/arrays to store where I left off.
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
I think this is caused by some process monitor killing off "zombie processes" in order to free up resources for other users.
Run the exec using "2>&1" to log everything, including stderr.
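For example (a sketch; the wrapper script and log filename are illustrative):

    # inside the wrapper script invoked by exec(): send stderr to the log too
    php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php >> run.log 2>&1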
In my output I managed to catch this:
    ...
    script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace (which is excellent, BTW), but I think most shared hosting providers impose this kind of resource/process control mechanism just to be sane.