execute system command on rails - not working in production - ruby-on-rails-3

In development everything works great. In production, however, this line of code in a controller is not working:
output = `mclines #{paramFileName} #{logFileName} #{outputFileName}`
where mclines is a C program and the rest are names of files. mclines is not executed on the production server, but it is on my laptop. I have no idea what to fix and have been trying different things for hours, but the truth is that I'm quite lost. In production SSL is on; that's the only major difference.
If I execute the command on the shell, it gets executed. When I say it doesn't get executed, I mean that the first thing it should do is print some info to a file, and it doesn't. The server, like my laptop, is running Ubuntu, but I have no idea which logs could be useful to read. The system log had nothing useful.
Any ideas that can lead to find the culprit are welcome.

Make sure mclines really exists on the production server, and use the full path to the mclines executable, as in
output = `/full/path/to/mclines #{paramFileName} #{logFileName} #{outputFileName}`.

Try printing out the exit status code with:
$?.to_i
after the command, or you can always use popen3/popen4 for better handling of input/output for system commands.
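As a rough sketch of that advice, assuming the same file-name variables as in the question and the placeholder path from the answer above, Open3.capture3 collects stdout, stderr, and the exit status in one call, which should make the production failure visible:

require 'open3'

cmd = "/full/path/to/mclines #{paramFileName} #{logFileName} #{outputFileName}"
stdout, stderr, status = Open3.capture3(cmd)

unless status.success?
  # Log everything so the failure shows up in the Rails log.
  Rails.logger.error("mclines failed (exit #{status.exitstatus}): #{stderr}")
end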

How can an uncalled test affect another in Go?

I have a test function TestJobqueue() in https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go that I can call in isolation: go test -tags netgo ./jobqueue -v -run 'TestJobqueue$'.
I recently started getting test failures related to boltdb (one of my dependencies) bombing out with signal SIGBUS: bus error code panics, or just normally failing tests because the database couldn't be opened. But only when working off an NFS mounted directory. Fair enough, I or boltdb have some kind of NFS-related bug.
But the thing I can't wrap my head around is that I only get these errors when an entirely different test function exists.
As per the comments in TestREST() in https://github.com/VertebrateResequencing/wr/blob/92fb61ccd7819c8f1edfa8cce8468c4250d40ea7/jobqueue/rest_test.go, if I call Serve(serverConfig) (a function in the package being tested, a function call which is made many times in TestJobqueue() and other test functions) in that test function, TestJobqueue() fails. If I don't, it doesn't.
In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running.
How is this possible?
Edit: to address some points brought up by the first answer, TestJobqueue() is being run in isolation. No other test runs before or after it. If the database file already exists, Serve() results in those files being deleted first, then a new one created to run the new set of tests. The odd thing that I'm seeking an answer for is how an unexecuted function can have this side effect. I can demonstrate it is really unexecuted by beginning or ending TestREST() with a panic call: the output of that panic is never seen, but TestJobqueue() failure can still be controlled by the boolean in TestREST() (if the panic comes at the end).
Edit2: this turns out to be caused by an unusual thing I do in TestJobqueue(), which is to call go test on itself. Needless to say, if you do this, strange things can happen...
In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running.
This is not a great summary. Your test starts a server. The other test starts a server too; clearly, the problem is there. You appear to have commented out the bit of code that stops the server at the end of the test? You can't run two servers on the same port.
You probably have a port conflict or some network condition that is triggered by running the two servers at once, because they both appear to use a similar (identical?) config loaded like this:
config := internal.ConfigLoad("development", true)
Running with no config uses default values, avoiding the conflict, running with config causes the conflict. So to pin it down, try creating a config with one setting at a time till you find the config setting that causes the problem (most likely Port or WebPort). Alternatively, make sure the tests stop the server at the end.
[EDIT] Looks like you have narrowed it down to the DBFile config setting by changing one at a time. This implies the server starts a new db instance - if both tests try to use the same file for a new db, this would cause contention, and the second test to run would fail.
It's not entirely clear from your description above what you're doing or what the problem is, so you could try to improve that to state exactly the sequence of actions and the problem. If for example you have previously run a test which creates a db, it could affect later test runs because of the presence of a db file, so your tests are not completely independent.
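To make the isolation advice concrete, here is a minimal sketch of the usual pattern. Serve's return values, the Stop method, and the DBFile field are assumptions based on the description above, and it presumes the test file's existing imports plus path/filepath:

func TestJobqueueIsolated(t *testing.T) {
    config := internal.ConfigLoad("development", true)

    // Give this test its own database file so no other test can
    // contend for it; t.TempDir() is cleaned up automatically.
    config.DBFile = filepath.Join(t.TempDir(), "test.db")

    server, err := Serve(config)
    if err != nil {
        t.Fatal(err)
    }
    // Always stop the server, even if the test fails part-way,
    // so the port and DB file are released for the next test.
    defer server.Stop()

    // ... test body ...
}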
[EDIT 2 - after further edits to question]
If commenting out TestREST entirely solves your problem (as does a panic before it starts), and given that changing it breaks the other test, you are executing TestREST somehow.
Looking at your code for jobqueue_test, it appears to invoke go test, so you might be running more tests than you assume. Given you don't see the panic output, I'd suspect your use of exec.Command in this big test. Try removing bits of the failing test till it works, to narrow down exactly which invocation is running the other test. Calling go test within a test is pretty unusual!
https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go#L2445

Get a return value from a screen'd command

I'm running a process in a screen (on Ubuntu 13.10, if it matters). I can execute a command within that screen with:
screen -p 0 -X eval 'stuff \"$command\"\015'
I'm not 100% sure what this command is doing to begin with, though it's functioning correctly. The reason behind it is that I'm running a Minecraft server (still), and this attaches to the correct screen and types the command onto its running command line. So that's good, so far.
But what I'd like is to be able to run this command with a return value. So for example, if I were to run a "list" command, it'd tell me how many people and who is online, but I need to capture that output and put it somewhere.
Anyone know of a way to accomplish this? I can't tell the minecraft server command line to redirect the output somewhere else since it doesn't have direct command line access, so the only way I could do this would be to grab all output of the screen while I'm connected ... but I'm not sure if that's possible.
I think you may be able to view the logs? Can you not view a running log of the server?
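If not, screen itself can capture the window contents. A rough sketch, assuming a session named minecraft and that a short wait is enough for the command to finish:

#!/bin/sh
# Send the "list" command to window 0 of the running session.
screen -S minecraft -p 0 -X stuff 'list\015'

# Give the server a moment to print its response.
sleep 2

# Dump the currently visible window contents to a file, then read it.
screen -S minecraft -p 0 -X hardcopy /tmp/screen-output.txt
tail -n 20 /tmp/screen-output.txt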

Hiawatha CGI: writing to the client as soon as a command gets executed

I have a CGI script run by Hiawatha web server that needs to
return some data to the client,
do some system work (may take 20-30 seconds)
and then return yet more data.
So far I haven't been able to achieve this result: the script doesn't write data as the commands get executed; rather, it writes everything in a single shot when its execution ends. Is it even possible to achieve with Hiawatha what I described above? Thank you.
I figured it out: you need to set the WaitForCGI = yes option in the config file.
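Note that even with WaitForCGI set, the script itself has to flush between the two writes or the output still arrives in one shot. A minimal sketch in Python, with the 20-30 seconds of system work simulated by a sleep:

#!/usr/bin/env python3
import sys
import time

# CGI header, then the first chunk of data.
sys.stdout.write("Content-Type: text/plain\r\n\r\n")
sys.stdout.write("first chunk of data\n")
sys.stdout.flush()  # push it to the client before the slow work starts

time.sleep(25)  # stand-in for the 20-30 seconds of system work

sys.stdout.write("second chunk of data\n")
sys.stdout.flush()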

Making WSGI debug framework?

I'm trying to learn about using mod-wsgi, and I thought the best way would be for me to write my own simple 'debug' framework. I am NOT looking to use someone else's debug framework at this time.
The problem is, I'm not sure how to get started.
Specifically, I have a script working now where there is a WSGIAlias to my python script:
/testscript -> /home/bill/testscript.py [this works ok]
There are several annoying problems here, namely that if there is a syntax error of any kind, Apache returns a 500 server error and I have to check the server logs, which is annoying.
What I would like to do is to have some kind of framework called, that then encapsulates my script, this way when an error occurs (like a syntax error in testscript.py or any other type of exception), I can catch the exception, and return a nicely formatted HTML file with debugging information.
My question is, how do I 'pass' the script I want to run as an argument to my debug script?
From the command line, it would be easy, I would do something like this:
$ python debug.py myscript.py
How can I do this using WSGI though? Any ideas?
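One common pattern is a small wrapper WSGI application that imports the target script by module name (the WSGI equivalent of passing it as an argument) and turns any exception, including a syntax error at import time, into an HTML traceback. A minimal sketch, where the module name testscript and its application callable are assumptions:

# debug.py - point the WSGI alias at this file instead of testscript.py
import html
import importlib
import sys
import traceback

TARGET = "testscript"  # hypothetical module name of the script to wrap

def application(environ, start_response):
    try:
        # Import (and re-import) on every request so edits show up.
        module = importlib.import_module(TARGET)
        importlib.reload(module)
        return module.application(environ, start_response)
    except Exception:
        body = ("<html><body><h1>Debug traceback</h1><pre>%s</pre></body></html>"
                % html.escape(traceback.format_exc())).encode("utf-8")
        # Passing exc_info allows start_response to be called again if
        # the wrapped app had already started a response.
        start_response("500 Internal Server Error",
                       [("Content-Type", "text/html"),
                        ("Content-Length", str(len(body)))],
                       sys.exc_info())
        return [body]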

PHP script stops running arbitrarily with no errors

I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
I've been over every single possible thing I could think of:
set_time_limit (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point that the script is stopped is not consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test, open up the command line and run the code in there. The script should run indefinitely.
Have you considered just running the script from the command line, eg:
php script.php
and have the script flush out a message every so often to show that it's still running:
<?php
while (true) {
    doWork();
    echo "still alive...";
    flush();
}
In such cases, I turn on all the development settings in php.ini, of course on a development server. This displays many more messages, including deprecation warnings.
In my experience of debugging long running php scripts, the most common cause was memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...)
I think what you need to find out is the exact time at which it stops (you can record an initial time and keep dumping out the current time minus the initial). There is something on the server side that is stopping the script. Also, consider doing an ini_get to make sure the execution time is actually 0. If you want, set the time limit to 30 and then, on EVERY loop iteration, set it to 30 again. Every time you call set_time_limit, the counter resets, and this might allow you to bypass the actual limit. If this still isn't working, there is something on 1and1's servers that might kill the script.
Also, did you try ignore_user_abort()?
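A minimal sketch combining those suggestions; doWork() is the hypothetical unit of work from the loop above:

<?php
// Keep running even if the browser disconnects.
ignore_user_abort(true);

$start = time();
while (true) {
    // Reset the execution-time counter on every iteration.
    set_time_limit(30);
    doWork();
    // Dump elapsed time so the exact point of death is visible.
    echo "elapsed: " . (time() - $start) . "s\n";
    flush();
}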
I appreciate everyone's comments. Especially James Hartig's, you were very helpful and sent me on the right path.
I still don't know what the problem was. I got it to run on the server over SSH, just by using the exec() command as well as ignore_user_abort(). But it would still time out.
So, I just had to break it into small pieces that will run for only about 2 minutes each, and use session variables/arrays to store where I left off.
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
I think this is caused by some process monitor killing off "zombie processes" in order to allow resources for other users.
Run the exec using "2>&1" to log anything including stderr.
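For example (a sketch, assuming the long-running script is the myscript.php from the log line below):

<?php
// Run the script in a shell, merging stderr into stdout so messages
// from the OS (like a kill) are captured too.
exec('php myscript.php 2>&1', $output, $status);
file_put_contents('run.log', implode("\n", $output) . "\nexit: $status\n");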
In my output I managed to catch this:
...
script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace (which is excellent, BTW) but I think most shared hosting providers impose this kind of resource/process control mechanism just to be sane.