Set timeout for system call in Rails - ruby-on-rails-3

From Rails, I make a system call to wget:
system("wget", ...)
I want to set a timeout for this call so that if it takes too long (which likely means too many files or a very large file being downloaded), it is stopped and an error is returned to the user, so that my server is not overloaded. How can I do that?

Do you specifically need to run the call in a subshell like that? If not, use Timeout and backticks:
require 'timeout'

Timeout.timeout(3) do
  puts `tree /` # raises Timeout::Error, which you can rescue and handle
end
Note that the timeout raises in your Ruby thread; the child process itself is not killed and may keep running, so pair this with the command's own limits where possible.
If you do need to run it externally, though, I'd go with something like Subexec

In general, try wrapping the call in a SystemTimer. https://rubygems.org/gems/SystemTimer
In your particular case, try wget's own timeout flag: system("wget", "-T", timeout_in_seconds.to_s, url) (url being your download URL). Note that -T sets wget's network timeout (DNS, connect, and read), not a cap on the total download time.
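For comparison, the same idea outside Ruby: Python's subprocess.run kills the child process and raises when its timeout expires, which is the behaviour the Timeout-plus-backticks approach above does not give you. A minimal sketch (the URL and timeout values are placeholders):
import subprocess

try:
    # -T is wget's network timeout; the timeout= argument caps total runtime.
    subprocess.run(['wget', '-T', '10', 'http://example.com/file'],
                   timeout=30, check=True)
except subprocess.TimeoutExpired:
    print('download took too long and was stopped')
except subprocess.CalledProcessError as exc:
    print('wget failed with exit code', exc.returncode)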

Related

Click: Test click.group commands without running their code

I'm currently writing tests for my application, and therefore I have to test some click.group commands I defined.
Let's say I defined them like this:
import click

@click.group(cls=MyGroup)
@click.pass_context
def myapp(ctx):
    init_stuff()

@myapp.command()
@click.option('--myOption')
def foo(myOption: str) -> None:
    do_stuff()  # change some files, print, create other files
I know that I could use the CliRunner from click.testing. However, I just want to make sure that the command is called, but I DON'T WANT it to execute any code (for example by applying CliRunner.invoke()).
How could this be done?
I couldn't come up with a solution using mocking of foo, for example. Or do I have to execute code, say using the isolated_filesystem() which CliRunner provides?
So the question is: what would be the most efficient way to test my commands when defined like shown above?
Many thanks in advance
You could add a --dry-run flag to your group or some commands and save it inside the context; if the flag is enabled, do not execute any code. Then you can use CliRunner.invoke() with the --dry-run flag enabled and just check that your invocations have happened, without actually executing the code.
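A minimal sketch of that idea (plain @click.group here for brevity; init_stuff, do_stuff, and the option name are placeholders carried over from the question):
import click
from click.testing import CliRunner

def do_stuff():
    # Stands in for the real work; must not run under --dry-run.
    raise RuntimeError('real work executed during a dry run')

@click.group()
@click.option('--dry-run', is_flag=True, default=False)
@click.pass_context
def myapp(ctx, dry_run):
    ctx.obj = {'dry_run': dry_run}

@myapp.command()
@click.option('--myOption', 'my_option')
@click.pass_context
def foo(ctx, my_option):
    if ctx.obj['dry_run']:
        click.echo('foo called (dry run)')
        return
    do_stuff()  # only reached without --dry-run

def test_foo_dry_run():
    runner = CliRunner()
    result = runner.invoke(myapp, ['--dry-run', 'foo', '--myOption', 'x'])
    assert result.exit_code == 0
    assert 'foo called (dry run)' in result.output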

vb.net help, how to skip deleting a file without permission

In VB.NET I am making a file cleaner, one that deletes things like temp files. However, with folders like Prefetch and Temp, where some files are in use at the time, is there a way to make the program skip over the undeletable files and clear everything else? Thanks.
You should use the following code on the Button click event (administrator rights required):
Shell("CLEANMGR /d <drive_letter> /sagerun:64")
The following items may be deleted on execution of the above code:
Temporary Internet Files
Temporary Setup Files
Downloaded Program Files
Old CHKDSK Files
Recycle Bin
Temporary Files
Windows DUMP and Error logs
Source of using CleanMGR: cleanmgr command line switches...
You need to wrap your delete routine in a Try/Catch statement:
Try
    System.IO.File.Delete([FILENAME])
Catch ex As Exception
    ' Log out to console
    Console.WriteLine(ex.Message)
End Try
If your goal is to write this utility (so you aren't interested in re-using an existing utility that does the same thing), there isn't a good way to check to see if you can delete a file before you try. Some things you can't test for, and even when you can, there's no guarantee that the file will be in the same state when you try to delete it (even a very short time later), so your operation might fail anyway.
Thus, the only way to go is to put a Try/Catch around every file operation (or accept the possibility that a failure might lead to a crash).
The resulting code is something like this (CleanDirectory and rootPath are placeholder names for the containing routine and the folder being cleaned):
For Each item As String In System.IO.Directory.EnumerateFileSystemEntries(rootPath)
    Try
        If System.IO.Directory.Exists(item) Then
            ' If directory, try to enter it. If successful, recurse.
            CleanDirectory(item)
        Else
            ' If file, try to delete it.
            System.IO.File.Delete(item)
        End If
    Catch ex As Exception
        ' Can log or just skip; this is a valid case to eat an exception.
        ' At a minimum, you might see a System.IO.IOException or a couple of
        ' different security-related exceptions.
    End Try
Next
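For comparison, the same skip-on-failure pattern sketched in Python (clean_tree and target_dir are hypothetical names):
import os

def clean_tree(target_dir):
    # Walk bottom-up so directories emptied along the way can be removed too.
    for dirpath, dirnames, filenames in os.walk(target_dir, topdown=False):
        for name in filenames:
            try:
                os.remove(os.path.join(dirpath, name))
            except OSError as exc:
                # File in use or access denied: log and move on.
                print('skipped:', exc)
        for name in dirnames:
            try:
                os.rmdir(os.path.join(dirpath, name))
            except OSError as exc:
                print('skipped:', exc)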

logging program info to file in twisted

I have written some code in Twisted. I need to write the log information when the errback registered with d.addErrback(on_failure) fires:
from twisted.python import log

log.startLogging(open('/home/crytek.etl/foo.log', 'w'))

def on_failure(failure):
    log.msg(failure)

d.addErrback(on_failure)
Is this the correct way of implementing this? I don't get any values written to the file. Can someone suggest how this should be implemented?
You probably want to consider opening your log file in append mode. Otherwise, every time your application starts you'll wipe out all your old logs. This could make it appear as though the log messages you're expecting to see aren't being logged.
from twisted.python import log
log.startLogging(open('/home/crytek.etl/foo.log', 'a'))
You should also log failures using log.err instead of log.msg:
def on_failure(failure):
    log.err(failure)
And you can do this more easily since on_failure has exactly the same signature as log.err. Just write:
d.addErrback(log.err)
Also, I lied: log.err doesn't have exactly the same signature as on_failure. It is better: it accepts a second argument which is used to present a header for the failure in the log file. You can use it like this:
d.addErrback(log.err, "Frobbing the widget failed")
This will present "Frobbing the widget failed" together with the failure in the log file.
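Putting it together, a minimal runnable sketch (the log path is taken from the question; frob_the_widget is an invented operation that fails):
from twisted.internet import defer
from twisted.python import log

log.startLogging(open('/home/crytek.etl/foo.log', 'a'))

def frob_the_widget():
    raise RuntimeError('widget is stuck')  # placeholder failure

# maybeDeferred turns the raised exception into a fired errback.
d = defer.maybeDeferred(frob_the_widget)
d.addErrback(log.err, "Frobbing the widget failed")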

Making sure data is loaded

I use the following command to load data.
/home/bigquery/bq load --max_bad_record=30000 -F '^' company.junelog entry.gz country:STRING,telco_name:STRING,datetime:STRING, ...
It has happened that when I got a non-zero return code the data was still loaded. How do I make sure whether the command succeeded or not? Checking the return code does not seem to help. There are times when I loaded the same file again because I got an error, but the data was already available in BigQuery.
You can run bq show -j on the load job and check its status.
If you are writing code to do the load and so don't know the job id, you can pass your own job id into the load operation (as long as it is unique), so you will know which job to check.
For instance you can run
/home/bigquery/bq load --job_id=some_unique_job_id --max_bad_record=30000 -F '^' company.junelog entry.gz country:STRING,telco_name:STRING,datetime:STRING, ...
then
/home/bigquery/bq show -j some_unique_job_id
Note if you are creating new tables for every load (as opposed to appending), you could use the write disposition WRITE_EMPTY to make sure you only did the load if the table was empty, thus preventing adding the same data twice. This isn't directly supported in bq.py, but you could use the underlying bigquery_client.py to make this call, or use the REST api directly.
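A sketch of that flow from Python (the bq path, flags, table, and schema are taken from the question; the schema there is truncated, so it is abbreviated here as well):
import subprocess
import uuid

job_id = 'load_%s' % uuid.uuid4().hex

# Kick off the load with a job id we chose ourselves.
subprocess.run(
    ['/home/bigquery/bq', 'load', '--job_id=%s' % job_id,
     '--max_bad_record=30000', '-F', '^',
     'company.junelog', 'entry.gz',
     'country:STRING,telco_name:STRING,datetime:STRING'])

# Regardless of the load's exit code, ask BigQuery what happened to the job.
status = subprocess.run(['/home/bigquery/bq', 'show', '-j', job_id],
                        capture_output=True, text=True)
print(status.stdout)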

Asterisk with new functions

I created a func_odbc write function to insert call recording records into an SQL table:
[R]
dsn=connector
write=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES ('${ARG1}','${ARG2}','${ARG3}','${ARG4}')
prefix=M
and set it in the dialplan:
exten => _0X.,n,Set(M_R(${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})=)
When I execute it I get an error: ast_func_write: M_R Function not registered.
Note: this is Asterisk on Windows.
The first thing I saw was that you are calling the function incorrectly: you need to be assigning values, not arguments. Try this:
func_odbc.conf:
[R]
dsn=connector
prefix=M
writesql=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES('${VAL1}','${VAL2}','${VAL3}','${VAL4}');
dialplan:
exten => _0X.,1,Set(M_R()=${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})
If that doesn't help you, continue on in my list :)
Make sure func_odbc.so is being loaded by Asterisk. (from the asterisk CLI: module show like func_odbc)... If it's not loaded, it can't "build" your custom odbc query function.
Make sure your DSN is configured in /etc/odbc.ini
Make sure that /etc/asterisk/res_odbc.conf is properly configured
Make sure you're calling the DSN by the right name (I see it happen all the time)
Enable verbose and debug in your Asterisk logging: do a logger reload, core set verbose 5, core set debug 5, and then try the call again. When the call finishes, review the log; you'll see much more output regarding what happened.
Regarding the answer from recluze... not to call you out here, but using a PHP AGI is serious overkill. The func_odbc function works just fine; why create more overhead and potential security issues by calling an external script (which itself has to run through an interpreter on top of Asterisk)?
You should call the func_odbc function as ODBC_connector, where [connector] is the section name used in func_odbc.conf. In the dialplan it would be called like this:
exten => _0x.,n,ODBC_connector(${arg1},${arg2})
I don't really understand the syntax you're trying to use, but how about using AGI (with PHP) for this? Just define your logic in a PHP script and call it from your dialplan as:
exten => _0X.,n,AGI(script-filename.php,${CUSER},${EXTEN},${DTIME})
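For reference, here is what the same AGI approach could look like in Python rather than PHP (a sketch: the argument order matches the dialplan call above, and the database insert is left as a placeholder):
#!/usr/bin/env python
# Minimal AGI skeleton: Asterisk sends 'agi_*' headers on stdin followed by
# a blank line; the dialplan arguments arrive in sys.argv.
import sys

def read_agi_env():
    env = {}
    while True:
        line = sys.stdin.readline().strip()
        if not line:
            break
        key, _, value = line.partition(': ')
        env[key] = value
    return env

env = read_agi_env()
cuser, exten, dtime = sys.argv[1:4]

# ... the INSERT INTO ast_records from the question would go here, using
# whatever database driver is available ...

# Send one AGI command back and read Asterisk's response line.
sys.stdout.write('VERBOSE "logged call data for %s" 1\n' % cuser)
sys.stdout.flush()
sys.stdin.readline()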