Is it possible to create run configurations where it doesn't wait for the external tool to exit before launching?
I'm currently trying to get Dart's pub serve to run on my project directory before opening the Dartium browser on localhost:8080, but it seems the run configuration just waits for pub to exit before launching the browser. That won't happen, since pub serve keeps printing output from the local server.
Any ideas?
Probably not the best answer, but this should work:
Create a wrapper script, say pub_async.bat, that asynchronously calls pub.bat with the correct command-line arguments and returns immediately.
You may lose the ability to "stop" the process, and so shutting down / cleaning up may need to be handled in a different way.
I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 server using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I try to use sshclient.exec_command("show vlan"), it works great: it runs the command and exits. However, I don't know how to run more than one command with a single exec_command call.
If I run sshclient.exec_command("configure") to access the configuration shell, the command completes and I believe the channel is closed, because my next command, sshclient.exec_command("interface vlan ..."), fails since the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command that would be ideal.
Instead, I have resorted to a function like the following:
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope from the function call?
Please advise if there is a more reasonable way to do this.
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell, it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command.
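For the "wait for the prompt" approach, a minimal sketch along these lines may help; the host address, credentials, and the "#" prompt string are assumptions you would adjust for your switch:
import time
import paramiko

# Minimal sketch: read from the channel until the prompt reappears before
# sending the next command. The prompt ("#") and timeout are assumptions.
def wait_for_prompt(chan, prompt=b"#", timeout=10):
    buf = b""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if chan.recv_ready():
            buf += chan.recv(4096)
            if buf.rstrip().endswith(prompt):
                break
        else:
            time.sleep(0.1)
    return buf.decode(errors="replace")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.1", username="admin", password="secret")  # placeholder credentials

chan = client.invoke_shell()
wait_for_prompt(chan)                      # swallow the banner and first prompt
for cmd in ("configure terminal", "interface vlan 3",
            "description VLAN-TEST", "end"):
    chan.send(cmd + "\n")
    print(wait_for_prompt(chan))           # output of each command, up to the next prompt

client.close()
Reading up to the prompt also returns each command's output, which makes failures easier to detect than sleeping for a fixed interval.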
I am posting data to a REST API using HTTP POST requests. The JMeter setup is single-threaded. As I am making 200,000 POST calls, I want to be able to pause the run and resume it whenever needed.
NOTE: I am running JMeter in non-GUI mode on a Linux server, which will not have a GUI for anything.
The other important thing is that I can't program the pauses before the run starts, because I don't know in advance when I will need to pause or resume the suite.
A JMeter-specific solution would be to use the Beanshell Server and Constant Throughput Timer combination.
Add a Constant Throughput Timer to your test plan and set your desired throughput in requests per minute. If you don't want to limit JMeter, set it to something very high using the __P() function:
${__P(throughput,10000000)}
Enable the Beanshell Server by adding the next two lines to the user.properties file:
beanshell.server.port=9000
beanshell.server.file=../extras/startup.bsh
Create two scripts:
suspend.bsh containing the next line:
setprop(throughput, 0);
and resume.bsh containing the next line:
setprop(throughput, 10000000);
Whenever you need to suspend your test, invoke the following command from the "lib" folder of your JMeter installation:
java -jar bshclient.jar localhost 9000 /path/to/your/suspend.bsh
Check out the How to Change JMeter's Load During Runtime article for more details.
A Linux-specific solution would be to use the kill command:
to suspend: kill -SIGSTOP JMETER_JAVA_PID
to continue: kill -SIGCONT JMETER_JAVA_PID
where JMETER_JAVA_PID is the process ID of the JVM that is running JMeter; you can find it using the jps command.
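If you prefer to script the suspend/resume rather than typing the kill commands by hand, a minimal Python sketch could look like this; using jps -l to locate the JVM and filtering on "ApacheJMeter" are assumptions about how the JMeter process shows up on your server:
import os
import signal
import subprocess

# Hypothetical helper: find the JMeter JVM via `jps -l`; the "ApacheJMeter"
# filter is an assumption about how the process is listed on your machine.
def jmeter_pid():
    for line in subprocess.check_output(["jps", "-l"], text=True).splitlines():
        pid, _, name = line.partition(" ")
        if "ApacheJMeter" in name:
            return int(pid)
    raise RuntimeError("JMeter JVM not found")

pid = jmeter_pid()
os.kill(pid, signal.SIGSTOP)   # suspend the test (same effect as kill -SIGSTOP)
# ... later, when you want to continue ...
os.kill(pid, signal.SIGCONT)   # resume the test (same effect as kill -SIGCONT)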
I have a simple twisted application which I run using a systemd service, executing a script, which subsequently executes a .tac file.
The application is structured as a JSON RPC endpoint (fastjsonrpc), built into a t.w.r.Resource, which is in a t.w.s.Site, served by a t.a.i.TCPServer, and the whole thing packed into a t.a.Application. This works fine.
Where I do run into trouble is when I try to warm up caches at startup. This warm-up process is pretty slow (~300 seconds), and makes systemd timeout and kill the process. Increasing the timeout is not really a viable option, since I wouldn't want this to block system boot.
Analogous code is used in a separate stack running on Flask from within Apache and wsgi. That server starts itself off and lets systemd go on while it takes its time building the caches. This behaviour is fine for me.
I've tried calling the warmup function using the following within the setup function of the t.w.r.Resource:
reactor.callLater(1, ep.warmup, None)
I've not yet tried using this from within systemd, and have been testing it from twistd directly on the command line. The server does work as expected, however it no longer responds to SIGINT (^C). Removing the callLater is all that's needed to let the server respond to SIGINT.
If the warmup function is called directly (not by callLater, i.e., the arrangement which makes systemd give up while waiting for warm up to complete), the resulting server also continues to respond to SIGINT.
Is there a better / good way to handle this sort of long-running warmup code?
Why would twistd / the reactor not respond to SIGINT? Am I missing something here?
Twisted is single-threaded. It sounds like your "cache warmup" code is blocking the reactor for those 300 seconds, which would also explain why SIGINT is not handled until it finishes. One easy way to fix this would be to use deferToThread to let the warmup run without blocking the reactor.
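A minimal standalone sketch of that approach; the warmup function here is a stand-in for ep.warmup from the question, and in the real .tac application you would start it the same way rather than calling reactor.run() yourself:
import time
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def warmup(_arg=None):
    # stand-in for the real ~300 second cache-building work
    time.sleep(5)
    return "caches ready"

def start_warmup():
    # deferToThread runs the blocking warm-up in the reactor's thread pool,
    # so the reactor keeps serving requests and handling SIGINT meanwhile
    d = deferToThread(warmup, None)
    d.addCallback(lambda result: print("warmup finished:", result))
    d.addErrback(lambda failure: print("warmup failed:", failure))

reactor.callWhenRunning(start_warmup)
reactor.run()
Note that the warm-up code has to be safe to run outside the reactor thread; hand any results back to Twisted objects via the Deferred callbacks (or reactor.callFromThread).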
Is there a way to know the return code or process ID of the process which gets executed when the privileged helper tool is installed as a launch daemon and launched via SMJobSubmit()?
I have an application which, in order to execute some tasks in a privileged manner, uses the SMJobSubmit API as mentioned here.
Now in order to know whether the tasks succeeded or not, I will have to do one of the following.
The best option is to get the return code of the executable that ran.
Another option would be if I could create a pipe between my application and the launchd.
If the above two are not possible, I will have to resort to some hack like writing a file in /tmp location and reading it from my app.
I guess SMJobSubmit internally submits the executable with a launch daemon dictionary to launchd, which is then responsible for its execution. So is there a way I could query launchd to find out the return code for the executable run with the label "mylabel"?
There is no way to do this directly.
SMJobSubmit is a simple wrapper around a complicated task. It also returns synchronously despite launching a task asynchronously. So, while it can give you an error if it fails to submit the job, if it successfully submits a job that fails to run, there is no way to find that out.
So, you will have to explicitly write some code to communicate from your helper to your app, to report that it's up and running.
If you've already built some communication mechanism (signals, files, Unix or TCP sockets, JSON-RPC over HTTP, whatever), just use that.
If you're designing something from scratch, XPC may be the best answer. You can't use XPC to launch your helper (since it's privileged), but you can manually create a connection by registering a Mach service and calling xpc_connection_create_mach_service.
I am running a rewrite map with an external rewrite program (prg) in Apache 2 that may produce an error and die.
When the rewrite map is not running any more, the system obviously doesn't function properly.
So I wanted to start a simple wrapper shell script that itself executes the map program (which is written in PHP) and restarts it if it dies:
#!/bin/bash
until /usr/bin/php /somepath/mymap.php; do
echo "map died but i will restart it right away!"
done
If I try that in the shell by hand, it works fine; however, it does not work when started by the web server. The Apache mod_rewrite documentation describes the prg map type as follows:
...and then communicates with the rewriting engine via its stdin and
stdout file-handles. For each map-function lookup it will receive the
key to lookup as a newline-terminated string on stdin. It then has to
give back the looked-up value as a newline-terminated string on stdout
or the four-character string ``NULL'' if it fails...
The reason seems pretty clear to me: the first script takes stdin but doesn't redirect it to the sub-script.
I guess I somehow need to define a descriptor using exec and redirect stdin/stdout of the scripts properly. But how do I do that?
It's a common problem that a script works when executed "by hand" but does not work when executed indirectly (via cron or from Apache).
Usually the root cause is one of:
your script needs some extra environment variable (PATH, LD_LIBRARY_PATH, etc.)
your script requires a terminal
The first thing to do is to grab some debug info, so add this to your script:
env > "/tmp/$(basename "$0").env" # dump the environment (basename strips any directory part of $0)
and get stderr:
... /usr/bin/php /somepath/mymap.php 2>"/tmp/$(basename "$0").stderr" ...
This may lead you to the solution.
If your script requires a terminal and you cannot fix that, you can run your script via GNU screen.
Good luck.
Michał Šrajer has given one very common cause of trouble (environment). You should certainly be sure that the environment is sufficiently well set up, because Apache sets its own environment rigorously and does not pass on any inherited junk values; it only passes what it is configured to pass (SetEnv and PassEnv directives, IIRC).
Another problem is that you think your mapping process will fail; that's worrying. Further, it is symptomatic of yet another problem, which I think is the main one.
The first time the map process is run, it will read the request from the web server, but if the mapping fails, you rerun it; however, the web server is still waiting for the output from the original request, and the new map process is waiting for input, so there's an impasse.
Note that a child process automatically inherits the standard input and standard output of its parent unless you do something to change it.
If you think things might fail, you'll need to capture the standard input so that when you rerun the program, you can resupply the input (though why it would work on the second time when it failed on the first is a separate mystery).
Maybe:
if tee /tmp/xx.$$ | /usr/bin/php /somepath/mymap.php
then : OK
else
    # rerun the map, resupplying the captured input, until it succeeds
    until /usr/bin/php /somepath/mymap.php < /tmp/xx.$$
    do  echo "map died but I will restart it right away!"
    done
fi
rm -f /tmp/xx.$$
Unresolved issues include:
You should add traps to ensure that the temporary file is removed;
You should probably not use /tmp as the directory;
The messages probably do not get sent to the web server and from thence to the client browser until the overall script terminates, so the echoed messages simply mess up the start of the response;
There is no limit on the number of failures;
There is no logging of the failures;
There is no attempt to fix the problem that caused the failure;
And there are probably others I've not thought of yet.
until is not a valid keyword, you probably have it aliased, and aliases do not work in scripts. (My mistake, it seems it is valid.)
Regardless, this is what you want to do:
while true; do
    /usr/bin/php /somepath/mymap.php
done
If this also fails, then yes: either your program expects a terminal or something is missing from your environment.