I am using VMware with an Ubuntu guest (call it OS-1).
Inside OS-1 I run another VMware Ubuntu guest (OS-2).
I would like to send a command from OS-1 to OS-2 that executes a specific script file, and also receive the stdout from OS-2 back in OS-1.
Is it possible?
OS-1:
Receives the command to execute test.py from the web server.
Sends a command such as "python test.py" to OS-2.
OS-2:
Receives the command from OS-1.
Returns the stdout result, such as "test script", to OS-1.
*** WebServer (in OS-1) ---> OS-1 ---> OS-2
test.py
print("===========")
print("test script")
The most obvious solution is to create an internal network between the two virtual machines.
Once the machines are connected, it is relatively simple to execute commands; for example, you can use SSH (hint: https://stackoverflow.com/a/3586168/3188346).
It is worth noting that this solution will also work if you later decide to use another VM provider or dedicated servers.
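A minimal sketch of that approach with Paramiko, assuming OS-2 runs an SSH server reachable from OS-1; the address and credentials below are placeholders, not values from the question:
import paramiko

# Connect from OS-1 to the SSH server running on OS-2 (placeholder address/credentials).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.56.102", username="ubuntu", password="secret")

# Run the script on OS-2 and collect its stdout.
stdin, stdout, stderr = client.exec_command("python test.py")
print(stdout.read().decode())   # expected to print "===========" and "test script"

client.close()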
I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 switch using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I use sshclient.exec_command("show vlan"), it works great: it runs the command and exits. However, I don't know how to run more than one command with a single exec_command.
If I run sshclient.exec_command("configure") to enter the configuration shell, the command completes and I believe the channel is closed, because my next command, sshclient.exec_command("interface vlan ..."), fails: the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command, that would be ideal.
Instead I have resorted to a function as follows:
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope after the function call?
Please advise if there is a more reasonable way to do this.
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell, it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command.
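A rough sketch of the prompt-waiting approach, assuming the already connected sshClient from the question and that each OS10 prompt ends with "#" (both are assumptions, not tested against a real switch):
import time

def wait_for_prompt(chan, prompt=b"#", timeout=10):
    # Read from the channel until the prompt appears or the timeout expires.
    buf = b""
    deadline = time.time() + timeout
    while not buf.rstrip().endswith(prompt) and time.time() < deadline:
        if chan.recv_ready():
            buf += chan.recv(4096)
        else:
            time.sleep(0.1)
    return buf.decode(errors="replace")

chan = sshClient.invoke_shell()
for cmd in ["configure", "interface vlan 3", "description VLAN-TEST", "end"]:
    chan.send(cmd + "\n")
    print(wait_for_prompt(chan))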
I am running a simple Spark Streaming application that consists of sending messages through a socket server to the Spark Streaming context and printing them.
This is my code, which I am running in the IntelliJ IDE:
SparkConf sparkConfiguration = new SparkConf().setAppName("DataAnalysis").setMaster("spark://IP:7077");
JavaStreamingContext sparkStrContext = new JavaStreamingContext(sparkConfiguration, Durations.seconds(1));
JavaReceiverInputDStream<String> receiveData = sparkStrContext.socketTextStream("localhost", 5554);
I am running this application in a standalone cluster mode, with one worker (an Ubuntu VM) and a master (my Windows host).
This is the problem: when I run the application, I see that it successfully connects to the master, but it doesn't print any lines;
it just stays this way permanently.
If I go to the Spark UI, I find that the Spark Streaming context is receiving inputs, but they are not being processed.
Can someone help me please? Thank you so much.
You need to perform the steps below.
sparkStrContext.start(); // Start the computation
sparkStrContext.awaitTermination(); // Wait for the computation to terminate
Once you do this, you need to post messages to port 5554. For that, you will first need to run Netcat (a small utility found in most Unix-like systems) as a data server and start pushing the stream.
For example, you need to do the following.
TERMINAL 1:
# Running Netcat
$ nc -lk 5554
hello world
TERMINAL 2: Running your streaming program
-------------------------------------------
Time: 1357008430000 ms
-------------------------------------------
hello world
...
...
You can check a similar example here.
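For reference, here is a minimal end-to-end sketch of the same pattern using the PySpark streaming API; the question's code is Java, but the sequence (declare an output operation, then start(), then awaitTermination()) is the same, and the master URL, host, and port below are placeholders:
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

conf = SparkConf().setAppName("DataAnalysis").setMaster("spark://IP:7077")  # placeholder master URL
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 1)                    # 1-second batch interval

lines = ssc.socketTextStream("localhost", 5554)  # same socket as in the Java code
lines.pprint()                                   # output operation: print each batch

ssc.start()                                      # start the computation
ssc.awaitTermination()                           # wait for the computation to terminate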
I have installed and configured tensorflow serving on an "AWS t2.large Ubuntu 14.04" server.
When I attempt to test the server with the mnist_client utility by executing the command, bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000, I receive the following error message:
E0605 05:03:54.617558520 1232 tcp_client_posix.c:191] failed to connect to 'ipv4:127.0.0.1:9000': timeout occurred
Any idea how to fix this?
I haven't heard of anything like this, but I did note that (at least with other test clients) requests would time out when the server was not up and ready yet. So my guess is that the server is not up yet, or perhaps in some other bad state.
I ran into the same problem before. The root cause was that mnist_client was run on my local machine instead of on the server, while the command connects to localhost: bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
It succeeds when I run the mnist_client utility on the server.
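In other words, run the client on the same machine as the serving process, so that localhost:9000 actually points at it (logging into the server first, e.g. over SSH, is assumed here):
# on the AWS instance that runs TensorFlow Serving:
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000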
I'm new to Fabric, so this might have a simple answer I've missed due to bad search terminology.
I'm trying to start a new Ubuntu EC2 instance in AWS, then connect to it with Fabric and have it execute several commands. However, it seems there is a problem with Fabric's SSH connection; maybe I'm defining some env variable wrong?
@task  # starts new EC2 instance and sets env variables
def prep_deploy():
    # code to start new EC2 instance, named buildhost
    env.hosts = [buildhost.public_dns_name]
    env.user = "ubuntu"
    env.key_filename = ".../keypair.pem"
    env.port = 22

@task
def deploy():
    run("echo $HOME")  # code fails here
    ....
I run fab prep_deploy deploy, since I read you need a new task for the new env variables to take effect.
I get
Fatal error: Timed out trying to connect to ...amazonaws.com (tried 1 time)
Underlying exception: timed out
The security groups for the instance are open to SSH: I can connect through PuTTY. In fact, if I empty the `env.host_string` variable at the start of deploy(), when it prompts me to manually input a host, I can type in "ubuntu@...amazonaws.com:22", with the host name exactly as seen in the output at the task start, and it will connect to the instance. But I can't figure out how to manipulate the environment variables so that it understands the host name.
It looks like your Fabric settings and use of the env variables are correct; I was able to use the code you provided to connect to my own Ubuntu VM. I am wondering if you are having a connection issue because the Amazon instance is not fully booted and ready for connections when your script runs the second task. I have run into that issue on different VM hosts.
I added the following code to check and retry the connection; this might help you:
import socket
import time

def waitforssh():
    s = socket.socket()
    address = env.host_string
    port = 22
    while True:
        time.sleep(5)
        try:
            s.connect((address, port))
            return
        except Exception as e:
            print("failed to connect to %s:%s" % (address, port))
            pass
Insert the function call into your deploy task:
def deploy():
    waitforssh()
This should test the connection: if the port does not respond, it will wait 5 seconds and try again.
That could explain why your second attempt to connect works.
I just encountered and circumvented a problem in Matlab, but I'm still wondering why this happens, and I also want to leave the information here for future reference.
In Matlab's Parallel Computing Toolbox, the command matlabpool local starts a local pool of Matlab workers which are then used transparently to speed up commands like parfor by distributing processing onto the different CPU cores. I tried to do so on a Linux machine which I connected to through ssh from my home Linux computer. I used ssh without X forwarding because the script I wanted to run only computes and saves the result, but does not produce graphical output.
The problem: matlabpool hung forever, without any indication of the cause. I restarted the remote machine, restarted Matlab, checked for license problems, without result.
The problem was resolved, however, when I closed ssh and logged back in, this time including the -X option for X11 forwarding, even though I then started Matlab with the -nodesktop option.
Does anyone have an idea why matlabpool on Linux appears to depend on access to X11?
Even though matlabpool starts and communicates with background headless workers, you can still create figures and plots and print/export them as images inside the parfor parallel loop. The following is a valid use case:
matlabpool(..)
parfor i=1:4
plot(..)
print(...)
close(..)
end
To me this suggests that background workers will still depend on graphics capabilities to generate the invisible plots in memory (maybe it's using virtual framebuffers and such). Of course this is just speculation on my part :)
EDIT:
Just to be sure here, can you try the following sequence of commands:
[client]$ ssh -v -x user@server # X11 forwarding disabled
[server]$ unset DISPLAY # clear $DISPLAY variable
[server]$ nohup matlab -nodesktop -nodisplay -noFigureWindows -nosplash \
-r "myscript; quit;" 2>&1 < /dev/null &
Where the script contains a simple parallel test like:
myscript.m
parpool('local',2)
spmd
fprintf('hello from lab#%d', labindex);
end
delete(gcp('nocreate'))
If MATLAB still hangs, try adding the -debug startup option:
matlab -debug starts MATLAB and displays debugging information that can be useful, especially for X-based problems.