I have an e2e test suite which loads some fixtures into the database by calling a script on the server side over an SSH connection.
I want to keep the fixtures that I load local to the tests that need them. I would write a test something like:
class ExampleTests(BaseTest):
    def test_A(self):
        load_fixture('TEST_A')
        do_actual_test()

    def test_B(self):
        load_fixture('TEST_B')
        do_actual_test()
In my load_fixture method the SSH connection is made and the script is run on the server side.
If I run the entire test suite, it creates a new SSH connection each time I call the load_fixture method. Conceptually this is what I want: I don't want to load all my fixtures for all my tests before any test runs; I want to be able to load fixtures when I need them, e.g.:
class ExampleTests(BaseTest):
    def test_B(self):
        user_a = load_user_fixture('username-A')
        do_some_testing_on_user_a()
        load_post_fixture_for_user(user_a, subject='subject-a')
        do_tests_using_post()
In this test it would also create two SSH connections.
So what I want is for the first call to load_fixture to create the connection and keep it around for the duration of the test suite. Or I create a connection before any test runs and then use that connection whenever I load a fixture.
Of course it should keep working when I run the tests across multiple cores.
My load_fixture function looks something like:
def load_fixtures(connection_info, command, fixtures):
    out, err, exit_code = run_remote_fixture_script(connection_info, command, fixtures)

def run_remote_fixture_script(connection_info, command_name, *args):
    ssh = SSHClient()
    ssh.connect(...)
    command = '''
        ./load_fixture_script {test_target} {command} {args};
    '''.format(
        test_target=connection_info.target,
        command=command_name,
        args=''.join([" '{}'".format(arg) for arg in args])
    )
    stdin, stdout, stderr = ssh.exec_command(command)
    exit_code = stdout.channel.recv_exit_status()
    ssh.close()
    return stdout, stderr, exit_code
I also want to reopen the connection automatically if for any reason the connection closes.
You need to use a pytest fixture:

@pytest.fixture(scope="session")

A session-scoped fixture is created once and kept alive for the whole test run (scope="module" would only keep it for the tests in one module), and you can register a finalizer inside the fixture for teardown:
@pytest.fixture(scope="session")
def run_remote_fixture_script(request):
    ssh = SSHClient()
    ssh.connect(...)

    def run(connection_info, command_name, *args):
        command = '''
            ./load_fixture_script {test_target} {command} {args};
        '''.format(
            test_target=connection_info.target,
            command=command_name,
            args=''.join([" '{}'".format(arg) for arg in args])
        )
        stdin, stdout, stderr = ssh.exec_command(command)
        exit_code = stdout.channel.recv_exit_status()
        return stdout, stderr, exit_code

    def fin():
        print("teardown ssh")
        ssh.close()
    request.addfinalizer(fin)

    # Return a helper bound to the open connection;
    # every call reuses the same SSH session.
    return run
Please excuse the formatting of the code. You could see this link for more details.
And you would call this fixture as:

def test_function(run_remote_fixture_script):
    stdout, stderr, exit_code = run_remote_fixture_script(connection_info, 'TEST_A')
Hope this helps.
The finalizer will be called at the end of the test session; if the scope is "function", it will be called after each test function instead.
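To also cover the automatic-reconnect requirement from the question, here is a minimal, hedged sketch of a connection holder. The `connect` callable and `is_active()` check are placeholders: with paramiko you would wrap `SSHClient().connect(...)` and test `client.get_transport().is_active()` (those names are paramiko's; everything else here is made up for illustration). The demonstration at the bottom uses a stub so no real SSH is involved.

```python
class PersistentConnection:
    """Cache a connection and transparently reconnect when it drops.

    `connect` is any zero-argument callable returning an object with an
    is_active() method.
    """
    def __init__(self, connect):
        self._connect = connect
        self._conn = None

    def get(self):
        # Reconnect lazily whenever there is no live connection.
        if self._conn is None or not self._conn.is_active():
            self._conn = self._connect()
        return self._conn


# Demonstration with a stub connection -- no real SSH involved.
class StubConn:
    def __init__(self):
        self.alive = True

    def is_active(self):
        return self.alive

opened = []

def connect():
    conn = StubConn()
    opened.append(conn)
    return conn

holder = PersistentConnection(connect)
first = holder.get()
second = holder.get()   # same object, no new connection opened
first.alive = False     # simulate the connection dropping
third = holder.get()    # a fresh connection is opened automatically
```

In the suite you would create one PersistentConnection in a session-scoped fixture and call holder.get() inside load_fixture, so every test shares, and transparently revives, the same SSH connection.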
I understand how to create an SSH shell:

Shell ssh = new SshByPassword("192.168.1.5", 22, "admin", "password");

I also understand how to run a command:

String output = new Shell.Plain(ssh).exec("some command");

and I can easily analyze the output string.
But how do I send one command after the other in the same "shell"?
And a bonus question: sometimes the commands require a user confirmation ("press Y to continue"); is that possible with the library?
Generally, most Java SSH APIs leave it to the developer to sort out the complexities of executing multiple commands within a shell. It is a complicated problem because SSH does not provide any indication of where commands start and end within the shell; the protocol only provides a stream of data, which is the raw output of the shell.
I would humbly like to introduce my project Maverick Synergy, an open-source (LGPL) API that provides an interface for interactive shells. I documented the options for interactive commands in an article.
Here is a very basic example. The ExpectShell class allows you to execute multiple commands, each time returning a ShellProcess that encapsulates the command output. You can use the ShellProcess InputStream to read the output; it will return EOF when the command is done.
You can also use a ShellProcessController to interact with the command as this example shows.
SshClient ssh = new SshClient("localhost", 22, "lee", "xxxxxx".toCharArray());

ssh.runTask(new ShellTask(ssh) {
    protected void onOpenSession(SessionChannelNG session)
            throws IOException, SshException, ShellTimeoutException {

        ExpectShell shell = new ExpectShell(this);

        // Execute the first command
        ShellProcess process = shell.executeCommand("ls -l");
        process.drain();
        String output = process.getCommandOutput();

        // After processing the output, execute another command
        ShellProcessController controller =
            new ShellProcessController(
                shell.executeCommand("rm -i file.txt"));
        if (controller.expect("remove")) {
            controller.typeAndReturn("y");
        }
        controller.getProcess().drain();
    }
});

ssh.disconnect();
I see online that the way to run a simple python file from groovy is:
def cmdArray2 = ["python", "/Users/test/temp/hello.py"]
def cmd2 = cmdArray2.execute()
cmd2.waitForOrKill(1000)
log.info cmd2.text
If hello.py contains print "Hello", it seems to work fine.
But when I try to run a .py file containing the below Selenium code nothing happens.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.PhantomJS(executable_path=r'C:\phantomjs\bin\phantomjs.exe')
driver.get("http://www.google.com") # Load page
# 1 & 2
title = driver.title
print title, len(title)
driver.quit()
Any help would be appreciated.
FYI - I have tried all the browsers, including headless ones, but no luck.
Also, I am able to run the Selenium script fine from the command line. But when I run it from SoapUI, I get no errors; the script runs and I see nothing in the log.
Most likely you do not see any errors from your Python script because they are printed to stderr, not stdout (which is what you read when calling cmd2.text). Try this Groovy script to check the error messages from the Python script's stderr:
def cmdArray2 = ["python", "/Users/test/temp/hello.py"]
def process = new ProcessBuilder(cmdArray2).redirectErrorStream(true).start()
process.inputStream.eachLine {
    log.warn(it)
}
process.waitFor()
return process.exitValue()
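For what it's worth, the stdout/stderr split is easy to demonstrate in plain Python as well; `missing_module` below is just a deliberately broken import standing in for whatever fails inside hello.py:

```python
import subprocess
import sys

# A child process that fails and reports the problem on stderr only.
child = [sys.executable, "-c", "import missing_module"]

# Reading just stdout (what cmd2.text gives you) shows nothing.
only_stdout = subprocess.run(
    child, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
).stdout.decode()

# Merging stderr into stdout (what redirectErrorStream(true) does)
# makes the traceback visible.
merged = subprocess.run(
    child, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
).stdout.decode()
```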
Another thing you might want to try is using Selenium directly from Groovy, without calling an external Python script.
I would like to execute any bash command. I found Command::new, but I'm unable to execute "complex" commands such as ls ; sleep 1; ls. Moreover, even if I put this in a bash script and execute it, I will only get the result at the end of the script (as explained in the process docs). I would like to get the output as soon as the command prints it (and to be able to write to its input as well), the same way we can do it in bash.
Command::new is indeed the way to go, but it is meant to execute a program. ls ; sleep 1; ls is not a program, it's instructions for some shell. If you want to execute something like that, you would need to ask a shell to interpret that for you:
Command::new("/usr/bin/sh").args(&["-c", "ls ; sleep 1; ls"])
// your complex command is just an argument for the shell
To get the output, there are two ways:
the output method is blocking and returns the outputs and the exit status of the command.
the spawn method is non-blocking and returns a handle containing the child process's stdin, stdout and stderr, so you can communicate with the child, as well as a wait method to wait for it to exit cleanly. Note that by default the child inherits its parent's file descriptors, and you might want to set up pipes instead:
You should use something like:

let child = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1; ls"])
    .stderr(std::process::Stdio::null())   // don't care about stderr
    .stdout(std::process::Stdio::piped())  // set up stdout so we can read it
    .stdin(std::process::Stdio::piped())   // set up stdin so we can write to it
    .spawn().expect("Could not run the command"); // finally run the command

write_something_on(child.stdin);
read(child.stdout);
I'm attempting to use the TProcess unit to execute ssh, connect to one of my servers, and get the shell. It's a rewrite of a tool I had in Ruby, as the Ruby version was very slow. When I run my Process.Execute function, I am presented with the shell, but it is immediately backgrounded. Running pgrep ssh reveals that it is running, but I have no access to it whatsoever; using fg does not bring it back. The code for this segment is as follows:
if HasOption('c', 'connect') then begin
  TempFile := GetRecord(GetOptionValue('c', 'connect'));
  AProcess := TProcess.Create(nil);
  AProcess.Executable := '/usr/bin/ssh';
  AProcess.Parameters.Add('-p');
  AProcess.Parameters.Add(TempFile.Port);
  AProcess.Parameters.Add('-ntt');
  AProcess.Parameters.Add(TempFile.Username + '@' + TempFile.Address);
  AProcess.Options := [];
  AProcess.ShowWindow := swoShow;
  AProcess.InheritHandles := False;
  AProcess.Execute;
  AProcess.Free;
  Terminate;
  Exit;
end;
TempFile is a variable of type TProfile, which is a record containing information about the server. The cataloging system and retrieval works fine, but pulling up the shell does not.
...
AProcess.ShowWindow:= swoShow;
AProcess.InheritHandles:= False;
AProcess.Execute;
AProcess.Free;
...
You're starting the process but not waiting for it to exit. This is from the documentation on Execute:
Execute actually executes the program as specified in CommandLine, applying as much as of the specified options as supported on the current platform.
If the poWaitOnExit option is specified in Options, then the call will only return when the program has finished executing (or if an error occured). If this option is not given, the call returns immediatly[sic], but the WaitOnExit call can be used to wait for it to close, or the Running call can be used to check whether it is still running.
You should set the poWaitOnExit option in options before calling Execute, so that Execute will block until the process exits. Or else call AProcess.WaitOnExit to explicitly wait for the process to exit.
I wrote a script which renders scenes and I want to see its output on the console. I am using print, but it does not work; what should I use to print something?
I run script with:
blender -b -P render.py
I want to output a string like this from render.py:

print('#' * 80)

It is a somewhat trivial question, but print does not work, and I don't know how to continue development without debug messages.
Use the logging module to set up your own logger.
You can set up a console handler to log to the console:
import logging
import sys

logger = logging.getLogger(__name__)

formatter = logging.Formatter('%(message)s')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(formatter)
and/or set up a file handler if you want to log to a file:
file_handler = logging.FileHandler(log_file)
file_handler.setFormatter(formatter)
# Add the handlers to the logger:
logger.addHandler(console_handler)
logger.addHandler(file_handler)
They can both have different log levels, which you can set from the script or from an environment variable:

import os

log_level = 'WARNING'  # default level
if 'LOG_LEVEL' in os.environ:
    log_level = os.environ['LOG_LEVEL']
console_handler.setLevel(log_level)
file_handler.setLevel('INFO')
Read through:
https://docs.python.org/3/howto/logging.html
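Putting the snippets above together into one self-contained sketch (the logger name and the render.log file name are arbitrary choices for illustration, not anything Blender requires):

```python
import logging
import os
import sys

logger = logging.getLogger('render')
logger.setLevel(logging.DEBUG)  # let the handlers do the filtering

formatter = logging.Formatter('%(message)s')

# Console handler: level controlled by the LOG_LEVEL environment variable.
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(formatter)
console_handler.setLevel(os.environ.get('LOG_LEVEL', 'WARNING'))

# File handler: always records INFO and above.
file_handler = logging.FileHandler('render.log')
file_handler.setFormatter(formatter)
file_handler.setLevel('INFO')

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.info('#' * 80)       # reaches the file; console only if LOG_LEVEL <= INFO
logger.warning('rendered')  # reaches both handlers
```

Run under `blender -b -P render.py`, the warning line shows up on the terminal while the full INFO-level trace accumulates in render.log.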