I use SSH to execute Robot Framework test cases (Selenium), but no browser is opened; the test cases are executed in the background. How can I solve this issue?
I want to execute Robot Framework test cases on Windows 10 and start the test via Jenkins, which is installed on Linux. So I installed an SSH plugin in Jenkins, then created a job that executes the command below via SSH:
pybot.bat --argumentfile E:\project\robot_framework\Automation\logs\argfile1.txt E:\project\robot_framework\Automation
When I start the job, the test case is executed in the background, but I need it to open the browser in the foreground.
ssh by definition executes commands in a different session than the current user's.
Especially considering your target is a Windows machine - imagine if you were logged in and working with desktop apps, and someone started an app through SSH in your session - that would be a little surprising (mildly put :), wouldn't it?
To get what you want - being able to monitor the execution - you could try the runas /user:username command, but a satisfactory end result is not guaranteed (plus you have to provide the target user's password).
Another option is to use psexec, available on TechNet, but YMMV - you have to provide the target session id, some programs might not show up or might not be maximizable, etc. The call format would be:
psexec.exe -i 2 pybot.bat the_rest_of_the_args
I.e. a lot of effort and uncertainties, with minimal ROI :)
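For illustration only (the session id and the psexec flags below are assumptions - check the actual interactive session id with query session on the target machine), the Jenkins SSH step from the question could then become something like:
psexec.exe -accepteula -i 1 pybot.bat --argumentfile E:\project\robot_framework\Automation\logs\argfile1.txt E:\project\robot_framework\Automation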
I am a QA automation engineer and in the web app I test there's a feature that creates Active Directory users.
My tools are Selenium (Java), RemoteWebDriver, and Selenium Grid (Docker).
I was trying to find ways to validate this process and came to a stop: this field (AD) is new to me, and I need a way to make sure the user was created and can be logged into on the network.
I came up with 2 options, where the first one is the least preferred:
Make a request (API? 3rd party tool?) to get the relevant user(s).
The issue:
A user being created and registered in AD doesn't necessarily mean that a client can log into it (at least as I understand how AD works), so this check misses the most important consequence of the feature.
Use a VM, get the AD user information (username + password: possible) and try to log into the VM using those details.
The issue:
I haven't come across a tool that does this; the closest things are the Robot class or WinAppDriver.
WinAppDriver seems like the best solution as of now, although I don't know how to make the login process work, since that process starts before the desktop is open and I don't know how to locate the username and password fields. So I figured the Robot class would be the simplest solution, if it works on a VM that is, which as of now doesn't seem to be the case.
So, before moving ahead with learning how to use WinAppDriver alongside my current automation, I'd appreciate your opinions on the matter, or any simpler solutions you might have.
Thank you very much for reading!
• We can check whether a user was created successfully and whether that user can log in to the AD domain by executing the script below. It is a PowerShell script, run from an Azure domain-joined VM, that automatically logs in to another domain-joined VM through Remote Desktop Protocol to verify that the recently created user can log in.
PowerShell script:
# Clear any cached RDP (TERMSRV) credentials so the new ones are used
cmdkey /list | ForEach-Object{if($_ -like "*target=TERMSRV/*"){cmdkey /del:($_ -replace " ","" -replace "Target:","")}}
echo "Connecting to 192.168.1.100"
$Server="192.168.1.100"      # IP address of the domain controller / AD server
$User="Administrator"        # replace with the UPN of the newly created user
$Password="AdminPassword"    # replace with that user's password
# Store the credentials for the RDP target, then launch the Remote Desktop client
cmdkey /generic:TERMSRV/$Server /user:$User /pass:$Password
mstsc /v:$Server
• In the above script, replace the ‘$User’ value with the user principal name of the newly created user, e.g. ‘$User="testdemo@example.com"’, and the ‘$Password’ value with the password set for that user (a worked example of the substituted lines follows the notes below). Also ensure that you enter the correct IP address of the domain controller/AD server. Before executing the above PowerShell script, run the command below in an elevated (administrator-privileged) PowerShell console.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
Lastly, please ensure that while creating the user, the options ‘User must change password at next logon’, ‘Account is disabled’, ‘Password never expires’ and ‘User cannot change password’ are left unchecked.
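For example (the UPN and password below are placeholders only), the credential lines in the script would become:
$Server="192.168.1.100"          # IP address of your domain controller
$User="testdemo@example.com"     # UPN of the newly created user
$Password="TheNewUsersPassword"  # that user's password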
• You can also use the command-line script below to log in to the domain-joined Azure VM through the RDP protocol. In the command, replace ‘username’ and ‘password’ with the username and password of the recently created user, replace ‘TERMSRC’ with the hostname of the server system or domain-joined VM where the specified UNC path is located, and replace ‘some_unc_path’ with the actual UNC path of the shared directory. Please execute the command from an elevated (administrator-privileged) command prompt.
Command script:
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -nolog -command cmdkey /generic:TERMSRC/some_unc_path /user:username /pass:pa$$word; mstsc /v:some_unc_path
Is there any way to prevent internet access for the application under test with TestComplete? I'd like to test the application's reaction to losing the internet connection, but I have to perform this with a CI tool, which means TestComplete has to block and unblock the connection.
You can do this using WMI directly from a script (see Working With WMI Objects in Scripts) or by executing a PowerShell script (see Running PowerShell Scripts From TestComplete).
For example, see this question to get a sample PS script:
Command/Powershell script to reset a network adapter
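For example (the adapter name "Ethernet" is an assumption, and the script must run with administrator rights), the PowerShell script could simply toggle the network adapter around the offline part of the test:
Disable-NetAdapter -Name "Ethernet" -Confirm:$false
# ... exercise the application while offline ...
Enable-NetAdapter -Name "Ethernet" -Confirm:$false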
I have a web server running on a virtual machine, and I need some actions (e.g. "service apache2 reload") to be performed there automatically after I deploy my code from IDEA.
Automatically -- no way AFAIK.
https://youtrack.jetbrains.com/issue/WI-3344 -- watch this ticket (star/vote/comment) to get notified on any progress.
You may also watch related tickets:
https://youtrack.jetbrains.com/issue/WI-23938
https://youtrack.jetbrains.com/issue/WI-3239
The only manual solutions I may suggest right now are:
either keep the SSH console open (the IDE has it built-in) and execute such a command manually once deployed,
or create a "Remote SSH External Tools" entry that will do the job (connect and issue the specified command) manually after deployment; once created, you can assign a custom shortcut to it so it can be run more easily (see the example command below).
In both cases -- check this manual.
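For example (the host and user are placeholders), the command behind such an External Tools entry could be as simple as:
ssh deploy@your-vm "sudo service apache2 reload"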
I'm using SSH to remotely launch Tornado on Amazon Web Service. It works fine when I launch it by:
python startTornado.py
However, after my SSH session times out or is terminated, the Tornado server is also stopped immediately, so I can't access the webpage anymore. I did quite a bit of searching but couldn't find an answer on Google.
How can I keep Tornado and the site running after my SSH session terminated?
The process will shut down when you log out if it's running in the foreground, or if it tries to write to stdout and the terminal it's outputting to no longer exists. Try starting the server with
nohup python startTornado.py &
The nohup command keeps the process running after you log out and redirects its output to a file (nohup.out by default), and the & at the end runs the command in the background. Alternatively, you can use the screen utility, which allows you to detach a terminal and reattach it in a different SSH session (see the screen man page for details).
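For example, a typical screen workflow would be (the session name "tornado" is just illustrative):
screen -S tornado        # start a named session
python startTornado.py   # launch the server inside it
                         # press Ctrl-A, then D, to detach; the server keeps running
screen -r tornado        # reattach later from a new SSH session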
While all the above solutions solve the immediate problem, what you might really need in order to run such processes in production and control them (start/restart/stop) is supervisor. It is Python-based, and it's especially useful when you have to run multiple instances of Tornado behind nginx.
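As a sketch (the paths are placeholders), a supervisord program section for this could look like:
[program:tornado]
command=python /path/to/startTornado.py
directory=/path/to
autostart=true
autorestart=true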
In addition to nohup, as Kevin has mentioned, you can also use the disown command if you are using bash:
disown <job-id>
I am looking for some kind of framework that will allow me to connect to multiple servers using SSH, keep the connections open, reopen them if they die, and let me run commands and report back. For example, to check disk space on all the machines right away, I'd do results = object.run("df -h") and it would return an array with the responses from all the machines (I am not looking for a monitoring system).
Anyone have any idea?
I would use Python and the Fabric framework. It lets you easily execute commands on a set of servers, e.g. for deployment.
With Fabric you could do something like:
from fabric.api import run, env   # Fabric 1.x API

def getSpace(server):
    env.host_string = server   # tell Fabric which host to run against
    run("df -h")               # executed on the remote host over SSH

$ fab getSpace:234.24.32.1
One way to do this would be to use Python and the paramiko library. Writing the functionality that runs a given command on a specified set of servers is then a simple matter of programming.
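For instance, a minimal paramiko sketch (the host list, username, and key-based authentication are assumptions, and it opens a fresh connection per command rather than keeping one alive) could look like this:
import paramiko

def run_everywhere(hosts, command, username="deploy"):
    """Run a command on every host and collect its output (assumes key-based auth)."""
    results = {}
    for host in hosts:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username)   # uses your SSH agent / default keys
        _, stdout, _ = client.exec_command(command)
        results[host] = stdout.read().decode()
        client.close()
    return results

print(run_everywhere(["10.0.0.1", "10.0.0.2"], "df -h"))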