Trying to run a batch file remotely from PowerShell is not working - Selenium

I am currently administering a Selenium Grid with 20 remote PCs acting as nodes to a single Hub located on a server. At the moment I have to remote in to each machine whenever I want to restart the hub or nodes and clear up any stale chromedriver or chrome instances. I am trying to automate this process via PowerShell.
So far I have managed to write the PowerShell scripts to kill any instances of chrome, chromedriver and java on the PCs and then restart the hub or node. They work when started locally on each machine but fail when I try to execute them via a PSSession.
I have enabled remote sessions on each machine successfully, and I can use Invoke-Command to kill the existing instances of java and chrome, but I can't restart the hub or nodes.
Example of Hub powershell script:
#This script kills any existing java process and runs StartHub.bat
Set-Location C:\Selenium
kill -Name java -Force -PassThru -ErrorAction Continue
Start-Process -FilePath "C:\Selenium\StartHub.bat" -PassThru -Verbose
The bat file is as follows:
java -jar C:\Selenium\selenium-server-standalone-3.4.0.jar -role hub -hubConfig "V:\ServerFiles\hubconfig.json"
I have been testing with the execution policy set to Unrestricted, and my network administrator has changed GPOs to allow me to start java processes remotely, but it's just not working. I've tried several approaches, which I have listed below:
1: Entering a PSSession on the remote server and calling the ps1 file:
C:\RestartHub.ps1
The result is that the existing hub instance is killed but a new one does not open.
2: I then tried to start a job with a ScriptBlock that calls cmd to run the batch file:
Set-Location C:\Selenium
kill -Name java -Force -PassThru -ErrorAction Continue
Start-Job -ScriptBlock{cmd /c start "C:\Selenium\StartHub.bat"} -Name Hub -Verbose
This again kills the existing hub instance but the start script does not run or fails silently.
I have looked through the security logs on the remote machine to see if there are any issues there but the PSSession seems to be correct using the right user with full admin rights.
I have also changed the ExecutionPolicy on the remote machine to Restricted to see if an access denied error is displayed, which it was. I changed it back to Unrestricted and the error went away.
I'd be grateful for any ideas.

Start-Process starts a process from an executable; you cannot use a bat file as an executable, since -FilePath expects an executable's path.
See below,
Start-Process cmd -ArgumentList "/c C:\Selenium\StartHub.bat" -PassThru -Verbose
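Plugged back into the script from the question, RestartHub.ps1 would then look something like this (a sketch reusing the paths above; Stop-Process is the full name of the kill alias):
#Kill any existing java process, then launch StartHub.bat via cmd /c
Set-Location C:\Selenium
Stop-Process -Name java -Force -PassThru -ErrorAction Continue
Start-Process cmd -ArgumentList "/c C:\Selenium\StartHub.bat" -PassThru -Verbose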

Related

Running PowerShell via SQL Server Job

I have a very basic script which runs fine on my local PC. It simply backs up some folders and their contents, then adds a date stamp to the folder name.
The script backs up folders on two different servers (Server2 & Server3).
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned
#Part 1
Copy-Item -Path "\\Server3\Example Location" -Destination "\\Server3\Example Location_$(get-date -f yyyyMMdd)" -Recurse
#Part 2
Copy-Item -Path "\\Server2\Example Location" -Destination "\\Server2\Example Location_$(get-date -f yyyyMMdd)" -Recurse
It runs both parts perfectly in the following environments:
On my local PC
When I remotely connect to Server 2 and right click > Run with PowerShell
When I remotely connect to Server 2 and edit > Run script
However, when I try to automate this and create a SQL Server Agent job (again, on Server 2), only Part 2 actually backs up. The job completes successfully, but Part 1 appears to get ignored (i.e. the part that runs on Server 2 but backs up folders on Server 3).
Any ideas why running as a SQL job would cause this?
n.b. The job is set to run as 'SQL Server Agent Service Account'.
A couple of things...
If you run your PowerShell script as a step of type Operating system (CmdExec), then no error message will ever be returned to the SQL Server job.
That said, if your SQL Server Agent service account does not have access to the \\Server3\... folders, the copy will fail and, as said above, no error message will be passed back to the calling SQL Agent job.
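One way to at least make the failure visible in the job history is to turn a copy error into a non-zero exit code, so the step is marked failed instead of silently skipping Part 1 (a sketch reusing the paths from the question):
#Stop on any copy error and fail the Agent step explicitly
try {
    Copy-Item -Path "\\Server3\Example Location" -Destination "\\Server3\Example Location_$(Get-Date -f yyyyMMdd)" -Recurse -ErrorAction Stop
    Copy-Item -Path "\\Server2\Example Location" -Destination "\\Server2\Example Location_$(Get-Date -f yyyyMMdd)" -Recurse -ErrorAction Stop
}
catch {
    Write-Error $_
    exit 1
}
The underlying fix is still to give the SQL Server Agent service account (or a job proxy account) access to the Server3 share; the try/catch only stops the failure from being silent.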

Sysinternals psexec not running on the remote desktop

I've got two Remote Desktops hosted on a Hyper-V server.
On Remote Desktop "A", I've got a .bat file, which I want to execute.
On Remote Desktop "B", I've got a cmd open with psexec cmd ready to invoke .bat file on machine "A".
"path-to\\psexec.exe" \\ip -u domain\username -p pswd -i cmd.exe /c "path-to\\myFile.bat %*"
The script contained in the .bat file on machine "A" operates on the UI and thus requires a real screen to be open, so I am connected to both remote desktops simultaneously. However, when I run the psexec command on machine "B", cmd returns an error; yet if I open remote desktop "A" directly through the server's Hyper-V Manager interface, the psexec command works as expected.
Can someone explain please why this happens?
By default, a process started remotely by psexec runs in session 0, which has no interactive desktop. To run a program remotely that needs a visible UI, run it as the System user (-s flag) and specify the target session with the -i flag. This answer has a few related tips too.
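Applied to the command from the question, that would look something like the line below (a sketch; the session ID 1 is an assumption, so check the actual ID of the logged-on desktop session on machine "A" first, e.g. with query session):
"path-to\psexec.exe" \\ip -u domain\username -p pswd -s -i 1 cmd.exe /c "path-to\myFile.bat %*"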

"virsh list" command not showing VM created by "qemu-system-x86_64" command

I created a VM using the "qemu-system-x86_64" command. The VM is up and running. I can access it and list it with the command "ps -ef | grep qemu-system-x86_64".
But if I try to list the VM using the "virsh list" command, I do not see it there. Could you please point out what the reason could be?
Why is "virsh list" command not able to list VMs created by "qemu-system" command? I thought that virsh is an application that uses libvirt to access KVM/linux's virtualization capabilities. So even if VM is created by any method, then also virsh should be able to query KVM to check the already running VMs on the host.
qemu-system-x86_64 is the backend that libvirt itself uses to start a VM, but a VM started by running qemu-system-x86_64 directly is never registered with libvirtd, so virsh has no metadata about it. virsh list only shows domains that were defined or started through libvirt.
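If the VM needs to be visible to virsh, it has to be created through libvirt in the first place, for example (a sketch; the domain XML path and name are placeholders):
virsh define /path/to/myvm.xml   # register the domain with libvirtd
virsh start myvm                 # it will now appear in "virsh list"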

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns back

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
The behavior I see is that when I run a Jenkins job and call the restart script (which gets the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. Running the same script from Jenkins shows no errors and everything appears to work, but the new process dies as soon as the above-mentioned SSH step completes and control comes back to the next BUILD step in the Jenkins job, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables). When the job's build is complete, I can see that the environment variables for that build do show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server (I'm not worried about that, as the restart_tomcat.sh script creates its own log file, which I cat after the script completes; the cat is done in another "Execute shell script on remote host using SSH" build step and it successfully shows the log file created by the restart script).
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
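Putting those two suggestions together, the command in the SSH build step would look something like this (a sketch reusing the script and variable from the question):
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}" > /dev/null 2>&1 &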
My solution (it worked after trying everything else) in Ubuntu 14.04 (Trusty Tahr) (Amazon AWS - Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the above line to the start of my script and the issue was resolved.

Executing commands on command prompt of a remote computer

I need to execute the command powermt display dev=all in the command prompt of a remote computer. How do I do that?
If you have PowerShell 2.0 or higher on both computers and can enable remoting on the remote computer by executing Enable-PSRemoting -Force, then from an elevated/admin PowerShell prompt you can run:
Invoke-Command -ComputerName remotepcname -ScriptBlock { <commands to execute remotely> }
This will execute the commands remotely and return the results to the local computer.
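For the command from the question, that would look something like this (remotepcname is a placeholder for the target computer's name):
Invoke-Command -ComputerName remotepcname -ScriptBlock { powermt display dev=all }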
Here's another alternative to try where psexec and PowerShell fail. It's convoluted and hackish, but at least it's something else to try. :)
Firstly, share a folder on your own machine. Make sure an account with admin rights on the remote machine has write access to this share you create. Then execute the following:
wmic /node:remoteComputerAddr /user:adminOnRemoteComputer /password:adminPassword process call create "cmd.exe /c powermt display dev=all >>\\localComputerAddr\shareName\results.txt"
#type "c:\local\path\to\share\results.txt"
Unfortunately, wmic doesn't show you the output of the process it creates. That's why you enable a share on your local workstation, then redirect the output from the remote command to your share.