I am trying to deploy an OVA file using PowerCLI on my laptop. The script works when -Source is on a UNC share or, as in this case, when $ovfpath is a mapped drive on my laptop. But this drags the 12 GB OVA file across the network every time a new VM gets created. What I would like is to have -Source on the datastore and only copy it across the WAN one time. I've tried using https://host..., but the script fails. If I use the vSphere GUI to deploy from a template with the HTTPS URL, it works. Any ideas for how to point -Source at a datastore?
$ovfpath = Get-ChildItem z:\
$myDatastore = Get-Datastore -Name "Datastore2"
$vmHost = Get-VMHost -Name "$newHost"
$vmHost | Import-VApp -Source "$ovfpath\Win2012_R2_Std.ova" -Name newVM01 -Datastore $myDatastore -Force
Can you not import the OVF once, convert the imported VM to a template, and then deploy additional VMs from the template?
That way it will not use the network each time.
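A minimal PowerCLI sketch of that flow, reusing the names from the question (the seed template name is a placeholder):
# One-time import: the 12 GB OVA crosses the WAN only here
$myDatastore = Get-Datastore -Name "Datastore2"
$vmHost = Get-VMHost -Name "$newHost"
$seedVm = $vmHost | Import-VApp -Source "Z:\Win2012_R2_Std.ova" -Name seedTemplate -Datastore $myDatastore -Force

# Convert the imported VM into a template that lives on the datastore
Set-VM -VM $seedVm -ToTemplate -Confirm:$false

# Every later deployment is a clone inside vSphere, not a copy across the WAN
New-VM -Name newVM01 -Template (Get-Template -Name seedTemplate) -VMHost $vmHost -Datastore $myDatastore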
How can I change the default location for storing Docker images in Windows? I currently have Docker installed on my C: drive, and the images are stored in the following location:
C:\Users\xxxxx\AppData\Local\Docker\wsl\data.
I want to change the default location to my D: drive. I am using WSL2 as the backend for Docker, and I have read that I can use the .wslconfig file to configure Docker. However, I am not sure how to set up the .wslconfig file to change the default image location. My WSL2 installation is located on my D: drive, which I installed from the Microsoft Store.
I'm using Docker version 20.10.21, and these are my WSL specs:
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1335
I'm using the Ubuntu distro in WSL, and Docker Desktop v4.15.0.
I tried making some changes in .wslconfig, but I could not find any option for the storage location.
Caveats/Preface:
I've tried this and it works, but I cannot guarantee that long-term it will continue to work. There's the potential that something will break when Docker Desktop upgrades in the future.
In general I don't recommend registry hacks, but I'm not aware of another way to do this. Other than the previous caveat, this seems fairly safe.
No, there's no .wslconfig option for changing the location of a distribution.
With that in mind, here's what I did to move docker-desktop-data to the D: drive:
1. Create the directory. I'll use D:\wsl\docker-desktop-data as an example.
2. Stop Docker Desktop by right-clicking the status bar icon and choosing Quit Docker Desktop.
3. From PowerShell:
   wsl --shutdown
4. Confirm the location (BasePath) and registry key (PSChildName) of docker-desktop-data via:
   Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\ |
       ForEach-Object { Get-ItemProperty $_.PSPath } |
       Where-Object { $_.DistributionName -eq "docker-desktop-data" }
5. Move ext4.vhdx from the BasePath directory identified above to the D:\wsl\docker-desktop-data directory.
6. In regedit, navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss, find the subkey matching the PSChildName from above, and modify its BasePath to point to \\?\D:\wsl\docker-desktop-data (or do the same from PowerShell, as sketched after this list).
7. Restart Docker Desktop.
8. Test that your existing images are still available by running one of them.
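For step 6, if you'd rather stay in PowerShell than open regedit, something like this should work (a sketch; substitute the PSChildName value found in step 4):
# Hypothetical subkey name; use the PSChildName from step 4
$key = "your-PSChildName-here"
Set-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\$key" -Name BasePath -Value '\\?\D:\wsl\docker-desktop-data'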
I've been trying to get access to Windows Server 2019 without a password through OpenSSH.
So I've created a new key, which I need copied to the Windows Server. I've tried this:
ssh-copy-id -i ~/.ssh/id_rsa user@server
But I get this after entering the correct password:
'exec' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the path specified.
The system cannot find the path specified.
My issue is how to transfer the key from a Windows machine (using Git Bash, WSL, PowerShell, or whatever) to the authorized_keys location on Windows Server 2019, if I am not mistaken.
I am desperate enough to do it manually, but the location of those keys is a mystery to me. Do I need to set something up on Windows Server first so that it can accept keys for authentication?
What is the alternative to ssh-copy-id from a Windows machine to Windows Server 2019?
Found solution:
I followed this helpful YouTube guide (props to the channel):
https://www.youtube.com/watch?v=Cs3wBl_mMH0&ab_channel=IT%2FOpsTalk-Deprecated-SeeChannelDescription
Also, installing OpenSSHUtils worked with:
Install-Module -Name OpenSSHUtils -RequiredVersion 0.0.2.0 -Scope AllUsers
Also this guide helped:
https://www.cloudsma.com/2018/03/installing-powershell-modules-on/
My server didn't have internet access, so I manually copied the module folder from my local
C:\Program Files\WindowsPowerShell\Modules to the server's
C:\Program Files\WindowsPowerShell\Modules.
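For reference, the manual key setup on the server boils down to something like this (a sketch, assuming a default Win32-OpenSSH install and an administrators-group account; the staging path for the public key is hypothetical):
# Append the client's public key (copied to the server beforehand) to the admin key file
Get-Content C:\temp\id_rsa.pub | Add-Content C:\ProgramData\ssh\administrators_authorized_keys

# OpenSSH ignores this file unless its ACL is restricted to Administrators and SYSTEM
icacls C:\ProgramData\ssh\administrators_authorized_keys /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"
For non-administrator accounts the file is %USERPROFILE%\.ssh\authorized_keys instead, as a later answer notes.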
First, this error message is tracked in microsoft/vscode-remote-release issue 25.
Current workaround (the context is VSCode, but should apply also for regular SSH connection):
Also, for anyone else here that loves their bash on Windows but still wants to be able to use VS Code Remote, the workaround I currently have set up is an autorun.cmd deployed on the servers that detects when an incoming SSH connection has a terminal allocated:
@echo off
if defined SSH_CLIENT (
    rem Check whether a terminal is hooked up; if not, don't run bash.exe
    C:\cygwin\bin\bash.exe -c "if [ -t 1 ]; then exit 1; fi"
    if errorlevel 1 (
        C:\cygwin\bin\bash.exe --login
        exit
    )
)
This is known to work with Cygwin bash; I'm unsure about the bash that ships with Windows, as I imagine it's very sensitive to how the TTY code works internally.
This way, launching cmd.exe works normally, using VS Code works normally (because it does not allocate a PTY), but SSH'ing into the machine launches bash.exe.
I suspect it would also work with the bash.exe that comes with Git for Windows, should it be installed on the target server.
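The hook behind autorun.cmd is cmd.exe's AutoRun registry value, which cmd executes on every start; one way to register the script (a sketch, with a hypothetical deployment path):
# From an elevated PowerShell on each server: cmd.exe runs AutoRun at startup
Set-ItemProperty -Path "HKLM:\Software\Microsoft\Command Processor" -Name AutoRun -Value "C:\autorun.cmd"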
The destination file should be on the server:
%USERPROFILE%\.ssh\authorized_keys
If you can do it manually, simply scp it instead of using ssh-copy-id:
scp user@server:C:/Users/<user>/.ssh/authorized_keys authorized_keys
# manual and local edit to add the public key
scp authorized_keys user@server:C:/Users/<user>/.ssh/authorized_keys
(again, I would use the scp.exe that comes with Git for Windows, this time installed locally)
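If you want to skip the download/edit/upload round trip, a one-shot substitute for ssh-copy-id can be sketched from the client in PowerShell (this assumes the default id_rsa.pub path, that the remote .ssh directory already exists, and that the remote account's shell is cmd):
# Append the local public key to the server-side file in one pass
$pub = Get-Content "$env:USERPROFILE\.ssh\id_rsa.pub"
ssh user@server "echo $pub >> C:\Users\<user>\.ssh\authorized_keys"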
I am currently administering a Selenium Grid with 20 remote PCs acting as nodes to a single hub located on a server. At the moment I have to remote into each machine when I want to restart the hub or nodes and clean up any stale chromedriver or Chrome instances. I am trying to automate this process via PowerShell.
So far I have managed to write the PowerShell scripts to kill any instances of Chrome, chromedriver, and Java on the PCs and then restart the hub or node. They work when started locally on each machine but fail when I try to execute them via a PSSession.
I have enabled remote sessions on each machine successfully, and I can use Invoke-Command to kill the existing instances of Java and Chrome, but I can't restart the hub or nodes.
Example of Hub powershell script:
#This script kills any existing java process and runs StartHub.bat
Set-Location C:\Selenium
kill -Name java -Force -PassThru -ErrorAction Continue
Start-Process -FilePath "C:\Selenium\StartHub.bat" -PassThru -Verbose
The bat file is as follows:
java -jar C:\Selenium\selenium-server-standalone-3.4.0.jar -role hub -hubConfig "V:\ServerFiles\hubconfig.json"
I have been testing with the execution policy unrestricted, and my network administrator has changed GPOs to allow me to start Java processes remotely, but it's just not working. I've tried several approaches, which I have listed below:
1: Entering a PSSession on the remote server and calling the .ps1 file:
C:\RestartHub.ps1
The result is that the existing hub instance is killed, but a new one does not open.
2: I have then tried to start a job with a ScriptBlock calling cmd to run the batch file:
Set-Location C:\Selenium
kill -Name java -Force -PassThru -ErrorAction Continue
Start-Job -ScriptBlock{cmd /c start "C:\Selenium\StartHub.bat"} -Name Hub -Verbose
This again kills the existing hub instance, but the start script does not run or fails silently.
I have looked through the security logs on the remote machine to see if there are any issues there, but the PSSession seems to be correct, using the right user with full admin rights.
I have also changed the ExecutionPolicy on the remote machine to Restricted to see if an access-denied error was displayed, which it was. I changed it back to Unrestricted and the error went away.
I'd be grateful for any ideas.
Start-Process starts a process from an executable; you cannot use a .bat file as the executable, because -FilePath expects an executable's path.
See below:
Start-Process cmd -Argumentlist "C:\Selenium\StartHub.bat" -PassThru -Verbose
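A side note on the second attempt in the question: cmd's start treats its first quoted argument as a window title, so start "C:\Selenium\StartHub.bat" just opens an empty window with that title. If you do go through start, pass an empty title first:
Start-Process cmd -ArgumentList '/c start "" "C:\Selenium\StartHub.bat"' -PassThru -Verbose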
I want to deploy a new VM with my VMDK file in a vCenter environment from the CLI, so SSH to the ESX server is not an option. Is there any way I can do this? I know there is a VMware Perl SDK, but I could not find exactly what I need to get this working. The same operation is possible from the GUI, but I need to make it automated and able to scale up, so the GUI is not an option for me.
Can you please be more specific? Is this for a Linux or Windows CLI?
The simplest way to do this is via PowerCLI: https://my.vmware.com/web/vmware/details?downloadGroup=PCLI550&productId=352
To clone from an existing VM, the command would be:
# Note: avoid $host as a variable name; it is a reserved automatic variable in PowerShell
$vm2 = New-VM -Name VM2 -VM VM1 -Datastore $datastore -VMHost $vmHost
Also, when you say you want to create from VMDK, is the VMDK already on the target datastore? Do you need to import the VMDK first?
Normally, when you create a VM, you can either create a new "blank" VM or you can clone from an existing VM. If you have a VMDK you want to use, you would create an empty VM and then attach the VMDK. This assumes that you already have the VMDK in question loaded into a datastore that the host can access.
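A PowerCLI sketch of that attach flow (every name and path here is a placeholder); New-VM accepts -DiskPath to build the VM around an existing disk:
# Create the VM directly on the existing VMDK
$vmHost = Get-VMHost -Name "esx01.example.com"
$vm = New-VM -Name vmFromVmdk -VMHost $vmHost -DiskPath "[Datastore1] myfolder/mydisk.vmdk"

# Or attach a further existing VMDK to a VM you've already created
New-HardDisk -VM $vm -DiskPath "[Datastore1] myfolder/otherdisk.vmdk"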
We are running mlcp.sh in distributed mode on CDH 5.2.4, but the job always runs in local mode and is never submitted to YARN/the ResourceManager. Has anyone successfully implemented MLCP on CDH 5+?
We are using marklogic-contentpump-1.0.5.jar.
bin/mlcp.sh export \
    -host xxx.xx.xx.xxx \
    -port xxxx \
    -username <user> \
    -password xxxxx \
    -output_type sequence \
    -compress_type record \
    -output_file_path /tmp \
    -mode distributed \
    -job_queue cp11 \
    -query_type unfiltered \
    -max_split_size 500 \
    -query_config file.properties \
    -after_ts 2015-01-01T16:55:05-04:00 \
    -before_ts 2015-04-10T17:55:37-04:00 \
    -perm_path /data/mlcp
Fixed after changing from client-0.20 to client for YARN.
Using JAR Files Provided in the hadoop-client Package
Make sure you add to your project all of the JAR files provided under /usr/lib/hadoop/client-0.20 (for MRv1 APIs) or /usr/lib/hadoop/client (for YARN).
For example, you can add this location to the JVM classpath:
$ export CLASSPATH=/usr/lib/hadoop/client-0.20/\*
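In our case, since the fix was switching to the YARN client JARs, the classpath points at the client directory instead:
$ export CLASSPATH=/usr/lib/hadoop/client/\*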