I've just installed WSL2 and am using the Windows Terminal on Win10 1909 (18363.1256). I'm trying to set up 2 different profiles, one that launches a local WSL2 Ubuntu shell, and one that launches another WSL2 shell that will automatically ssh to a specific host.
The local one works great and shows up without issue; however, I can't seem to get my second profile to appear in the list of profiles.
My settings.json looks like this:
"profiles":
{
"defaults":
{
// Put settings here that you want to apply to all profiles.
"colorScheme": "One Half Dark",
"fontFace": "JetbrainsMono NF",
"fontSize": 11
},
"list":
[
{
"guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
"hidden": false,
"name": "Ubuntu",
"source": "Windows.Terminal.Wsl",
"startingDirectory": "//wsl$/Ubuntu/home/sensanaty",
"tabTitle": "WSL2"
},
{
"guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
"hidden": false,
"name": "External Host",
"source": "Windows.Terminal.Wsl",
"commandline": "ssh example#123.456.7.89",
"tabTitle": "External Host"
},
]
},
With the above, I only get the Ubuntu profile in my list.
I thought maybe it was the GUID I generated, but I just ran a simple uuidgen and pasted the result into the JSON, so that shouldn't be causing any issues there. I've also, obviously, tried restarting my system, to no avail. The default profiles also show up fine if I disable the option that stops them from being auto-generated.
Any clue as to what might help me out?
The 'source' attribute is for dynamically generated profiles; the WSL source creates one for each installed distribution. You can't control the command line for these dynamically generated profiles. What you need is for your new profile to carry the command line itself, telling Terminal to use WSL. Remove the 'source' attribute entirely, so that your new profile is static.
In your case, that should be ...
{
"guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
"hidden": false,
"name": "External Host",
//"source": "Windows.Terminal.Wsl",
"commandline": "wsl.exe ssh example#123.456.7.89",
"tabTitle": "External Host"
}//,
As bwolfbarn mentioned, you should also ditch that trailing comma if it really comes at the end of the "list" block.
Here are a few lines from mine as additional examples as well ...
{
"guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
"hidden": false,
"name": "Ubuntu 20.04 WSL2 tmux",
//"source": "Windows.Terminal.Wsl",
"commandline": "wsl.exe -d Ubuntu -e sh -c \"/usr/bin/tmux has-session -t main 2>/dev/null && /usr/bin/tmux attach-session -d -t main || /usr/bin/tmux -2 new-session -t main -s main -c ${HOME}\"",
"cursorShape": "filledBox"
},
{
"guid": "{4e04fa7e-76c7-4746-a322-a227e70dde6c}",
"hidden": false,
"name": "Ubuntu 20.04 WSL1 tmux",
//"commandline": "wsl.exe -d Ubuntu20.04_WSL1",
"commandline": "wsl.exe -d Ubuntu20.04_WSL1 -e sh -c \"/usr/bin/tmux has-session -t main 2>/dev/null && /usr/bin/tmux attach-session -d -t main || /usr/bin/tmux -2 new-session -t main -s main -c ${HOME}\"",
"cursorShape": "filledBox"
}
Note that you could, I believe, use "wsl.exe -e" (a.k.a. --execute), but it's not really necessary in your case.
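For reference, here is a sketch of what the same static profile could look like with -e (the GUID and host are the placeholders from the question; -e runs the given command directly instead of going through the default shell):

```json
{
    "guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
    "hidden": false,
    "name": "External Host",
    "commandline": "wsl.exe -e ssh example@123.456.7.89",
    "tabTitle": "External Host"
}
```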
If you want a profile with "source": "Windows.Terminal.Wsl" to show up in the Windows Terminal menu, a matching entry must exist in the registry at
[HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{UUID}]
(The registry UUID is not related to the Windows Terminal profile UUID.)
This registry entry can be created by running "wsl --import" or by cloning an existing entry (if you are comfortable messing with the registry).
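For example (hypothetical distro name and paths), you can clone an existing distribution so it gets its own registry entry with wsl --export followed by wsl --import:

```
wsl --export Ubuntu C:\Temp\ubuntu.tar
wsl --import Ubuntu-ssh C:\WSL\Ubuntu-ssh C:\Temp\ubuntu.tar
```

The imported copy should then show up as its own dynamically generated profile.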
If you still don't see your profile after confirming that the registry entry exists, remove all entries under "generatedProfiles" in the state.json file located in the same folder as settings.json. This forces Windows Terminal to regenerate state.json. If you generated the Windows Terminal profile UUID yourself, the terminal may ignore it and create its own; in that case you will see duplicate entries for the profile in settings.json. Remove the ones that were generated manually and keep the one generated by the terminal.
At least the last comma should be removed (I commented it out in your example), as the "External Host" element is the last one in the list.
[
{
"guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
"hidden": false,
"name": "Ubuntu",
"source": "Windows.Terminal.Wsl",
"startingDirectory": "//wsl$/Ubuntu/home/sensanaty",
"tabTitle": "WSL2"
},
{
"guid": "{15c5814b-7ed1-4cec-bc64-d165274958fa}",
"hidden": false,
"name": "External Host",
"source": "Windows.Terminal.Wsl",
"commandline": "ssh example#123.456.7.89",
"tabTitle": "External Host"
}//,
]
Related
I'm using WSL2 in Windows Terminal. I have an app that needs its front end and back end booted before it can be used, so every time I have to open a terminal window, navigate to a folder, and run a command.
I would like to set up an alias that would open a new tab, navigate to a folder, and run go run .
I've seen suggestions for Linux, but none of those work in Windows Terminal with WSL2. Does anyone have experience with this setup?
You could create a profile for this. Something like
{
"commandline": "wsl.exe -d Ubuntu ping 8.8.8.8",
"name": "backend",
"startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\project",
},
(of course, replace ping 8.8.8.8 with the actual command you want to run, replace Ubuntu with the name of the distro you're using, and replace home\\zadjii\\path\\to\\project with your actual path, delimited by double-backslashes.)
Now, if you wanted to get really crazy, you could create an action in the Command Palette which opened up multiple commands all at once:
{
"name": "Run my project",
"command": {
"action": "multipleActions",
"actions": [
// Create a new tab with two panes
{ "action": "newTab", "tabTitle": "backend", "commandline": "wsl.exe -d Ubuntu run_my_backend", "startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\backend" },
{ "action": "splitPane", "tabTitle": "frontend", "commandline": "wsl.exe -d Ubuntu run_my_frontend", "startingDirectory": "\\\\wsl$\\Ubuntu\\home\\zadjii\\path\\to\\frontend" },
]
}
}
see multipleActions
While working with Selenoid in Docker, I can see this error in the docker logs: "[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]". My volume mapping is "-v $PWD/config/:/etc/selenoid/:ro". If I run "cat $PWD/config/browsers.json", the content of browsers.json is printed, and I can also validate manually that the file is present.
Below are the commands I am using. I execute them directly through Jenkins. Locally the exact same commands work fine, but in Jenkins they give this error.
mkdir -p config
cat <<EOF > $PWD/config/browsers.json
{
"firefox": {
"default": "57.0",
"versions": {
"57.0": {
"image": "selenoid/firefox:90.0",
"port": "4444",
"path": "/wd/hub"
},
"58.0": {
"image": "selenoid/firefox:90.0",
"port": "4444",
"path": "/wd/hub"
},
"59.0": {
"image": "selenoid/firefox:90.0",
"port": "4444",
"path": "/wd/hub"
}
}
}
}
EOF
chmod +rwx $PWD/config/browsers.json
cat $PWD/config/browsers.json
docker pull aerokube/selenoid:latest
docker pull aerokube/cm:latest
docker pull aerokube/selenoid-ui:latest
docker pull selenoid/video-recorder:latest-release
docker pull selenoid/vnc_chrome:92.0
docker pull selenoid/vnc_firefox:90.0
docker stop selenoid ||true
docker rm selenoid ||true
docker run -d --name selenoid -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/config/:/etc/selenoid/:ro aerokube/selenoid
The error is self-explanatory: you don't have browsers.json in the directory you are mounting to /etc/selenoid inside the container. I would recommend using absolute paths instead of the $PWD variable.
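A minimal sketch of that recommendation, assuming the same layout as the script in the question: resolve the mount source to an absolute path once, failing early if the directory is missing, instead of letting $PWD depend on how Jenkins launches the step:

```shell
# Create the config directory (as the question's script does),
# then resolve it to an absolute path before handing it to docker.
mkdir -p config
CONFIG_DIR="$(cd ./config && pwd)" || exit 1
echo "mounting $CONFIG_DIR"

# Hypothetical docker invocation, same flags as in the question
# but with the absolute path baked in:
#   docker run -d --name selenoid -p 4444:4444 \
#     -v /var/run/docker.sock:/var/run/docker.sock \
#     -v "$CONFIG_DIR":/etc/selenoid/:ro aerokube/selenoid
```

In a Jenkins pipeline you could equally use "$WORKSPACE/config", since Jenkins sets WORKSPACE to an absolute path.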
I am trying to build images with Packer in a Jenkins pipeline. However, the Packer SSH provisioner does not work: SSH never becomes available and the build errors out with a timeout.
Further investigation shows that the image is missing the network interface file ifcfg-eth0 in the /etc/sysconfig/network-scripts directory, so it never gets an IP address and does not accept SSH connections.
The problem is that there are many such images to be generated, and I can't open each one manually in the VirtualBox GUI, correct the issue, and repack. Is there any other possible solution?
{
"variables": {
"build_base": ".",
"isref_machine":"create-ova-caf",
"build_name":"virtual-box-jenkins",
"output_name":"packer-virtual-box",
"disk_size":"40000",
"ram":"1024",
"disk_adapter":"ide"
},
"builders":[
{
"name": "{{user `build_name`}}",
"type": "virtualbox-iso",
"guest_os_type": "Other_64",
"iso_url": "rhelis74_1710051533.iso",
"iso_checksum": "",
"iso_checksum_type": "none",
"hard_drive_interface":"{{user `disk_adapter`}}",
"ssh_username": "root",
"ssh_password": "Secret1.0",
"shutdown_command": "shutdown -P now",
"guest_additions_mode":"disable",
"boot_wait": "3s",
"boot_command": [ "auto<enter>"],
"ssh_timeout": "40m",
"headless":
"true",
"vm_name": "{{user `output_name`}}",
"disk_size": "{{user `disk_size`}}",
"output_directory":"{{user `build_base`}}/output-{{build_name}}",
"format": "ovf",
"vrdp_bind_address": "0.0.0.0",
"vboxmanage": [
["modifyvm", "{{.Name}}","--nictype1","virtio"],
["modifyvm", "{{.Name}}","--memory","{{ user `ram`}}"]
],
"skip_export":true,
"keep_registered": true
}
],
"provisioners": [
{
"type":"shell",
"inline": ["ls"]
}
]
}
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation about communicators; there you'll find the option none to switch off communication between host and guest.
{
"builders": [
{
"type": "virtualbox-iso",
"communicator": "none"
}
]
}
Packer documentation: Builders, virtualbox-iso
I'm trying to build a Vagrant box file using Packer.io and Puppet.
I have this template as a starting point:
https://github.com/puphpet/packer-templates/tree/master/centos-7-x86_64
I added the Puppet provisioner after the shell provisioner:
{
"type": "puppet-masterless",
"manifest_file": "../../puphpet/puppet/site.pp",
"manifest_dir": "../../puphpet/puppet/nodes",
"module_paths": [
"../../puphpet/puppet/modules"
],
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E bash {{end}}/usr/bin/puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
}
}
}
When I start building the image like
packer-io build -only=virtualbox-iso template.json
Then I get this error:
==> virtualbox-iso: Provisioning with Puppet...
virtualbox-iso: Creating Puppet staging directory...
virtualbox-iso: Uploading manifest directory from: ../../puphpet/puppet/nodes
virtualbox-iso: Uploading local modules from: ../../puphpet/puppet/modules
virtualbox-iso: Uploading manifests...
virtualbox-iso:
virtualbox-iso: Running Puppet: echo 'vagrant' | sudo -S -E bash /usr/bin/puppet apply --verbose --modulepath='/tmp/packer-puppet-masterless/module-0' --manifestdir='/tmp/packer-puppet-masterless/manifests' --detailed-exitcodes /tmp/packer-puppet-masterless/manifests/site.pp
virtualbox-iso: /usr/bin/puppet: line 3: rvm: command not found
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Puppet exited with a non-zero exit status: 127
If I log in into the box via tty, I can run both rvm and puppet commands as vagrant user.
What did I do wrong?
I am trying out the exact same route as you:
Use the relevant scripts from this repo for provisioning the VM.
Use the Puppet scripts from a puphpet.com configuration to further provision the VM, using the puppet-masterless provisioner in Packer.
I'm still working on it and don't have a successful build yet, but I can share the following:
Inspect line 50 of puphpet/shell/install-puppet.sh: the puppet command will trigger rvm to be executed.
Inspect your Packer output during provisioning. You'll read something along the lines of:
...
Creating alias default for ruby-1.9.3-p551
To start using RVM you need to run `source /usr/local/rvm/scripts/rvm` in all
your open shell windows, in rare cases you need to reopen all shell windows.
Cleaning up rvm archives
....
Apparently the command source /usr/local/rvm/scripts/rvm is needed for each user that needs to run rvm. It is executed and added to the bash profiles in the script puphpet/shell/install-ruby.sh. However, this does not affect the context/scope of Packer's puppet-masterless execute_command, hence the line /usr/bin/puppet: line 3: rvm: command not found in your output.
My current way forward is the following configuration in template.json (the Packer template); the second and third lines will get you beyond the point where you are currently stuck:
{
"type": "puppet-masterless",
"prevent_sudo": true,
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
"manifest_file": "./puphpet/puppet/site.pp",
"manifest_dir": "./puphpet/puppet",
"hiera_config_path": "./puphpet/puppet/hiera.yaml",
"module_paths": [
"./puphpet/puppet/modules"
],
"facter": {
"ssh_username": "vagrant",
"provisioner_type": "virtualbox",
"vm_target_key": "vagrantfile-local"
}
},
Note the following things:
Running puppet as the vagrant user will probably not complete provisioning due to permission issues. In that case we need a way to run source /usr/local/rvm/scripts/rvm under sudo so that it affects the scope of the Puppet provisioning command.
The puphpet.com output scripts have /vagrant/puphpet hardcoded in their Puppet scripts (e.g. the first line of puphpet/puppet/nodes/Apache.pp). So you might need a Packer file provisioner to upload these files to your VM before executing puppet-masterless, so that it finds its dependencies under /vagrant/.... My packer.json configuration for this:
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"mkdir /vagrant",
"chown -R vagrant:vagrant /vagrant"
]
},
{
"type": "file",
"source": "./puphpet",
"destination": "/vagrant"
},
Puppet will need some Facter variables as they are expected in the puphpet/puppet/nodes/*.pp scripts. Refer to my template.json above.
As said, no complete Puppet provisioning success on my side yet, but the above got me beyond the point where you are currently stuck. Hope it helps.
Update:
I replaced my old execute command for puppet provisioner
"execute_command": "source /usr/local/rvm/scripts/rvm && {{.FacterVars}}{{if .Sudo}} sudo -E{{end}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
with a new one
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\""
This will ensure puppet (rvm) is running as root and finishes provisioning successfully.
As an alternative to my other answer, here are my steps and configuration to get this provisioning scenario working with Packer and puphpet.
Assuming the following are in place:
./: a local directory acting as your own repository, containing
./ops/: a directory ops which holds the Packer scripts and required files
./ops/template.json: the Packer template used to build the VM
./ops/template.json expects the following to be in place:
./ops/packer-templates/: a clone of this repo
./ops/ubuntu-14.04.2-server-amd64.iso: the ISO for the Ubuntu version you want running in your VM
./puphpet: the output of walking through the configuration steps on puphpet.com (so this is one level up from ops)
The contents of template.json:
{
"variables": {
"ssh_name": "vagrant",
"ssh_pass": "vagrant",
"local_packer_templates_dir": "./packer-templates/ubuntu-14.04-x86_64",
"local_puphput_dir": "../puphpet",
"local_repo_dir": "../",
"repo_upload_dir": "/vagrant"
},
"builders": [
{
"name": "ubuntu-14.04.amd64.virtualbox",
"type": "virtualbox-iso",
"headless": false,
"boot_command": [
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
"debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
"hostname={{ .Name }} ",
"fb=false debconf/frontend=noninteractive ",
"keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
"initrd=/install/initrd.gz -- <enter>"
],
"boot_wait": "10s",
"disk_size": 20480,
"guest_os_type": "Ubuntu_64",
"http_directory": "{{user `local_packer_templates_dir`}}/http",
"iso_checksum": "83aabd8dcf1e8f469f3c72fff2375195",
"iso_checksum_type": "md5",
"iso_url": "./ubuntu-14.04.2-server-amd64.iso",
"ssh_username": "{{user `ssh_name`}}",
"ssh_password": "{{user `ssh_pass`}}",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo '{{user `ssh_pass`}}'|sudo -S bash 'shutdown.sh'",
"guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
"virtualbox_version_file": ".vbox_version",
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "2048"],
["modifyvm", "{{.Name}}", "--cpus", "4"]
]
}
],
"provisioners": [
{
"type": "shell",
"execute_command": "echo '{{user `ssh_pass`}}'|sudo -S bash '{{.Path}}'",
"scripts": [
"{{user `local_packer_templates_dir`}}/scripts/base.sh",
"{{user `local_packer_templates_dir`}}/scripts/virtualbox.sh",
"{{user `local_packer_templates_dir`}}/scripts/vagrant.sh",
"{{user `local_packer_templates_dir`}}/scripts/puphpet.sh",
"{{user `local_packer_templates_dir`}}/scripts/cleanup.sh",
"{{user `local_packer_templates_dir`}}/scripts/zerodisk.sh"
]
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"mkdir {{user `repo_upload_dir`}}",
"chown -R vagrant:vagrant {{user `repo_upload_dir`}}"
]
},
{
"type": "file",
"source": "{{user `local_repo_dir`}}",
"destination": "{{user `repo_upload_dir`}}"
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"rm -fR {{user `repo_upload_dir`}}/.vagrant",
"rm -fR {{user `repo_upload_dir`}}/ops"
]
},
{
"type": "puppet-masterless",
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
"manifest_file": "{{user `local_puphput_dir`}}/puppet/site.pp",
"manifest_dir": "{{user `local_puphput_dir`}}/puppet",
"hiera_config_path": "{{user `local_puphput_dir`}}/puppet/hiera.yaml",
"module_paths": [
"{{user `local_puphput_dir`}}/puppet/modules"
],
"facter": {
"ssh_username": "{{user `ssh_name`}}",
"provisioner_type": "virtualbox",
"vm_target_key": "vagrantfile-local"
}
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"echo '{{user `repo_upload_dir`}}/puphpet' > '/.puphpet-stuff/vagrant-core-folder.txt'",
"sudo bash {{user `repo_upload_dir`}}/puphpet/shell/important-notices.sh"
]
}
],
"post-processors": [
{
"type": "vagrant",
"output": "./build/{{.BuildName}}.box",
"compression_level": 9
}
]
}
Narration of what happens:
Execute the basic provisioning of the VM using the scripts that are used to build puphpet boxes (the first shell provisioner block).
Create a directory /vagrant in the VM and set permissions for the vagrant user.
Upload the local repository to /vagrant (important, as puphpet/puppet expects it to exist at that location in its scripts).
Remove some unneeded stuff from /vagrant after the upload.
Start the Puppet provisioner with the custom execute_command and facter configuration.
Process the remaining provisioning scripts (to be extended with exec once/always and start once/always files).
Note: you might need to prepare some more things before the Puppet provisioner kicks off. E.g. I needed a directory in place that will be the docroot of a vhost in Apache. Use shell provisioning to complete the template for your own puphpet configuration.
I've got both Leiningen & Clojure working on Windows 8 separately from Sublime Text (e.g. I can get a repl working in Windows PowerShell).
My problem is that I can't get SublimeREPL working in Sublime Text (the REPL loads but then doesn't do anything). Are there any simple traps I might be missing or, failing that, a series of steps I could follow to troubleshoot?
Please see this SublimeREPL issue for instructions on how I got a Clojure REPL to work, at least on XP (I haven't tried it on Win7 or 8 yet). Basically, I edited the menu file for Clojure and changed the command from lein repl to lein trampoline run -m clojure.main, which for some reason did the trick. I also changed the working directory to $file_path so you can open a REPL while your project.clj is the current tab in Sublime, and the REPL will inherit the project's settings.
For reference, the complete Packages/User/SublimeREPL/config/Clojure/Main.sublime-menu file (Packages is accessible via Preferences -> Browse Packages...) is as follows:
[
{
"id": "tools",
"children":
[{
"caption": "SublimeREPL",
"mnemonic": "r",
"id": "SublimeREPL",
"children":
[
{"caption": "Clojure",
"id": "Clojure",
"children":[
{"command": "repl_open",
"caption": "Clojure Trampoline",
"id": "repl_clojure",
"args": {
"type": "subprocess",
"encoding": "utf8",
"cmd": {"windows": ["lein.bat", "trampoline", "run", "-m", "clojure.main"],
"linux": ["lein", "repl"],
"osx": ["lein", "repl"]},
"soft_quit": "\n(. System exit 0)\n",
"cwd": {"windows":"$file_path",
"linux": "$file_path",
"osx": "$file_path"},
"syntax": "Packages/Clojure/Clojure.tmLanguage",
"external_id": "clojure",
"extend_env": {"INSIDE_EMACS": "1"}
}
},
{"command": "clojure_auto_telnet_repl",
"id": "repl_clojure_telnet",
"caption": "Clojure-Telnet"}]}
]
}]
}
]
I solved this problem with the Git Bash shell: I used the shell-script version of Leiningen instead of lein.bat.
This is the command I use:
["C:\\Program Files\\Git\\bin\\sh.exe", "-l", "--", "/d/lein.sh", "repl"]
assuming lein.sh is in d:\
The lein trampoline command sometimes behaves differently from lein repl and can fail for unknown reasons.