Whenever I try to connect to a VM using virsh console <vm name>, my screen hangs and displays:
Connected to domain <vm name>
Escape character is ^]
I have found many solutions on the internet, but nothing has worked for me, and I am not even able to find the /etc/init directory, as CentOS 7 has a different directory structure.
I need the /etc/init directory to create a script that I found on the internet as a solution.
I am using only an SSH connection with no GUI, and I do not have any access to the physical machine.
I think you need to start a getty on the console (e.g. ttyS0).
For example, on my Debian 8 I enable it with systemd:
systemctl enable getty@tty1.service
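For the serial console the question asks about (ttyS0), the systemd unit on CentOS 7 would normally be serial-getty@ttyS0.service rather than a virtual terminal; a minimal sketch, run inside the guest:
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service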
Enable Serial Console on CentOS/RHEL 7
On the virtual machine, add console=ttyS0 to the end of the kernel lines in the /boot/grub2/grub.cfg file:
grubby --update-kernel=ALL --args="console=ttyS0"
Note: Alternatively, you can edit the /etc/default/grub file, add console=ttyS0 to the GRUB_CMDLINE_LINUX variable and execute
grub2-mkconfig -o /boot/grub2/grub.cfg
To have the GRUB menu itself use the serial console as well, also add these lines to /etc/default/grub before running grub2-mkconfig:
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
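To verify the change after rebooting the guest, a minimal sketch using only standard tools:
# inside the guest: confirm the kernel was booted with the serial console argument
grep -o 'console=ttyS0' /proc/cmdline
# on the host: the console should now present a login prompt instead of hanging
virsh console <vm name>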
I had the same issue right after virt-install, and then again when trying to connect to the guest. I tried all the suggested solutions, but none of them helped. Then I realized that I had forgotten to install KVM. A simple yum -y install kvm resolved the issue.
I'm "extremely" new to Kubernetes, and I wanted to try it out on my local machine, which is running Windows 10 along with HyperV. I saw that minikube is used for local development, and I was able to find in on Chocolatey, so I installed it using that:
choco install minikube -y
(I think this also installs kubectl)
The problem I have is that I'm not able to start it; I'm running the following command:
minikube start --vm-driver=hyperv
I have an external switch configured in Hyper-V (I found it as a suggestion somewhere), but when I run the command, it gets stuck at Creating VM ...
I thought maybe it would give me a clue if I looked at the VM created in Hyper-V, and when I open its console I see it sitting at a login prompt (minikube login:).
So, it seems that it's waiting for input, and that's why it's stuck! I tried searching for the problem, but to no avail.
I would appreciate any help.
PS: It seems to me that if I wait long enough, the following message appears on the console:
Temporary Error: provisioning: error getting ssh client: Error dialing
tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate,
attempted methods [none publickey], no supported methods remain
So, somewhat by chance, I think I found out how to resolve the issue.
First of all: the fact that the VM is displaying that prompt (minikube login) seems to be normal, and it does NOT prevent minikube start from succeeding.
To resolve the issue, this is what I did:
Delete ~/.kube directory
Delete ~/.minikube directory (in case it exists)
The MOST IMPORTANT step: stop/start the Hyper-V Virtual Machine Management Windows service
These steps seem to have solved the issue for me (a PowerShell sketch of step 3 follows below).
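A minimal PowerShell sketch of step 3, assuming an elevated prompt (vmms is the service name behind "Hyper-V Virtual Machine Management"):
Restart-Service -Name vmms   # stops and then restarts the Hyper-V management service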
PS: I used this command to start minikube and enable verbose logging:
minikube start --vm-driver hyperv -v 7 --alsologtostderr
Farzad, what resources have you used for setting up minikube? Can you please clarify what you mean by "unable to start"? Are the regular kubectl commands working?
For example, kubectl get nodes? That is, of course, only if the steps below don't help you.
The screenshot you shared shows a running VM:
Minikube runs a single-node Kubernetes cluster inside a VM on your
laptop for users looking to try out Kubernetes or develop with it
day-to-day.
You mentioned that you've created the vSwitch; you should be using a flag that points minikube at that external vSwitch:
minikube start --vm-driver hyperv --hyperv-virtual-switch "vSwitch name"
You also mentioned choco; did you install kubernetes-cli (you did not mention it in the question)? That might be the reason why your commands do not work (it seems the new version downloads kubectl along with choco install minikube):
kubectl is a command line interface for running commands against
Kubernetes clusters
At this moment I recommend stopping the minikube VM:
minikube stop
Delete the cluster:
minikube delete
Sometimes the regular minikube stop and minikube delete do not work, so you might have to manually turn off the minikube VM in Hyper-V; then I recommend going to c:\users\%username%\ and deleting .kube and .minikube (see the sketch below).
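A hedged PowerShell sketch of that manual cleanup, assuming an elevated prompt and the default VM name minikube:
Stop-VM -Name minikube -TurnOff                        # force the VM off if stop/delete hang
Remove-Item -Recurse -Force "$env:USERPROFILE\.kube"
Remove-Item -Recurse -Force "$env:USERPROFILE\.minikube"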
Use cuninst minikube
Restart and install again as specified in the minikube documentation:
choco install minikube
choco install kubernetes-cli
As for the error you mentioned, let's try to run the cluster properly, and if this persists, we will take care of it.
Try this:
kubectl config use-context minikube
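Once the context is set, a quick sanity check that the cluster is actually up (standard kubectl commands, nothing specific to this setup):
kubectl get nodes      # should show a single node in Ready state
kubectl cluster-info   # prints the address of the running cluster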
I encountered the same issue. The reason was that I had chosen the wrong disk file to start my VM after creating it in VirtualBox.
This solved my issue.
minikube delete
minikube start --vm-driver hyperv -v 7 --alsologtostderr
I have a dual-boot setup with Windows 10 and Kubuntu 18. Following instructions found here and there, I managed to get Windows to run as a guest of the Kubuntu host as a VM using VirtualBox.
sudo usermod -a -G disk $USER
VBoxManage internalcommands createrawvmdk -filename "/path/to/vm/win10.vmdk" -rawdisk /dev/sda -partitions 1,3,4 -relative
The first command is there to avoid having to run VirtualBox as superuser.
When I boot the VM, I briefly see an error message
Boot Failed. EFI DVD/CDROM
SystemBootOrder not found. Initializing defaults.
Creating boot entry "Boot0003" with label "ubuntu" for file "\EFI\ubuntu\shimx64.efi"
and then end up in the GRUB shell. Now, when I run the commands
insmod chain
set root=(hd0,gpt1)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
Windows boots and works just fine, but entering these commands every time is not exactly a smooth workflow. Any idea how to fix this permanently?
Please note that I'd still like to be able to physically boot into both OS's.
Thanks,
I had the same problem. I fixed it, but then updated my kernel and so grub re-un-fixed it for me! Figuring it out for the second time was quicker, but I figured it'd be even quicker next time to find my answer on StackOverflow!
My grub.cfg file in /boot/efi/EFI/ubuntu looked like this:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
Because we have set up the VirtualBox vmdk file with only the partitions Windows needs, the search.fs_uuid command was failing, $root was empty, and so GRUB couldn't find $prefix/grub.cfg (/boot/grub/grub.cfg in my Linux rootfs, which is on sda6 == gpt6).
I automated it by changing the EFI grub.cfg; note that my EFI System Partition is 2, not 1 as in your example:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
if [ -f $prefix/grub.cfg ]
then
configfile $prefix/grub.cfg
else
insmod chain
set root=(hd0,gpt2)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
fi
Now, if GRUB can find the cfg file, it will give me the menu to select the boot entry as before; but if it can't (which is the case when I'm in VirtualBox), it'll just boot straight into Win10.
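If you are unsure which (hdX,gptY) numbers to use in the chainloader branch, the GRUB shell itself can tell you: ls on its own lists every partition GRUB can see, and ls with a path shows its contents, so the EFI System Partition is the one where the Microsoft boot manager turns up. A small sketch:
ls
ls (hd0,gpt2)/EFI
ls (hd0,gpt2)/EFI/Microsoft/Boot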
Hope this helps!
The Question
I'm trying to enable X11 forwarding through the PyCharm SSH Terminal which can be executed via
"Tools -> Start SSH session..."
Unfortunately, it seems there is no way of specifying the flags I would use in my shell to enable X11 forwarding:
ssh -X user@remotehost
Do you know some clever way of achieving this?
Current dirty solution
The only dirty hack I found is to open an external ssh connection with X11 forwarding and then manually update the environment variable DISPLAY.
For example, I can run in my external ssh session:
vincenzo@remotehost:$ echo $DISPLAY
localhost:10.0
And then set in my PyCharm terminal:
export DISPLAY=localhost:10.0
or update the DISPLAY variable in the Run/Debug Configuration, if I want to run the program from the GUI.
However, I really don't like this solution of using an external ssh terminal and manually updating the DISPLAY variable, and I'm sure there's a better way of achieving this!
Any help would be much appreciated.
P.S. Making an alias like:
alias ssh='ssh -X'
in my .bashrc doesn't force PyCharm to enable X11 forwarding.
So I was able to patch up jsch and test this out and it worked great.
Using X11 forwarding
You will need to do the following to use X11 forwarding in PyCharm:
- Install an X Server if you don't already have one. On Windows this might be the VcXsrv project, on Mac OS X the XQuartz project.
- Download or compile the jsch package. See instructions for compilation below.
- Back up jsch-0.1.54.jar in your PyCharm lib folder and replace it with the patched version (a sketch of this step appears after the compilation instructions below).
- Start PyCharm with a remote environment and make sure to remove any instances of the DISPLAY environment variable you might have set in the run/debug configuration.
Compilation
Here is what you need to do on a Mac OS or Linux system with Maven installed.
wget http://sourceforge.net/projects/jsch/files/jsch/0.1.54/jsch-0.1.54.zip/download
unzip download
cd jsch-0.1.54
sed -e 's|x11_forwarding=false|x11_forwarding=true|g' -e 's|xforwading=false|xforwading=true|g' -i src/main/java/com/jcraft/jsch/*.java
sed -e 's|<version>0.1.53</version>|<version>0.1.54</version>|g' -i pom.xml
mvn clean package
This will create jsch-0.1.54.jar in the target folder.
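A hedged sketch of the backup-and-replace step from the list above, assuming a hypothetical Linux install of PyCharm under /opt/pycharm (the lib path differs per platform; on macOS it sits inside PyCharm.app/Contents):
cd /opt/pycharm/lib
cp jsch-0.1.54.jar jsch-0.1.54.jar.bak            # keep a backup of the bundled jar
cp ~/jsch-0.1.54/target/jsch-0.1.54.jar .         # drop in the patched build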
Update 2020:
I found a very easy solution. It may be due to the updated PyCharm version (2020.1).
Ensure that X11Forwarding is enabled on the server: in /etc/ssh/sshd_config set
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
On the client (macOS for me): in ~/.ssh/config set
ForwardX11 yes
In PyCharm, deselect Include system environment variables. This resolves the issue where the DISPLAY variable gets set to the system value.
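After editing sshd_config, the SSH daemon needs to be restarted for the new settings to take effect; a minimal sketch of that plus a quick forwarding test from a plain terminal (xeyes is just a stand-in for any X client you have installed):
sudo systemctl restart sshd     # on the server; the unit may be named ssh on Debian/Ubuntu
ssh -X user@remotehost          # from the client; ForwardX11 yes makes -X the default anyway
echo $DISPLAY                   # should print something like localhost:10.0
xeyes &                         # a window should open on the local X server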
EDIT: It works; for example, I used the PyTorch implementation of DeepLab and visualized sample images from PASCAL VOC.
X11 forwarding was implemented in 2021.1 for all IntelliJ-based IDEs. If it still doesn't work, please consider creating a new issue at youtrack.jetbrains.com.
By the way, the piece of advice about patching jsch won't work for any IDE newer than 2019.1.
In parallel, open MobaXTerm and connect with the X11 forwarding checkbox enabled. Now PyCharm will forward the display through the MobaXTerm X11 server.
This is a workaround until PyCharm adds this 'simple' feature.
Also, set DISPLAY environment variable in PyCharm run configuration like this:
DISPLAY=localhost:10.0
(the right-hand side should be obtained with the command echo $DISPLAY on the server side)
Update 2022: for PyCharm newer than 2022.1, plotting in SciView works by setting only ForwardX11 yes in .ssh/config (my laptop OS is Ubuntu 22.04). I did not set any other parameters on either the server or the local side.
So I have spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from my desktop PC, but that does not help my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote-access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your CrashPlan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method (without an SSH tunnel) is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect, because the token will change after every CrashPlan restart, or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; there are no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
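A hedged sketch of the copy step, assuming the FreeNAS box is reachable over SSH at 192.168.1.20 (as in the example line above) and that you have already located the .ui_info path used by your local CrashPlan client:
scp root@192.168.1.20:/var/lib/crashplan/.ui_info /tmp/.ui_info
# open /tmp/.ui_info, change the IP address at the end to 192.168.1.20,
# then move it to wherever your local CrashPlan client keeps .ui_info
mv /tmp/.ui_info "/path/to/local/CrashPlan/.ui_info"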
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.
I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix (https://superuser.com/a/343775/298915), "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0", did nothing to resolve the issue.
Conclusion:
I tried re-installing everything, thinking I had messed up a file, but that did not ameliorate the issue. I am stuck on this. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is a Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
Boot your VM with the VirtualBox GUI (either through the Vagrantfile GUI-boot option, or by starting your VM from VirtualBox directly). Log in using vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me, though it may be different for other machines; for whatever reason, Vagrant likes to break. A rough script of the in-guest steps is sketched below.
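A hedged sketch of the in-guest portion as one script, assuming the Debian guest described above; it just restates steps 1-4 and is not an official Vagrant fix:
#!/bin/sh
# run inside the guest from the VirtualBox console after logging in as vagrant/vagrant
sudo dhclient eth0 -v                                  # bring networking up first
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
# append the public key that config.ssh.private_key_path points at
cat /home/vagrant/.ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh
# afterwards, on the host: vagrant reload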
Suggestion: can this be saved as a script so we don't need to do this manually every time?
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?