CrashPlan on FreeNAS missing /var/lib/crashplan/.ui_info

I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server and have found lots of tutorials for doing this. The problem is that the .ui_info file is missing on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that does not get my CrashPlan Pro app to connect to the CrashPlan service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
CrashPlan 3.6.3_1 plugin

CrashPlan's remote-access behaviour has changed several times over the last few updates; with version 3.6.3_1 you should find the .ui_info file at
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; you can check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want CrashPlan to update itself anyway. If the update process produces an error related to bash, run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
Then restart CrashPlan while watching the log output with the tail -f command from above:
service crashplan restart
Once you have reached a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server, for the easiest method (without an SSH tunnel), is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
    <serviceHost>0.0.0.0</serviceHost>
    ...
</serviceUIConfig>
Next, copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address). Either do this every time you want to connect, because the token will change after every CrashPlan restart, or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
For example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; no other changes are necessary. Recent CrashPlan versions (>4.4.1) will actually use the IP address from .ui_info.
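If you prefer to script the copy step instead of using the gist, a minimal sketch could look like the following (run on the desktop machine; the server address, SSH access to the jail, and the local destination path are all assumptions you will need to adapt):
# copy the token file from the server and point it at the server's address
SERVER=192.168.1.20                          # assumed server address
UI_INFO_DEST="$HOME/.crashplan/.ui_info"     # hypothetical local path; differs per OS/client
scp root@$SERVER:/var/lib/crashplan/.ui_info "$UI_INFO_DEST"
# replace the host field at the end of the line with the server's address
sed -i.bak "s/[^,]*$/$SERVER/" "$UI_INFO_DEST"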

Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

Related

Can't connect VS Code to Linux machine for remote development

I am getting this error in VS Code and have no clue why it fails:
[15:14:59.543] Log Level: 2
[15:14:59.555] remote-ssh#0.51.0
[15:14:59.555] win32 x64
[15:14:59.560] SSH Resolver called for "ssh-remote+xx.xx.xx.xx", attempt 1
[15:14:59.561] SSH Resolver called for host: xx.xx.xx.xx
[15:14:59.561] Setting up SSH remote "xx.xx.xx.xx"
[15:14:59.621] Using commit id "0ba0ca52957102ca3527cf479571617f0de6ed50" and quality "stable" for server
[15:14:59.624] Install and start server if needed
[15:15:01.964] getPlatformForHost was canceled
[15:15:01.965] Resolver error: Connecting was canceled
[15:15:01.973] ------
Add one key to your settings.json as below. Remember to replace $remote_server_name with yours.
"remote.SSH.remotePlatform": {
"$remote_server_name": "linux"
}
Menu: File -> Preferences -> Settings
Or click the icon to open settings.json:
In the dialog box where you have typed user@host, type/select Linux/Windows/etc. depending on what you are using, then type/select Continue, then type the password for the remote session.
For those getting this error on Windows: Check if you have multiple ssh clients installed.
How I solved it was by adding my ssh-configuration to ALL ssh-config files.
In my case I had one in
C:\Users\USER_NAME\.ssh\config (this is the one that the Remote extension used to give me connection options)
and another in C:\Program Data\ssh\ssh-config
After adding my ssh config settings to both, I got the prompt to select the virtual host's OS. I tried editing the settings.json file directly, but I think it gets confused because of the multiple ssh configurations.
P.S.
Tested it for both private-key and password-enabled connections and it works with either.
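For reference, the kind of entry I duplicated into both config files looks roughly like this (the host alias, address, user name and key path are placeholders, not my real values):
Host my-remote
    HostName xx.xx.xx.xx
    User my_user
    IdentityFile C:\Users\USER_NAME\.ssh\id_rsa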
I had a similar problem, but the error logs were bigger. Before that, I had deleted Python and reinstalled it; perhaps that led to the problem. Simply reinstalling the "Remote - SSH" extension in VS Code worked for me.
In my case there were two files on the remote server, in the folder /run/user/1000/, that looked like
vscode-remote-lock.<user>.<xxx>
vscode-remote-lock.<user>.<xxx>.target
where <user> was my remote user name and <xxx> the VS Code Remote Server build hash.
I deleted both files and then VS Code came up right away. I have encountered this a few times now. VS Code Remote Server install is not very robust. I use it on about 7 remote machines and every once in a while something goes awry and it cannot recover from simple errors and gets stuck in installation loops.
This trick only works if there is a valid ~/.vscode-server on the remote machine with a hash that matches your local VS Code installation.
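For the record, a minimal sketch of that cleanup on the remote machine (the user id 1000 and the file names follow the example above):
# list and then remove the stale Remote Server lock files
ls /run/user/1000/ | grep vscode-remote-lock
rm /run/user/1000/vscode-remote-lock.*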
If you got here because you were trying to install VS Code in the first place and for whatever reason VS Code had issues with the remote installation, I highly recommend installing it manually by downloading and extracting the tar file to the remote machine directly.
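A rough sketch of that manual install, assuming a Linux x64 remote host; the commit id must match the one your local VS Code reports (Help -> About; the value below is the one from the log at the top of this question), and the download URL pattern is the one the Remote extension itself uses:
# on the remote machine: install the matching VS Code server build by hand
COMMIT=0ba0ca52957102ca3527cf479571617f0de6ed50   # replace with your local VS Code commit id
mkdir -p ~/.vscode-server/bin/$COMMIT
wget -O /tmp/vscode-server.tar.gz \
  "https://update.code.visualstudio.com/commit:$COMMIT/server-linux-x64/stable"
tar -xzf /tmp/vscode-server.tar.gz -C ~/.vscode-server/bin/$COMMIT --strip-components 1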
I have tried playing with the "Remote.SSH: Use Flock" setting and other tricks posted on Stack Overflow, but none of these work for me whenever I have remote installation issues. I cannot figure out why a smooth remote installation is not possible on some machines, even when all of my SSH keys and remote ids have been copied and tested from both the Windows command line and inside a WSL Ubuntu instance.
If VS Code Remote Server installation had slightly better error logic and better error messages none of us would be wasting hours doing this simple task.
I was getting the exact same error as the original poster, and yet none of the other answers addressed my issue.

"Windows Subsystem for Linux has no installed distributions" even though 'Ubuntu' is installed

I recently moved my WSL directory to another drive due to low storage on the C: drive. As per the answer provided in this Stack Overflow post, I used the LxRunOffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. As expected, wsl didn't work. Then I reverted to the name wsl and expected it to work, but to my surprise it no longer found any installed distribution, even though it is installed... 😕
E:\> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...
Well, I finally found a solution to this problem. 😊
It is as simple as registering the distribution with the LxRunOffline tool, using the rg or register command.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
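With the name and directory filled in, the full command looks something like this (the distribution name Ubuntu and the E:\wsl directory are taken from the example above; adjust them to your setup):
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg -n Ubuntu -d E:\wsl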
After running the register command, I was able to start wsl as usual. However, that logs you in as the root user and therefore starts in the /root directory. I ran the following command to start wsl as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>

virsh console hangs whenever I connect to Virtual Machine

Whenever I try to connect to a VM using virsh console <vm name>, my screen hangs and displays:
Connected to domain <vm name>
Escape character is ^]
I have found many solutions on the internet but nothing has worked for me, and I am not even able to find the /etc/init directory, as CentOS 7 has a different directory structure.
I need the /etc/init directory to create a script that I found on the internet as a solution.
I am using only ssh connection and no GUI and I do not have any access to the physical machine.
I think you should start a console (e.g. ttyS0).
For example, on my Debian 8 I enable it with systemd:
systemctl enable getty@tty1.service
Enable Serial Console on CentOS/RHEL 7
On the virtual machine, add console=ttyS0 at the end of the kernel lines in the /boot/grub2/grub.cfg file:
grubby --update-kernel=ALL --args="console=ttyS0"
Note: Alternatively, you can edit the /etc/default/grub file, add console=ttyS0 to the GRUB_CMDLINE_LINUX variable, and set:
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
Then regenerate the config with:
grub2-mkconfig -o /boot/grub2/grub.cfg
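You typically also want a getty listening on that serial port so that virsh console shows a login prompt. A minimal sketch for the CentOS 7 guest (assuming ttyS0):
# inside the guest: enable a login prompt on the serial console
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
# reboot the guest if you changed the kernel command line, then reconnect from the host:
virsh console <vm name>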
I had the same issue right after virt-install, and then again when trying to connect to the guest. I tried all the suggested solutions but none of them helped. Then I realized that I had forgotten to install KVM. A simple 'yum -y install kvm' resolved the issue.

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI, it says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix (https://superuser.com/a/343775/298915), "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0", did nothing to fix the issue.
Conclusion:
I tried re-installing everything, thinking I had messed up a file, but that did not ameliorate the issue. I am unable to get past it. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check with ls -l path and change with chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
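A minimal sketch of that key setup on the guest (assuming the vagrant user and the paths above; 700/600 are the usual SSH permissions):
# run on the guest as the vagrant user
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
cat /home/vagrant/.ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys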
Boot your VM with the VirtualBox GUI (either through the Vagrantfile GUI option, or by starting your VM directly in VirtualBox). Log in with vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
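As for eth0 not coming up by itself: on a Debian guest this is usually controlled by /etc/network/interfaces, so an entry along these lines (a guess, not verified against this particular box) should make the manual dhclient step unnecessary:
# /etc/network/interfaces on the guest
auto eth0
iface eth0 inet dhcp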
Suggestion: can this be saved as a script so we don't need to do this manually every time?
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?

Mercurial hg no suitable response from remote hg error

I've been trying to set up a Mercurial SCM server on my Windows server (2008 RC) for the last couple of hours. I am stuck on this error when I try to clone my repo from the client machine.
Error: no suitable response from remote hg
The server that I am running has SSH access (SSH running on port 1667). I also have remote access to it.
I tried to clone using the command line as well as with the TortoiseHg GUI client. The commands I tried are:
hg clone ssh://myuser@myremoteip:1667//D:/Mercurial Projects/testproj E:\Mercurial\testproj-clone
hg clone --remotecmd D:/Program Files/TortoiseHg/hg --verbose -- ssh://myuser@myremoteip:1667//D:/Mercurial Projects/testproj E:\Mercurial\testproj-clone
but no success so far.
I also added the following lines to the global settings on the client side to give the remote path of hg on the server, but no luck:
[ui]
remotecmd = D:/Program Files/TortoiseHg/hg
Please help me...
I had a similar problem and in my case it was that the computer had both TortoiseSVN and TortoiseHG installed. Both TortoiseHG and TortoiseSVN have a command TortoisePlink.exe that they use. However, due to the PATH, TortoiseHG was using TortoiseSVN's TortoisePlink.exe.
Uninstalling TortoiseSVN solved the problem for me.
You may open a "cmd" window and type:
where TortoisePlink.exe
to check what TortoisePlink.exe is used.
I think the problem was that my Python version was older than the one I needed. I was trying to set it up with Python 2.6. I followed another tutorial with Python 2.7 and the latest Mercurial version (2.8.1).
Anyone with Windows Server 2008 and IIS 7+ should follow this tutorial.
I ran into this problem after updating TortoiseHg. It turned out the location of TortoisePlink.exe had changed. I had it set explicitly to C:\Program Files\TortoiseHg\TortoisePlink.exe in mercurial.ini and I had to change it to C:\Program Files\TortoiseHg\lib\TortoisePlink.exe.
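For reference, the kind of mercurial.ini entry that has to point at the new location looks roughly like this (the -P port matches the question above; treat this as a sketch and adjust it to however your setup invokes TortoisePlink):
[ui]
ssh = "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -ssh -P 1667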