Launch external editor in virtual machine

On my local machine, I have TextMate installed, and in the terminal I can run things like mate index.html, which launches TextMate.
However, now I'm using Vagrant and VirtualBox. On the virtual machine, I can't use mate anymore. Is it possible to use an external editor like TextMate on virtual-machine files?

As of Vagrant 1.1+, Shared Folders have been renamed to Synced Folders. By default, the directory where the Vagrantfile resides is mounted as /vagrant in the guest via vboxsf. You can add more synced folders and change the default behaviour.
Please refer to the v2 docs: http://docs.vagrantup.com/v2/synced-folders/index.html
BTW: If your host is Linux, sshfs is a good alternative.
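For example, adding an extra synced folder is a one-line entry in the Vagrantfile (a sketch; the host and guest paths here are illustrative):

```ruby
Vagrant.configure("2") do |config|
  # The Vagrantfile's own directory is already synced to /vagrant by default.
  # Extra synced folders take the host path first, then the guest path.
  config.vm.synced_folder "../code", "/home/vagrant/code"
end
```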

The easiest way is to mount a shared folder and edit from your host. This is easily done with Vagrant's shared folders :) Depending on your situation, you may want your files to live permanently on the host or on the guest.
You can check the documentation for this: http://docs-v1.vagrantup.com/v1/docs/config/vm/share_folder.html

Issue with creating virtual environment on wsl2 and onedrive

I'm using Windows Subsystem for Linux (WSL2), and OneDrive mounted using rclone, on a Windows 10 machine.
When using WSL2 in the local directories, I can create a virtual environment for a project:
python -m venv myenv/
However, if I'm in a directory on OneDrive, and I run this command, I get an error:
Error: [Errno 5] Input/output error: 'lib' -> '/home/andrew/onedrive/myproject/venv/lib64'
If I look in the myproject directory, I can see that the venv directory has been created. However, it is incomplete, in that it only has 'lib' and 'include' subdirectories. When it is created properly (that is, in a directory not on OneDrive), it has 'lib', 'include', 'bin', 'lib64', 'share', and 'pyvenv.cfg'.
'lib64' is a symbolic link pointing to 'lib' in a normal installation. In the error message above, it seems that 'lib' is actually pointing to 'lib64', so I suppose this is the cause of the input/output error?
Is there a way to get venv to work on OneDrive directories when mounted via rclone and using WSL2?
My hunch is that something in the way Rclone works is causing your issue. It purports to be "inspired by rsync", which means that it isn't really "mounting", but is "syncing" the cloud storage.
I did a quick install of rclone in a new WSL instance, and I'm seeing the same "Input/Output error" that you experienced.
I'm guessing that you would see the same problem even under a "full" Linux installation. In other words, I doubt this has anything to do with the interaction between WSL and rclone. It's probably more about the interaction between Python's venv module and rclone. I don't know for sure, but I have my doubts that rclone is designed for "live" usage; it's more for being able to access (and write) files on cloud-storage providers from Linux.
The good news is that there's likely an alternative. Windows automatically makes your OneDrive folder available in %userprofile%\OneDrive. From WSL, you can access this with:
cd $(powershell.exe -c 'Write-Host -NoNewLine $env:userprofile' | xargs -0 wslpath)/OneDrive
That's usually /mnt/c/Users/<yourusername>/OneDrive.
From there, python3 -m venv myvenv/ worked correctly for me.
That said, I highly recommend against doing this in WSL2, as performance of NTFS files is abysmal. Best to use a WSL1 instance if you really need to do it.
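As a quick sanity check, you can create a venv on a plain local path and compare its layout with the truncated one on OneDrive (a sketch; --without-pip just skips the pip bootstrap to keep it fast):

```shell
# Create a venv on a normal local filesystem and inspect its layout.
# A complete venv has bin/, include/, lib/, and pyvenv.cfg (and, on
# 64-bit Linux, a lib64 -> lib symlink).
python3 -m venv --without-pip /tmp/venv_layout_check
ls /tmp/venv_layout_check
```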

Singularity sandbox file management

Fed up with struggling with library install/dependency problems, I'm starting to work with Singularity.
Though, I'm not sure I understand precisely how it handles file management in sandbox mode (programs, not data).
For example, I designed a very simple definition file that is just a "naked" Debian:
Bootstrap: library
From: debian
%post
apt-get update
I create a sandbox with this to add stuff:
sudo singularity build --sandbox Test/ naked_Debian.def
And I try to install a program. But what I don't understand is this: I managed to install it, then removed the sandbox directory, but I think there are still files that were created during the sandbox's life (in /dev, /run, /root, etc.). For example, the program that I cloned from git is now in /root on my local machine (independently of any container).
From what I understood, everything was in the container and should disappear if I remove the sandbox directory. Otherwise, I'm going to leave a lot of mess behind with all my tests, and I can't port the container from one system to another.
If I create any new sandbox directory, the program is already there.
Cheers,
Mathieu
By default, singularity mounts $HOME to the container and uses that path as the working directory for singularity shell / exec. Since you're running the sandbox with sudo, /root is being mounted in and that's where any repos you cloned would end up if you didn't cd to a different directory. /tmp is also automatically mounted in, though that is unlikely to cause an issue since it's just temp files.
You have a few options to avoid files ending up in places you don't expect.
Disable automount of home: singularity shell --no-home ...
The default working directory is now / instead of $HOME, and files are created directly in the sandbox (as opposed to a mounted-in directory)
If you want to get files out of the sandbox, you'll need to copy them to /tmp inside the container, and then, on the host OS, from /tmp to the desired location
Set a different location to use as home: singularity shell --home $PWD ...
This mounts in and uses the current directory as $HOME instead of the user's $HOME on the host OS
Simpler to move files between host OS and container, but still creates files in the host OS
Don't mount system directories at all: singularity shell --contain --workdir /some/dir ...
Directories for /tmp and /var/tmp are created inside /some/dir instead of using /tmp and /var/tmp on the host. $HOME has the same path as on the host and is used as the working directory, but it is empty and separate from the host OS
Complete separation from host OS, while still allowing some access between container and OS
Additional details on these options can be found in the documentation.
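Put side by side, the three invocations described above look roughly like this (a sketch; it assumes Singularity is installed and Test/ is the sandbox from the question):

```shell
# Option 1: don't mount $HOME; the working directory becomes /
sudo singularity shell --no-home Test/

# Option 2: use the current directory as $HOME inside the container
sudo singularity shell --home "$PWD" Test/

# Option 3: contained session; /tmp and /var/tmp live under /some/dir
sudo singularity shell --contain --workdir /some/dir Test/
```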

Exposing application's codebase to the vagrant instance

I'm trying to run an application using Vagrant. I have a directory where the app's codebase is placed, along with the .vagrant directory that is created there after initialization. It looks like this:
[app_codebase_root]/.vagrant/machines/default/virtualbox
There is a very short manual about what to do (https://github.com/numenta/nupic/wiki/Running-Nupic-in-a-Virtual-Machine), and I stopped at point 9, which says:
9) Expose [app] codebase to the vagrant instance... If you have the
codebase checkout out, you can copy or move it into the current
directory...
So it's not clear to me what to copy, and where to. Does it mean some place within Vagrant (if yes, which exactly?) or somewhere else? Or should I just run vagrant ssh now?
From the Vagrant documentation:
By default, Vagrant will share your project directory (the directory with the Vagrantfile) to /vagrant.
So you should find your codebase root under /vagrant on your guest.
This is always going to be a little confusing, so you need to separate the concepts of the host system and the VM.
Let's say the shared directory (the one with the Vagrantfile) is [something]/vagrant on your host system. Copy your app directory to [something]/vagrant/nupic (or run git clone in that directory) while still in Windows. Check using Windows Explorer that you see all the source files.
In a console window, cd to [something]/vagrant and run vagrant ssh.
You are now in the VM, so everything is now the VM's filesystem. Your code is now in /vagrant/nupic. Edit .bashrc as per the instructions to point to this directory, and run the build commands.
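Put together, the host-side and guest-side steps look roughly like this (a sketch; the repo URL is illustrative, and [something] stands for wherever your Vagrantfile directory lives):

```shell
# On the host: put the code next to the Vagrantfile so it gets synced.
cd [something]/vagrant
git clone https://github.com/numenta/nupic.git nupic

# Enter the VM; the project directory appears at /vagrant inside it.
vagrant ssh

# Now inside the VM's filesystem:
ls /vagrant/nupic
```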

Permissions issue with vagrant virtualbox

I have a Debian VirtualBox VM set up with Vagrant. In it I have the codebase for the project I'm working on, and I've set the folder holding this codebase to be synced with the host machine (Mac OS 10.8.4). I just learned that in order to change the permissions on any subfolders of my synced folder I must do so from the host machine. My problem is that the application actually creates folders (and subfolders) and then expects to be able to write to them. Since the VM doesn't have the ability to chmod its own folders, these folders are not created with write access by default. How can this be achieved?
Note: I've already tried using umask from both the host and the VM. It works on the host, but since those changes are per terminal they don't propagate to the VM; using it on the VM doesn't work because the folders are managed by the host.
umask should be the way to go.
To make it persistent, you just need to add umask 027 (or whatever mask you want) to ~/.bash_profile for interactive login shells or ~/.bashrc for interactive non-login shells for the user who will be running the application, or make it system-wide by placing it in /etc/profile.
NOTE: Ubuntu uses ~/.profile and does NOT have ~/.bash_profile.
Alternatively, setting umask just before running the application would probably help.
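For example, with umask 027 in effect, new directories come out as 750 and new files as 640 (a quick sketch you can run in any shell; the paths are throwaway):

```shell
# umask 027 clears write for group and all permissions for others.
umask 027
mkdir /tmp/umask_demo_dir
touch /tmp/umask_demo_file
stat -c '%a' /tmp/umask_demo_dir    # directories: 777 & ~027 = 750
stat -c '%a' /tmp/umask_demo_file   # files:       666 & ~027 = 640
```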

Redcar Install Direct Options

Is there a way to direct the install of redcar to a user defined location other than a user home directory?
I have a JRuby install on a USB drive, E:\jruby-1.6.2. Redcar installs the gems to the E:\jruby subdirectory but then installs the user files to ~/ on C:.
Is there a way to direct it to e:\fakehome. I want to keep all installation files on my USB drive.
I do not have a Redcar-specific solution, but here is a general solution that may work for you.
PROBLEM:
The user has an application that installs application data files to a fixed location, but the user wants the files in a different location (such as a removable drive or a standardized app data directory).
SOLUTION:
Use a Junction Point or Symlink to simulate the presence of the pre-configured directory.
STEPS:
install the application normally
locate the pre-configured directory that you wish to have relocated (e.g. c:\users\foouser\appdata\fooapp)
create an empty directory with the same name in your alternate desired location (e.g., e:\myusbdrive\appdata\fooapp)
terminate the application you just installed if it is still running
move all of the files out of the pre-configured directory and put them in the desired directory
delete the toplevel pre-configured directory
create a junction at the top-level pre-configured path that points to the directory in your alternate desired location
restart the application and use it normally, making sure that it still behaves normally.
If all goes well you should be finished.
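The same steps, sketched with a Unix symlink standing in for the NTFS junction (on Windows you would use mklink /J or the Sysinternals Junction tool; all paths here are made up):

```shell
# 1-2. The app's fixed data dir, plus the desired alternate location.
mkdir -p /tmp/fixed_location/fooapp /tmp/usb_drive/appdata
echo "user settings" > /tmp/fixed_location/fooapp/config.txt

# 4-5. Move the directory's contents to the alternate location and
#      remove the original directory (mv does both here in one step).
mv /tmp/fixed_location/fooapp /tmp/usb_drive/appdata/fooapp

# 6. Replace the original directory with a link to the new home.
ln -s /tmp/usb_drive/appdata/fooapp /tmp/fixed_location/fooapp

# 7. The app still sees its files at the old path.
cat /tmp/fixed_location/fooapp/config.txt   # prints "user settings"
```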
Here is a link to a junction creator (for older versions of Windows (TM)):
http://technet.microsoft.com/en-us/sysinternals/bb896768
HTH
This response from Matthew Scharley directly answers how this can be done for redcar.
For the moment it is hard coded. Thankfully it is easy to change:
https://github.com/redcar/redcar/blob/master/lib/redcar.rb#L211