I'm trying to run my static web app using Windows Subsystem for Linux (WSL 2), but I can't figure out where on my computer I should store the git repository to be able to run it reasonably quickly. I have tried storing it under /mnt/c/{workfolder}, but it takes several minutes to start up (using npm run start), and I have to rerun it to see any changes. This is useless when I'm trying to work.
I have also tried storing it in /mnt/wsl/{workfolder}, and in that case it starts up quickly and I can see my changes without rerunning the app. However, it seems to disappear when I restart my computer.
Where should I store the git repository to be able to run the app quickly and see changes without rerunning? I'm assuming there's something I'm not understanding here, so please help me out if you know.
You'll want it somewhere on the ext4 partition of the WSL distribution. Typically, the best place is going to be under your WSL /home/<username> folder.
I would recommend:
mkdir ~/src
# or
mkdir ~/projects
# or something similar
Then create subdirectories for each project in that directory.
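For example, something along these lines (the repo URL and folder names are just placeholders):
mkdir -p ~/src
cd ~/src
git clone <your-repo-url> my-app
cd my-app
npm install
npm run start
With the repository on ext4, inotify works, so the dev server should pick up changes without a restart.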
Why the others don't work:
/mnt/c is the Windows C: drive. That drive is mounted into WSL2 using the 9P network file system, and yes, it's (a) slow, and (b) does not support inotify, so apps cannot register for notifications of changes to files.
/mnt/wsl is a tmpfs mount. It's really there for holding things that need to be shared between all running WSL instances. The auto-generated resolv.conf that you see there is one of those things. You can also use it for copying a file from one WSL distribution to another -- Simply copy the file to /mnt/wsl, start another WSL distribution, and copy or move the file out.
But yes, all tmpfs mounts are ephemeral: their contents are lost when the last WSL2 distribution/instance terminates.
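For example, a quick sketch of that copy trick (the file name and distro names are made up):
# inside the first distro (e.g. Ubuntu)
cp ~/notes.txt /mnt/wsl/
# inside the second distro (e.g. Debian), while the WSL2 VM is still running
mv /mnt/wsl/notes.txt ~/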
Related
I'm trying to find a good cross-platform way to deploy an npm project to a remote server over ssh (or another method). I'm specifically looking for something that copies the files over while respecting the .gitignore: not copying files that are in .gitignore, preserving ignored files that already exist on the remote server, and pruning extraneous files.
Notably as a consequence of this requirement, this should neither copy node_modules nor clobber remote node_modules.
The idea is to get the source code to the server this way, and then execute commands over ssh to build it on the server, copy the dist into the appropriate location on the server, and run any other deploy steps.
I already have something that works fairly well. I set up a git repo on my server that I have a remote to locally, and I push my local changes to that remote. A post-receive hook then takes effect and copies the source to where I need it, similar to what this describes.
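A post-receive hook of that kind typically boils down to something like this (the path and branch name are placeholders):
#!/bin/sh
# hooks/post-receive in the bare repo on the server; the target directory must already exist
GIT_WORK_TREE=/srv/myapp/src git checkout -f main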
This works pretty nicely, but it kind of falls apart when I want to deploy without fully committing everything, and it also feels somewhat fragile. I use a fairly complex local script to check out a new branch, commit all working changes, and push it, but it fails in certain cases, such as when there are untracked files.
Pardon the lengthy context. tl;dr: I'm looking for other options to do this sort of deploy. It seems like rsync would be a natural candidate, and I've looked into the npm rsync package, but its Windows support doesn't seem great, requiring Cygwin. I've also considered copying manually with scp and leveraging a library to parse the .gitignore, but I'd like to preserve node_modules on the server (so it doesn't have to redownload everything), so I can't just overwrite the directory.
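For what it's worth, an rsync invocation along these lines would cover those requirements (host and paths are placeholders). The ':- .gitignore' filter makes rsync read exclude patterns from each .gitignore it finds, and with --delete, excluded paths such as node_modules are neither copied nor deleted on the server:
rsync -av --delete --exclude '.git/' --exclude 'node_modules/' --filter=':- .gitignore' ./ deploy@example.com:/srv/myapp/src/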
Any ideas?
I have two Linux partitions on my laptop (one Ubuntu and one Garuda). Ubuntu was giving me problems, so I installed Garuda to check it out. The Garuda partition filled up, so I used KDE Partition Manager to shrink the Ubuntu partition so I could expand the Garuda one.
Then Ubuntu would not mount or boot; it said the filesystem was the wrong size. I ran fsck on the partition and hit yes to pretty much everything. This included force-rewriting blocks it said it couldn't reach, removing inodes, etc. Probably a mistake in hindsight.
Now I have an external hard drive and have cloned the Ubuntu partition using "sudo dd if=/dev/nvme0n1p5 of=/dev/sda1 conv=noerror,sync". The external hard drive mounted without problems, but it does not have a /home folder, only folders such as /etc/.
I don't think there are many files I can't get back from a git repo, but it would be nice to have access to the /home folder so I can grab everything, remove the Ubuntu partition, and resize Garuda.
Thanks in advance!
I figured it out. I roughly followed https://unix.stackexchange.com/questions/129322/file-missing-after-fsck, with one difference:
I copied the partition to an external drive using dd. Then I mounted the external drive (which just worked, even though I could not mount the original Ubuntu partition). Then I went into the lost+found folder on that partition and used "find" to search for a file I knew I had in my home folder, and it found that file. I am now able to access all my documents etc.
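Roughly, the steps look like this (the mount point and the file name searched for are placeholders):
sudo mkdir -p /mnt/recovery
sudo mount /dev/sda1 /mnt/recovery
sudo find /mnt/recovery/lost+found -name 'some_known_file*'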
I'm running Ubuntu and have a remote CentOS system which stores (and has access to) various files and network locations. I have SSH access to the CentOS machine and want to be able to work locally on Ubuntu.
I'm trying to mirror a remote directory structure. The remote directory is structured:
/my_data/user/*
And I want to replicate this structure locally (a lot of scripts rely on absolute paths).
However, for reasons of speed, I want a certain subfolder, for example:
/my_data/user/sourcelibs/
To be stored locally on disk. I know the sourcelibs subfolder doesn't change much (but the rest might). So I can comfortably rsync it:
mkdir -p /my_data/user/sourcelibs/
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /my_data/user/sourcelibs/
My question is, if I use sshfs to mount /my_data/user:
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
Will it overwrite my existing files? Is there a way to have sshfs mount but exclude certain subfolders?
Yes, in effect: once sshfs is mounted on /my_data/user, you will only see the remote contents there; your existing local files are hidden (shadowed) until you unmount, though not actually deleted. I have almost the same use case and just tested this myself. BTW, you'll need to add -o nonempty to your sshfs command since the destination dir /my_data/user already exists.
What I found to work is to make a copy of the remote directory excluding the large subdirectories. I don't know if keeping two copies in sync on the remote machine is feasible for your use case, but if you'll mostly be updating on your local machine and rarely making changes remotely, that could work.
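Roughly what I mean, run on the remote machine (the _trimmed path is made up):
# maintain a copy of the tree that omits the big subfolder; re-run this to keep it in sync
rsync -a --delete --exclude 'sourcelibs/' /my_data/user/ /my_data/user_trimmed/
# then point sshfs at /my_data/user_trimmed instead of /my_data/user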
I am trying to set up testing of the repository using travis-ci.org and Docker. However, I couldn't find any documentation about what the policy is on disk usage and caching of large files.
To perform a set of tests (test.sh) I need a set of input files to run on, which are very big (up to 1 GB, 500 MB on average).
One idea is to wget the files directly in the test.sh script, but it would be inefficient to download the input files again for every test run.
The other idea is to create a separate Dockerfile containing the test files and mount it as a drive, but it would not be nice to push such a big Docker image to the public registry.
Is there a general prescription for such tests?
Have you considered using Travis File Cache?
You can write your test.sh script so that it only downloads a test file if it is not already present on the local file system.
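For example, a sketch of that check inside test.sh (the cache path, file name, and URL are placeholders; the cache directory is the one you would list under cache: directories: in .travis.yml):
CACHE_DIR="$HOME/test-data"
mkdir -p "$CACHE_DIR"
if [ ! -f "$CACHE_DIR/input.dat" ]; then
  wget -O "$CACHE_DIR/input.dat" "https://example.com/test-data/input.dat"
fi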
In your .travis.yml file, you specify which directories should be cached after a successful build. Travis will automatically restore that directory and files in it at the beginning of the next build. As your test.sh script will then notice the file exists already, it will simply skip the download and your build should be a little faster.
Note that the Travis cache works by creating an archive file and putting it on cloud storage, from which it has to be downloaded again at the start of later builds. However, the assumption is that the network traffic will likely stay inside that "cloud", potentially even in the same data center. This should still give you some benefit in terms of build time and lower use of resources in your own infrastructure.
I'm trying to run an application using Vagrant. I have a directory containing the app's codebase and the .vagrant dir that is created there after initialization. It looks like this:
[app_codebase_root]/.vagrant/machines/default/virtualbox
There is a very short manual about what to do (https://github.com/numenta/nupic/wiki/Running-Nupic-in-a-Virtual-Machine), and I stopped at point 9, where it says:
9) Expose [app] codebase to the vagrant instance... If you have the
codebase checkout out, you can copy or move it into the current
directory...
So it's not clear to me what to copy and where. Does it mean some place within the Vagrant setup (if yes, which exactly?) or somewhere else? Or should I just run vagrant ssh now?
From the Vagrant documentation:
By default, Vagrant will share your project directory (the directory with the Vagrantfile) to /vagrant.
So you should find your codebase root under /vagrant on your guest.
This is always going to be a little confusing, so you need to separate the concepts of the host system and the VM.
Let's say the shared directory (the one with the Vagrantfile) is [something]/vagrant on your host system. Copy your app directory to [something]/vagrant/nupic (or run git clone in that directory) while still in Windows. Check using Windows Explorer that you see all the source files.
In a console window, cd to [something]/vagrant and run vagrant ssh.
You are now in the VM, so everything is now the VM's filesystem. Your code is now in /vagrant/nupic. Edit .bashrc as per the instructions to point to this directory, and run the build commands.
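Putting it together, the sequence looks roughly like this (the nupic checkout location inside the shared folder is just an example):
# on the host, in the directory that contains the Vagrantfile
git clone https://github.com/numenta/nupic.git nupic
vagrant up        # if the VM is not already running
vagrant ssh
# now inside the VM
ls /vagrant/nupic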