I created a basic Express app:
const express = require('express');
const app = express();

app.get('*', (req, res) => {
  res.contentType('html');
  res.send('HELLO FROM WSL');
});

const port = 80; // binding port 80 usually requires elevated privileges
app.listen(port);
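To start it, a minimal sketch (my own commands, not from the original post; assumes the code is saved as app.js and Express is installed in the project directory):
npm install express        # once, in the project directory
sudo node app.js           # sudo because port 80 is privileged
curl http://localhost/     # from another WSL shell; should print HELLO FROM WSL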
Then I added the following entry to c:\windows\system32\drivers\etc\hosts:
127.0.0.1 custom.local
Then I shut down WSL with wsl --shutdown and reopened it to start my Express app.
If I check the hosts file from WSL (cat /etc/hosts), I get the following result:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 LAPTOP-ZECKA.localdomain LAPTOP-ZECKA
127.0.0.1 custom.local
Then I go to http://custom.local through Chrome in Windows, but it doesn't display my Express app. (If I run Express on Windows instead of WSL, it works fine.)
What's wrong with my hosts file?
Finally I found a solution on GitHub: https://github.com/microsoft/WSL/issues/5728#issuecomment-917295590
Instead of declaring the domain like this:
127.0.0.1 custom.local
I declare it as follows:
127.0.0.1 custom.local
::1 custom.local localhost
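Presumably the browser resolves custom.local to ::1 first, so without the IPv6 mapping the request never reaches the port forwarded from WSL. A quick check of which address actually answers (a sketch; run from a Windows shell that has curl):
curl http://127.0.0.1/    # IPv4 path
curl "http://[::1]/"      # IPv6 path; if only this prints HELLO FROM WSL, the ::1 entry did the trick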
Previously, I had my WSL in version 1 and everything worked fine. Then I decided to upgrade to WSL 2 and - similar to many others - I lost internet connection. Thankfully, I could easily bring it back by running:
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
Apparently, it was just a DNS issue. I could live with that. However, then I restarted my WSL and the nameserver was set back to 172.31.208.1. So I decided to do exactly what was written in the comments in /etc/resolv.conf:
This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
[network]
generateResolvConf = false
And ran the following lines:
sudo bash -c 'echo "[network]" > /etc/wsl.conf'
sudo bash -c 'echo "generateResolvConf = false" >> /etc/wsl.conf'
I was proud that I resolved the issue so elegantly until I restarted my WSL again. Now resolv.conf was shown in red (a dangling symlink) and not accessible from Ubuntu. When I tried to access it from Windows, I saw just an empty file.
Expected behavior
After restarting WSL, resolv.conf keeps the user-defined values.
Actual behavior
After restarting WSL, resolv.conf is empty or not accessible at all.
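A likely explanation, consistent with the symptoms: /etc/resolv.conf is a symlink to a file that WSL no longer generates once generateResolvConf = false is set, so the link dangles. The commonly cited fix (my sketch, not part of the original question) is to replace the symlink with a regular file:
sudo rm /etc/resolv.conf                                      # drop the dangling symlink
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'   # create a real file instead
sudo chattr +i /etc/resolv.conf                               # optional: keep anything from rewriting it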
Related
We have a VirtualBox (Vagrant) environment. By mistake I made a bad entry in /etc/security/limits.conf (without having a root shell open :( ) and now I am unable to SSH in (the connection drops immediately).
We had one such scenario previously (limits misconfigured by someone else) and were able to fix it by overwriting limits.conf with the vboxmanage guestcontrol copyto CLI, after which SSH was allowed again. This time around the vboxmanage CLI also hangs.
I tried opening the VM in the GUI, went to the console and tried a few options, but could not get to single-user mode.
Since you already tried the VirtualBox CLI and the commands hang, even VirtualBox cannot access the system or get a shell open.
In this case you will have to bring up an Ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a very simple Ubuntu VM using hashicorp's bionic64 on the same host machine by executing the following steps:
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change the config.vm.box = "base" to config.vm.box = "hashicorp/bionic64"
Also mount the host folder where the .vdi file for the VM is located by adding the following line to the Vagrantfile (replace the file path with the correct one for your system; here /nbd2 will be created on the Ubuntu machine and will contain the files, including the .vdi file):
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
Now do vagrant up
Once the machine boots up:
vagrant ssh #to ssh as vagrant
sudo su #to become root
apt-get update #This will refresh the apt cache
apt-get install qemu
modprobe nbd (loads the nbd kernel module; exits without any output on success)
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" (change the path to whatever you gave in the config.vm.synced_folder property)
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot
cd /mnt/vdi-boot/etc/security (this folder will have all the files as they were in your VM)
touch limits.conf (if the file is already there, delete it first and recreate it empty)
chmod 644 limits.conf
chown root:root limits.conf
Open the /mnt/vdi-boot/etc/nsswitch.conf file and check that the following three lines are present:
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot (unmounts the mounted path)
qemu-nbd -d /dev/nbd1 (disconnects from qemu-nbd)
Exit the Ubuntu VM and start your original VM.
Open another shell and try to ssh. It should go through fine this time.
Is there a way to use the VSCode Remote SSH extension to interact with a remote host that does not allow outbound internet connections?
Is it possible to download the vscode-server files from another system and copy to host?
I read this, but I can't connect the server to the internet.
When you connect to a host it executes a bash script that wgets or curls a tarball and extracts it into a directory in your home directory. Here's an offline workaround.
Attempt to connect, let it fail
On the server, get the commit id:
$ ls ~/.vscode-server/bin
553cfb2c2205db5f15f3ee8395bbd5cf066d357d
Download the tarball, replacing $COMMIT_ID with the commit id from the previous step.
For Stable Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable
For Insider Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/insider
Move the tarball to ~/.vscode-server/bin/$COMMIT_ID/vscode-server-linux-x64.tar.gz
Extract the tarball in this directory:
$ cd ~/.vscode-server/bin/$COMMIT_ID
$ tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1
Connect again
You'll still need to install any extensions manually. There's a download button next to all the extensions in the marketplace. Once you have the .vsix file you can install them through the GUI with the Install from VSIX option in the extensions manager.
This is kind of a pain and hopefully they improve this process, but if you have a network-based home directory, you only have to do this once.
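Pulled together, a sketch automating the steps above from a machine that has both internet access and SSH access to the server (COMMIT_ID and the host alias are values you substitute):
COMMIT_ID=553cfb2c2205db5f15f3ee8395bbd5cf066d357d   # from ls ~/.vscode-server/bin on the server
REMOTE=user@offline-host                             # hypothetical SSH alias
curl -L "https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable" -o vscode-server-linux-x64.tar.gz
ssh "$REMOTE" "mkdir -p ~/.vscode-server/bin/$COMMIT_ID"
scp vscode-server-linux-x64.tar.gz "$REMOTE:.vscode-server/bin/$COMMIT_ID/"
ssh "$REMOTE" "cd ~/.vscode-server/bin/$COMMIT_ID && tar -xzf vscode-server-linux-x64.tar.gz --strip-components 1"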
Open VS Code -> About:
Version: 1.46.1
Commit: cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
Date: 2020-06-17T21:17:14.222Z
Electron: 7.3.1
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 17.7.0
$COMMIT_ID = cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
A new feature is being added to support offline install
However, you can now solve this issue by a new user setting in the Remote - SSH extension. If you enable the setting remote.SSH.allowLocalServerDownload, the extension will install the VS Code Server on the client first and then copy it over to the server via SCP.
Note: This is currently an experimental feature but will be turned on by default in the next release
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks
As a workaround I have done the following (the single-argument RemoteForward below sets up reverse dynamic SOCKS forwarding, available since OpenSSH 7.6):
Desktop ~/.ssh/config
...
Host *
RemoteForward 54321
...
Remote: ~/bin/wget, where ~/bin is added to PATH via .bashrc:
#!/bin/bash
export LD_LIBRARY_PATH=$HOME/opt/lib/tsocks/
export TSOCKS_CONF_FILE=$HOME/opt/tsocks/tsocks.conf
$HOME/bin/tsocks /usr/bin/wget "$@"
Remote: ~/opt/tsocks/tsocks.conf
server = 127.0.0.1
server_port = 54321
server_type = 5
Note: the tsocks binary has been scp-ed to ~/bin/tsocks, and ~/opt/lib/tsocks/ has been created with libtsocks.so, which normally lives at /usr/lib64/libtsocks.so.
This workaround gives me wget functionality without messing with anything outside my profile (e.g. no root required... even though I have it).
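A quick way to confirm the tunnel works once an SSH session with the RemoteForward is open (the URL is just an example):
wget -O /dev/null https://example.com && echo "tunnel works"   # goes through the wrapper and the reverse SOCKS tunnel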
Current Version of VS Code: 1.48.2
I just kill the wget process on the server end and let the client download the archive and transfer it to the server. It's quite easy, as below.
Make sure that you set this in settings.json:
"remote.SSH.allowLocalServerDownload": true,
Execute the shell commands below:
# to find the <pid>
ps aux | grep wget | grep vscode-server
# kill the process
kill -9 <pid>
# then wait for the client downloading and transferring
# optional: If you want to know the progress, just
cd ~/.vscode-server/bin/<commit-id>/
watch -n 1 -d ls -rthl
On my GitHub I'm creating a little fork of a minimal Debian Docker image. It's actually 5 images, each building on the previous:
debian-base-minimal
debian-base-standard
debian-base-security
debian-base-apache
debian-base-apache-php
In debian-base-apache I want a working env variable which I can define later in a docker-compose file. What should the env do?
If defined via docker-compose, it should write ServerName $SERVER_NAME at the end of /etc/apache2/apache2.conf to set a global ServerName. If it is empty, no new line should be written.
But why should it write nothing when it's empty? Because building the Dockerfile into an image shouldn't bake the SERVER_NAME in.
I already tried something like:
echo "ServerName $SERVER_NAME" >> /etc/apache2/apache2.conf
in my 040-debian-base-apache file. But at build time it wrote a bare ServerName, because I hadn't defined a value and it used null. If I set a default in the Dockerfile (ENV SERVER_NAME=127.0.0.1), the image is built with 127.0.0.1 and I can't change it via the variable anymore, because the value is already filled in.
Output of the build with ENV SERVER_NAME=127.0.0.1 defined in the Dockerfile (currently not in the repo):
[...]
+ echo 'ServerName 127.0.0.1'
+ /etc/init.d/apache2 stop
Stopping Apache httpd web server: apache2.
ok.
+ /etc/init.d/apache2 start
Starting Apache httpd web server: apache2.
ok.
[...]
It would be okay if it defaulted to 127.0.0.1, since Apache can start that way. But I can't override it in docker-compose.yml now, because 127.0.0.1 is hardcoded and not the output of a variable.
Output of the build with no ENV defined in the Dockerfile (the current repo version):
[...]
+ echo 'ServerName '
+ /etc/init.d/apache2 stop
Stopping Apache httpd web server: apache2.
+ /etc/init.d/apache2 start
Starting Apache httpd web server: apache2 failed!
The apache2 configtest failed. ... (warning).
Output of config test was:
AH00526: Syntax error on line 228 of /etc/apache2/apache2.conf:
ServerName takes one argument, The hostname and port of the server
[...]
Can anybody help me get this working? It would be nice to understand how it works.
Many thanks in advance.
As you've observed, every RUN command in the Dockerfile happens at docker build time, and in particular the contents of the file will be fixed based on what the environment variable was when you ran the build. You want it to change based on the runtime value of the variable, which means you need to write a script that runs at startup to do this.
A typical approach is to write an ENTRYPOINT script that does the first-time setup. The ENTRYPOINT gets passed the CMD (or whatever command got passed into docker run) as command-line arguments, so if it ends with exec "$@", the last thing it does is launch the "normal" command. You can use any ordinary shell script logic here, so I might write:
#!/bin/sh
if [ -n "$SERVER_NAME" ]; then
echo "ServerName $SERVER_NAME" >> /etc/apache2/apache2.conf
fi
exec "$#"
Then you can provide this in your Dockerfile
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apachectl", "-DFOREGROUND"]
(The chmod isn't necessary if you can guarantee the file has execute permissions on your non-Windows host. The ENTRYPOINT must use the JSON-ish form. If you have another image that builds on top of this, remember that the combined image gets only one ENTRYPOINT and one CMD; the very deep stack of images you suggest is a pretty unusual setup.)
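To make the runtime side concrete, a minimal docker-compose.yml sketch (service name and hostname are placeholders; the entrypoint above appends the ServerName line when the container starts):
services:
  web:
    image: debian-base-apache
    environment:
      SERVER_NAME: custom.example.com
    ports:
      - "80:80"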
I have a vagrant box with CentOS7 running under KVM/QEMU (libvirt) on my Fedora 29 host. vagrant up works fine. vagrant ssh fails with:
/usr/share/vagrant/gems/gems/vagrant-2.1.2/lib/vagrant/util/safe_exec.rb:39:
in `exec': Permission denied - /home/username/bin/ssh (Errno::EACCES)
The docs say: Vagrant will attempt to use the local SSH client installed on the host machine. However, which ssh correctly returns /usr/bin/ssh. So why doesn't Vagrant use it?
The directory (yes, a directory!) /home/username/bin/ssh was on the PATH (via /home/username/bin) when the box was created, and Vagrant seems to have stored this information somewhere. Removing the directory from PATH didn't help. Only when I rename or remove the directory does vagrant ssh work.
Can anyone tell me where Vagrant stored the wrong info?
Edit: The Vagrantfile is nearly empty; only config.vm.box is set...
Guess I found the reason - it seems to be a bug or strange behavior of the Vagrant version 2.1.2 that I use:
I still had the directory /home/username/bin in PATH. Vagrant seems to list all entries in every directory on PATH while looking for ssh, finds the subdirectory /home/username/bin/ssh, and doesn't realize that it is a directory...
After removing /home/username/bin the command vagrant ssh works as expected. So unless Vagrant is improved, I have to permanently rename my /home/username/bin/ssh directory...
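What likely makes the lookup go wrong (my reading, not confirmed against Vagrant's source): an executability test such as -x also succeeds for directories, because execute permission on a directory means "searchable". A minimal shell sketch with made-up paths:
mkdir -p /tmp/demo/ssh                                # a DIRECTORY named ssh
test -x /tmp/demo/ssh && echo "looks executable"      # prints "looks executable"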
In my Vagrant environment I have a guest Ubuntu Virtualbox with a LAMP with default settings.
I have my source code on the host machine in the same folder as my Vagrantfile, so on the guest Ubuntu I can access the files in the mounted /vagrant dir like this:
/vagrant
/mysite
/index.php
/Vagrantfile
Now in my Apache config I add a line
Alias /mysite /vagrant/mysite
After reloading the config and restarting Apache I can go to localhost:8558/mysite/index.php and it works.
The problem is that when I reload the VM with vagrant reload, it starts the Apache service before mounting the /vagrant folder. So Apache can't find the aliased dir and fails to start; I have to start it manually then.
My question is - is there a way to delay Apache start so that it starts after the mounting?
Update: As a workaround I added a script to the crontab that starts Apache 30 seconds after boot, as described here. But I wonder if there is a better solution.
While upstart probably is a valid option, I had several issues using it with Vagrant. I had to run several tasks as a privileged user, which I did not manage to get working with upstart.
Starting from version 1.6.0 (May 6, 2014), Vagrant provides the option to run a specific provisioner every time, including after booting a halted VM with vagrant up.
In your Vagrantfile, add:
# a file, eg after-boot.sh
config.vm.provision "shell", path: "after-boot.sh", run: "always"
# or just inline
config.vm.provision "shell", inline: "service apache2 restart", run: "always"
Note the run: "always"; this will force Vagrant to always run the provisioner. Obviously it works just as well with any other provisioning system like Chef or Puppet.
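For context, a minimal Vagrantfile sketch with the always-run provisioner in place (the box name is just an example); provisioners run after synced folders are mounted, which is exactly what the question needs:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # runs on every vagrant up / vagrant reload, after /vagrant is mounted
  config.vm.provision "shell", inline: "service apache2 restart", run: "always"
end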
I would like to add a little to Zauberfisch's answer (Apache fails to start on Vagrant)
What needed to happen was that the command had to be run as a superuser (sudo), so this was the command that was needed:
`config.vm.provision "shell", inline: "sudo service apache2 restart", run: "always"`
The reason this didn't work for you without sudo appears to be that Vagrant tries to run the command without /usr/sbin in PATH. For me, this worked just as well:
`config.vm.provision "shell", inline: "/usr/sbin/service apache2 restart", run: "always"`
If upstart is installed (as on Ubuntu), Vagrant emits a "vagrant-mounted" event. See https://serverfault.com/a/568033/179583 to get the idea. In your script you can (re)start the Apache server.
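A minimal upstart job along those lines (the file and job name are my own sketch, not from the linked answer):
# /etc/init/restart-apache-on-mount.conf
start on vagrant-mounted
task
exec service apache2 restart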
Btw, I have a feeling that newer Apache versions just warn, but still start even if the doc root doesn't exist. The same with nginx.