How do you set up Caddy Server to work within WSL? - authentication

Context
I'm trying to learn how secure cookies work, using a very simple Node.js server that returns basic HTML over localhost:3000. I was told I would need to set up a reverse proxy to get SSL working, so that my development environment better mimics production. I realize I could learn how secure cookies work over localhost without a reverse proxy, but I wanted the challenge of getting SSL set up in a development environment.
Setup
I personally prefer to develop in WSL (WSL 2, Ubuntu 20.04), so naturally I set up the Node server in WSL along with Caddy Server, using the following configuration provided by the Level Up Tutorials course I'm using to learn web authentication. I started Caddy by running caddy run in the directory containing this Caddyfile.
{
	local_certs
}

nodeauth.dev {
	reverse_proxy 127.0.0.1:3000
}
The following were the startup logs for the Caddy Server
2021/07/01 00:53:18.253 INFO using adjacent Caddyfile
2021/07/01 00:53:18.256 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile", "line": 2}
2021/07/01 00:53:18.258 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/07/01 00:53:18.262 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003c5260"}
2021/07/01 00:53:18.281 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/07/01 00:53:18.281 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2021/07/01 00:53:18.450 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2021/07/01 00:53:18.451 INFO http enabling automatic TLS certificate management {"domains": ["nodeauth.dev"]}
2021/07/01 00:53:18.452 INFO tls cleaning storage unit {"description": "FileStorage:/home/rtclements/.local/share/caddy"}
2021/07/01 00:53:18.454 INFO tls finished cleaning storage units
2021/07/01 00:53:18.454 WARN tls stapling OCSP {"error": "no OCSP stapling for [nodeauth.dev]: no OCSP server specified in certificate"}
2021/07/01 00:53:18.456 INFO autosaved config (load with --resume flag) {"file": "/home/rtclements/.config/caddy/autosave.json"}
2021/07/01 00:53:18.456 INFO serving initial configuration
I also added 127.0.0.1 nodeauth.dev to the hosts file in WSL at /etc/hosts. Below is the resulting file.
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 MSI.localdomain MSI
192.168.99.100 docker
127.0.0.1 nodeauth.dev
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
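A quick way to confirm the mapping took, sketched against an inline copy of the entries above (point hosts_file at /etc/hosts to check the real file):

```shell
# Check which address a hosts file maps a name to, skipping comment lines
# and scanning every alias column on each line.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 localhost
127.0.1.1 MSI.localdomain MSI
192.168.99.100 docker
127.0.0.1 nodeauth.dev
EOF
mapped=$(awk '!/^#/ { for (i = 2; i <= NF; i++) if ($i == "nodeauth.dev") print $1 }' "$hosts_file")
echo "nodeauth.dev -> $mapped"
rm -f "$hosts_file"
```

If this prints nothing, the entry never made it into the file the resolver is actually reading.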
Problem
Interestingly enough, I was able to hit the Node server from my browser by navigating to localhost:3000 (as expected), and I was able to hit Caddy's admin endpoint by navigating to localhost:2019. The following was the log output from Caddy when I hit it.
2021/07/01 00:53:32.224 INFO admin.api received request {"method": "GET", "host": "localhost:2019", "uri": "/", "remote_addr": "127.0.0.1:34390", "headers": {"Accept":["*/*"],"User-Agent":["curl/7.68.0"]}}
I am not, however, able to see the HTML from my Node server in my browser by navigating to nodeauth.dev. Nor do I see any output from running curl nodeauth.dev in my WSL console, whereas I get the expected output when I run curl localhost:3000, also in WSL. Why is this?
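When curl prints nothing at all, the failure is usually at the connect or TLS step rather than in the site config, and curl's --resolve flag can separate "the name does not resolve" from "nothing is listening on that port" without touching /etc/hosts. A sketch of the idea against a throwaway local server (example.test, port 3000, and the Python server are stand-ins for illustration, not the real setup):

```shell
# Start a disposable HTTP server on 127.0.0.1:3000, then ask curl to treat
# example.test as 127.0.0.1 for this one request. If this works but the
# plain hostname does not, the problem is name resolution, not the listener.
python3 -m http.server 3000 --bind 127.0.0.1 >/dev/null 2>&1 &
srv_pid=$!
sleep 1
response=$(curl -s --resolve example.test:3000:127.0.0.1 http://example.test:3000/)
kill "$srv_pid" 2>/dev/null
echo "$response" | head -n 1
```

Adding -v to the curl invocation shows exactly which step (resolve, connect, TLS) fails.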
I figured it had something to do with the hosts file on Windows not including this mapping. So I tried modifying that file to look like this, but I still couldn't get it to work.
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.168.99.100 docker
127.0.0.1 nodeauth.dev
I tried running a PowerShell script, which I only partly understand, that I found from here, but that didn't work either, and afterwards I was no longer able to access localhost:3000. I'm guessing it does some form of port forwarding. Code below.
If (-NOT ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    $arguments = "& '" + $myinvocation.mycommand.definition + "'"
    Start-Process powershell -Verb runAs -ArgumentList $arguments
    Break
}

# create the firewall rule to let in 443/80
if( -not ( get-netfirewallrule -displayname web -ea 0 )) {
    new-netfirewallrule -name web -displayname web -enabled true -profile any -action allow -localport 80,443 -protocol tcp
}

# grab the WSL 2 VM's eth0 IPv4 address
$remoteport = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteport -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ){
    $remoteport = $matches[0];
} else {
    echo "The script exited; the IP address of WSL 2 cannot be found";
    exit;
}

# forward ports 80 and 443 from the Windows host to the WSL 2 VM
$ports=@(80,443);   # @(80,443) is a PowerShell array; '#(80,443)' would comment the list out and leave $ports empty
iex "netsh interface portproxy reset";
for( $i = 0; $i -lt $ports.length; $i++ ){
    $port = $ports[$i];
    iex "netsh interface portproxy add v4tov4 listenport=$port connectport=$port connectaddress=$remoteport";
}
iex "netsh interface portproxy show v4tov4";
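For what it's worth, the IPv4 extraction the script performs on the ifconfig output can be sketched in plain shell; the sample line below is made up for illustration, not a real address:

```shell
# Pull the first dotted-quad out of an ifconfig 'inet' line, the same way
# the PowerShell regex does. head -n 1 keeps the address and discards the
# netmask and broadcast values that also match the pattern.
sample="        inet 172.20.240.5  netmask 255.255.240.0  broadcast 172.20.255.255"
wsl_ip=$(printf '%s\n' "$sample" | grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' | head -n 1)
echo "$wsl_ip"
```

That extracted address is what the script hands to netsh as the portproxy connectaddress.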
The only thing that worked was running Caddy Server on Windows with the same configuration and the same hosts-file changes as shown above. I'm not terribly sure what's going on here, but would any of y'all happen to know?

Related

SSH server and localhost

I tried to install an SSH server on WSL; it never worked. So I installed my SSH server on my laptop instead and tried to connect, but that doesn't work either. It works from my phone on 4G, and from everything except my own computer locally.
I get this error every time, with both WSL Debian and Windows:
ssh: connect to host localhost port 22: Connection refused
Check first this OpenSSH Windows installation guide:
It includes a network configuration:
Allow incoming connections to SSH server in Windows Firewall:
When installed as an optional feature, the firewall rule “OpenSSH SSH Server (sshd)” should have been created automatically.
If not, proceed to create and enable the rule as follows.
Either run the following PowerShell command as the Administrator:
New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH SSH Server' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22 -Program "C:\System32\OpenSSH\sshd.exe"
Replace C:\System32\OpenSSH\sshd.exe with the actual path to sshd.exe (C:\Program Files\OpenSSH\sshd.exe, had you followed the manual installation instructions above).
or go to Control Panel > System and Security > Windows Firewall > Advanced Settings > Inbound Rules and add a new rule for port 22.
Then you can check if at least your SSH daemon can receive anything.
The OP SRP adds in the discussion:
The problem turned out to be another machine with the same IP address as the server.
I cheated and it works: I used a VPN.

Trying to get SELinux to allow Apache to run an executable that uses a port

I am trying to get Apache to run a bash script which uses ffmpeg to take snapshots from an mp4 stream. I get an "Input/Output" error where ffmpeg is blocked from accessing port 80.
I've gotten Apache to run ffmpeg; it just seems to get blocked on port access.
I assume it's an SELinux permission problem where ffmpeg needs special permissions to access port 80 (or whatever port it is) when run by Apache.
The script runs fine from the command line; it's just launching it remotely that dies.
Thanks for your help!
sudo semanage port -l | grep http_port
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
pegasus_http_port_t tcp 5988
ls -Z /usr/bin/ffmpeg
-rwxr-xr-x. root root system_u:object_r:bin_t:s0 /usr/bin/ffmpeg
From /var/log/audit/audit.log:
type=AVC msg=audit(1502245154.609:23912): avc: denied { name_connect } for pid=12043 comm="ffmpeg" dest=80 scontext=system_u:system_r:httpd_sys_script_t:s0 tcontext=system_u:object_r:http_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1502245154.609:23912): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=1775f00 a2=10 a3=7ffd7a6af0d0 items=0 ppid=12041 pid=12043 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="ffmpeg" exe="/usr/bin/ffmpeg" subj=system_u:system_r:httpd_sys_script_t:s0 key=(null)
Running Red Hat Enterprise Linux 7.4
Solved using https://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01
Created a policy specific to the access requirements (Step 7 in the documentation).

Firewalld seems to be blocking connecting to my CouchDB 2.0

I'm trying to set up a CouchDB 2.0 instance up on my CentOS 7 server.
I've got it installed and running as a systemd service, and it responds with its friendly hello-world message when I access it from the server using 127.0.0.1 or 0.0.0.0:
$ curl 127.0.0.1:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
$ curl 0.0.0.0:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
In my local.ini file I've configured the bind_address to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
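One thing that may be worth double-checking, assuming a stock CouchDB 2.0 install: the clustered API on port 5984 reads its bind address from the [chttpd] section of local.ini, not from [httpd]. If only [httpd] is set, 5984 can stay bound to 127.0.0.1, which would also produce Connection refused from outside regardless of the firewall. A hedged local.ini sketch:

```ini
; the section that governs the port 5984 listener in CouchDB 2.x
[chttpd]
bind_address = 0.0.0.0
```

Restart the couchdb2 service after the change, then check which address 5984 is bound to before retesting from the laptop.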
My understanding was that with this bind address I could connect to port 5984 from any IP address allowed through my firewall.
I'm using firewalld for my firewall, and I've configured it to open port 5984.
This config is confirmed by listing the configuration of the public zone:
$ sudo firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: couchdb2 dhcpv6-client http https ssh
ports: 443/tcp 5984/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
I've also created a service called couchdb2 at /etc/firewalld/services/couchdb2.xml with this XML:
<service>
  <short>couchdb2</short>
  <description>CouchDB 2.0 Instance</description>
  <port protocol="tcp" port="5984"/>
</service>
From what I know about firewalld, I should be able to receive connections on 5984 now. But when I curl from my laptop, the connection is refused:
$ curl my-server:5984 --verbose
* Rebuilt URL to: my-server:5984/
* Trying <my-ip>...
* connect to <my-ip> port 5984 failed: Connection refused
* Failed to connect to my-server port 5984: Connection refused
* Closing connection 0
When I connect to the CouchDB instance locally via either 127.0.0.1 or 0.0.0.0, I can see the 200 response in my CouchDB log:
$ sudo journalctl -u couchdb2
...
[notice] 2017-06-06T00:35:01.159244Z couchdb#localhost <0.3328.0> 222d655c69 0.0.0.0:5984 127.0.0.1 undefined GET / 200 ok 28
[notice] 2017-06-06T00:37:21.819298Z couchdb#localhost <0.5598.0> 2f8986d14b 127.0.0.1:5984 127.0.0.1 undefined GET / 200 ok 1
But when I curled from my laptop, nothing showed up in the CouchDB log for the Connection refused error.
This suggests to me that the problem may be the firewall and not CouchDB, but I'm not sure about that.
Is Connection refused always the firewall? Would I be getting some other error if this were the CouchDB instance having a problem?
To the best of my knowledge both CouchDB and firewalld are configured correctly, but it's not working like I expected.
Any help would be appreciated, whether you know the problem or whether you can just help me discern if the problem is related to CouchDB or firewalld.
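The two failure modes do behave differently, and curl's exit code tells them apart: a refused connection means a TCP RST came back (nothing listening on the port, or a firewall rule that REJECTs), while a firewall that silently DROPs packets typically produces a timeout instead. A small local sketch (port 59999 here is simply assumed to be closed):

```shell
# Probe a closed port. Exit code 7 = connection refused (RST received);
# exit code 28 = timed out (typical of a DROP rule on the path).
curl -s --max-time 3 http://127.0.0.1:59999/ >/dev/null 2>&1
rc=$?
echo "curl exit code: $rc"
```

So "Connection refused" is not always the firewall; it can equally mean the service is only bound to 127.0.0.1, since packets to the external address then reach a port nobody is listening on.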

Postgres not allowing localhost but works with 127.0.0.1

Postgres is not accepting connections if I say -h localhost, but it works if I say -h 127.0.0.1:
[root@5d9ca0effd7f opensips]# psql -U postgres -h localhost -W
Password for user postgres:
psql: FATAL: Ident authentication failed for user "postgres"
[root@5d9ca0effd7f opensips]# psql -U postgres -h 127.0.0.1 -W
Password for user postgres:
psql (8.4.20)
Type "help" for help.
postgres=#
My /var/lib/pgsql/data/pg_hba.conf
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
# "local" is for Unix domain socket connections only
local   all       all                  trust
local   all       all                  ident
# IPv4 local connections:
host    all       all   127.0.0.1/32   trust
host    all       all   127.0.0.1/32   ident
# IPv6 local connections:
host    all       all   ::1/128        ident
If I add the following lines, the Postgres service fails to start:
host all all localhost ident
host all all localhost trust
What is wrong there?
Update
My /etc/hosts file:
[root@5d9ca0effd7f opensips]# cat /etc/hosts
172.17.0.2 5d9ca0effd7f
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
In pg_hba.conf, the first match counts. The manual:
The first record with a matching connection type, client address,
requested database, and user name is used to perform authentication.
There is no "fall-through" or "backup": if one record is chosen and
the authentication fails, subsequent records are not considered. If no
record matches, access is denied.
Note the reversed order:
host all all 127.0.0.1/32 trust
host all all 127.0.0.1/32 ident
But:
host all all localhost ident
host all all localhost trust
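The first-match-wins rule can be demonstrated mechanically. This sketch picks out the method Postgres would use for a host connection matching 127.0.0.1/32, run against an inline copy of the two lines in question:

```shell
# Find the first host rule matching 127.0.0.1/32 and print its auth method,
# mimicking pg_hba.conf's "first matching record wins" behaviour.
pg_hba=$(mktemp)
cat > "$pg_hba" <<'EOF'
host all all 127.0.0.1/32 trust
host all all 127.0.0.1/32 ident
EOF
first_match=$(awk '$1 == "host" && $4 == "127.0.0.1/32" { print $5; exit }' "$pg_hba")
echo "method used: $first_match"
rm -f "$pg_hba"
```

With the lines in this order trust wins and no real authentication happens; swap them and ident wins, which is exactly the difference the answer describes.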
Remember to reload after saving changes to pg_hba.conf. (Restart is not necessary.) The manual:
The pg_hba.conf file is read on start-up and when the main server
process receives a SIGHUP signal. If you edit the file on an active
system, you will need to signal the postmaster (using pg_ctl reload,
calling the SQL function pg_reload_conf(), or using kill -HUP) to
make it re-read the file.
If you really "add" the lines like you wrote, there should not be any effect at all. But if you replace the lines, there is.
In the first case, you get trust authentication method, which is an open-door policy. The manual:
PostgreSQL assumes that anyone who can connect to the server is
authorized to access the database with whatever database user name
they specify (even superuser names)
But in the second case you get the ident authentication method, which has to be set up properly to work.
Plus, as Cas pointed out later, localhost covers both IPv4 and IPv6, while 127.0.0.1/32 only applies to IPv4.
If you are actually using the outdated version 8.4, go to the old manual for 8.4. Are you aware that 8.4 reached EOL in 2014 and is no longer supported? Consider upgrading to a current version.
In Postgres 9.1 or later you would rather use peer than ident.
More:
Run batch file with psql command without password
The Problem
Postgres will potentially use IPv6 when you specify -h localhost; since the above pg_hba.conf specifies ident for ::1, a password prompt is returned.
However, when -h 127.0.0.1 is specified, Postgres is forced to use IPv4, which is set to trust in the above config and allows access without a password.
The Answer
Thus the answer is to modify the IPv6 host line in pg_hba.conf to use trust:
# IPv6 local connections:
host all all ::1/128 trust
Remember to reload or restart the Postgres service after making config changes.

Yosemite localhost resolver and dnsmasq fails offline

I set up my local dev environment similar to this post and everything was working fine, but recently I am unable to access my local dev domains when I am offline. When I am connected to the internet it works fine. I'm wondering if something changed with how the resolver is used in Yosemite; it seems as if resolver rules are ignored when I'm offline.
dnsmasq.conf:
address=/.dev/127.0.0.1
listen-address=127.0.0.1
/etc/resolver/dev
nameserver 127.0.0.1
When online:
ping -c 1 mydomain.dev
PING mydomain.dev (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.038 ms
--- mydomain.dev ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.038/0.038/0.038/0.000 ms
scutil --dns
resolver #1
search domain[0] : nomadix.com
nameserver[0] : 203.124.230.12
nameserver[1] : 202.54.157.36
if_index : 4 (en0)
flags : Request A records
reach : Reachable
resolver #2
domain : dev
nameserver[0] : 127.0.0.1
flags : Request A records, Request AAAA records
reach : Reachable,Local Address
when offline:
ping -c 1 mydomain.dev
ping: cannot resolve mydomain.dev: Unknown host
scutil --dns
No DNS configuration available
OSX Yosemite + resolver + dnsmasq offline === resolved !!
When you're offline, every interface on your computer except the loopback (127.0.0.1) goes down.
So if you want DNS resolution while offline, your DNS server has to listen on 127.0.0.1. In my case I chose dnsmasq, because you don't have to be a sysadmin to make it work, and it does!
Following these simple steps I got it working:
1) brew install dnsmasq
2) cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf
If, like me, it's not properly installed in /usr/local/opt, you should be able to find in the brew installation debug lines something like this:
make install PREFIX=/usr/local/Cellar/dnsmasq/2.72
in this case run the following command:
ln -s /usr/local/Cellar/dnsmasq/2.72 /usr/local/opt/dnsmasq
and then back to step 2
3) vi /usr/local/etc/dnsmasq.conf
and add your domains, for example like this:
address=/foo.dev/192.168.56.101
where every URL ending with foo.dev (http://www.foo.dev, http://foo.dev, http://what.ever.you.want.foo.dev, etc.) will resolve to 192.168.56.101 (this is the kind of IP you get using VirtualBox, 192.168.56.*)
4) sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
5) try it before putting it into the resolver
nslookup foo.dev 127.0.0.1
and expect this:
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: foo.dev
Address: 192.168.56.101
6) mkdir -p /etc/resolver
vi /etc/resolver/dev
add these two lines:
nameserver 127.0.0.1
search_order 1
7) ping foo.dev or hit http://foo.dev or http://so.cool.foo.dev in your browser address bar and you're good to go!
8) Be happy !! You can work offline AGAIN !!!!
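To make step 3) concrete: address=/foo.dev/192.168.56.101 matches foo.dev itself and every subdomain of it. A tiny shell restatement of that matching rule (the IP is the example one from above, and this is only an illustration of dnsmasq's behaviour, not its code):

```shell
# Mimic dnsmasq's address=/foo.dev/... matching: the bare domain and any
# name ending in .foo.dev resolve to the configured address; others do not.
resolve_like_dnsmasq() {
  case "$1" in
    foo.dev|*.foo.dev) echo "192.168.56.101" ;;
    *)                 echo "NXDOMAIN" ;;
  esac
}
a=$(resolve_like_dnsmasq foo.dev)
b=$(resolve_like_dnsmasq what.ever.you.want.foo.dev)
c=$(resolve_like_dnsmasq bar.dev)
printf '%s %s %s\n' "$a" "$b" "$c"
```

This is why a single address= line covers every vhost you invent under that domain without further config edits.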
I've been checking this question for months hoping for an answer. I believe this will help when 10.10.4 drops: http://arstechnica.com/apple/2015/05/new-os-x-beta-dumps-discoveryd-restores-mdnsresponder-to-fix-dns-bugs/
Apple is replacing discoveryd with mDNSResponder (as it used to be).
The problem is that when you are offline you should specify a resolver for the root domain '.':
When we search for www.google.com, a '.' (the root domain) is automatically appended at the end: www.google.com.
So all you have to do is:
Set all your network interfaces' DNS servers to 127.0.0.1:
networksetup -setdnsservers Ethernet 127.0.0.1
networksetup -setdnsservers Wi-Fi 127.0.0.1
...
Create a file /etc/resolver/whatever:
nameserver 127.0.0.1
domain .
See this question for more details