I cannot find a way to use the running SSH server on GitHub Actions.
When I try to connect to 127.0.0.1 via ssh, a server is there and responds, but it
somehow ignores the configuration files in .ssh (or whatever the case may be).
Here is the script I used (the general setup does not seem to influence the results):
ssh-keygen -t ed25519 -f ~/.ssh/whatever -N ''
cat > ~/.ssh/config <<EOF
Host host.example
User $USER
HostName 127.0.0.1
IdentityFile ~/.ssh/whatever
EOF
echo -n 'from="127.0.0.1" ' | cat - ~/.ssh/whatever.pub > ~/.ssh/authorized_keys
ssh -o 'StrictHostKeyChecking no' host.example id
I am not satisfied with the results, since I cannot reproduce the log locally
(every machine I have behaves normally, i.e. lets me execute the command).
Generating public/private ed25519 key pair.
Created directory '/home/runner/.ssh'.
Your identification has been saved in /home/runner/.ssh/whatever.
Your public key has been saved in /home/runner/.ssh/whatever.pub.
The key fingerprint is:
SHA256:2ZCprVg5rZXp0IguQlCanUVTlCX7IFt2TPTnimdk0gM runner@fv-az60
The key's randomart image is:
+--[ED25519 256]--+
| ..+o+++ |
| = o ..= + |
|+ o . = E . . |
|. * # O o |
| . o B S * . |
|. . o B = o |
|. . o o o + |
| . . o |
| |
+----[SHA256]-----+
Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
runner@127.0.0.1: Permission denied (publickey,password).
##[error]Process completed with exit code 255.
This is a permissions issue. By default, the permissions on the home folder on the runner are too broad for the SSH daemon to accept (it is group/world readable and writable), so the server side rejects your connection. Removing the group/other read and write permissions on your home directory fixes this issue.
To fix it, add the following to your script, just before the ssh call. This command removes the group/other read and write permissions on the home directory:
chmod og-rw ~
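If you want to confirm that this is the reason, sshd typically logs a complaint about "bad ownership or modes" for the home directory when it rejects a key for this reason. A quick check (the log path and passwordless sudo are assumptions about the Ubuntu runner image):
# Look for sshd's complaint about home-directory ownership/modes
sudo grep -i 'bad ownership or modes' /var/log/auth.log || true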
Evidence:
name: ssh-example
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run a multi-line script
run: |
ssh-keygen -t ed25519 -f ~/.ssh/whatever -N ''
cat > ~/.ssh/config <<EOF
Host host.example
User $USER
HostName 127.0.0.1
IdentityFile ~/.ssh/whatever
EOF
echo -n 'from="127.0.0.1" ' | cat - ~/.ssh/whatever.pub > ~/.ssh/authorized_keys
echo "Before fixing permissions on authorized_keys, notice home directory is world read/write"
ls -la ~/.ssh
ssh -o 'StrictHostKeyChecking no' host.example id || echo "ssh failed as expected... trying to fix permissions"
chmod og-rw ~
echo "After fixing permissions on home folder ~ ..."
ls -la ~/.ssh
ssh -o 'StrictHostKeyChecking no' host.example id
Output from the GitHub Action:
Generating public/private ed25519 key pair.
Created directory '/home/runner/.ssh'.
Your identification has been saved in /home/runner/.ssh/whatever.
Your public key has been saved in /home/runner/.ssh/whatever.pub.
The key fingerprint is:
SHA256:vKl342+LK4YP7Kj00Eqm1Jnst/7ED3Pzu/6TPOiHoUc runner@fv-az76
The key's randomart image is:
+--[ED25519 256]--+
| |
| |
| |
| . |
| S |
| o.o.. o E |
| .==. o*ooo = . |
|.=.+ +ooO.==.* |
|. oo=o==.=B#Boo |
+----[SHA256]-----+
Before fixing permissions on authorized_keys, notice home directory is world read/write
total 24
drwx------ 2 runner docker 4096 Feb 23 21:58 .
drwxrwxrwx 8 runner docker 4096 Feb 23 21:58 ..
-rw-r--r-- 1 runner docker 113 Feb 23 21:58 authorized_keys
-rw-r--r-- 1 runner docker 89 Feb 23 21:58 config
-rw------- 1 runner docker 411 Feb 23 21:58 whatever
-rw-r--r-- 1 runner docker 96 Feb 23 21:58 whatever.pub
Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
runner@127.0.0.1: Permission denied (publickey,password).
ssh failed as expected... trying to fix permissions
After fixing permissions on home folder ~ ...
total 28
drwx------ 2 runner docker 4096 Feb 23 21:58 .
drwx--x--x 8 runner docker 4096 Feb 23 21:58 ..
-rw-r--r-- 1 runner docker 113 Feb 23 21:58 authorized_keys
-rw-r--r-- 1 runner docker 89 Feb 23 21:58 config
-rw-r--r-- 1 runner docker 222 Feb 23 21:58 known_hosts
-rw------- 1 runner docker 411 Feb 23 21:58 whatever
-rw-r--r-- 1 runner docker 96 Feb 23 21:58 whatever.pub
uid=1001(runner) gid=115(docker) groups=115(docker)
"Permission denied" can have several causes.
For context, this is code from the OpenSSH GitHub repository:
if (options.control_master == SSHCTL_MASTER_ASK ||
    options.control_master == SSHCTL_MASTER_AUTO_ASK) {
        if (!ask_permission("Allow shared connection to %s? ", host)) {
                debug2("%s: session refused by user", __func__);
                reply_error(reply, MUX_S_PERMISSION_DENIED, rid,
                    "Permission denied");
                /* ... */
This error is produced when the connection is refused.
Probable causes:
the sshd daemon, i.e. the SSH server, is not running;
the user has no permission to ssh;
only root has permission to ssh.
Check:
# systemctl status sshd.service | grep Active
Also check:
# cat /etc/ssh/sshd_config
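For example, a quick sanity check could look like this (which options matter for your setup is an assumption; note that on Debian/Ubuntu the unit is usually ssh.service rather than sshd.service):
# Is the server running at all?
systemctl status sshd.service | grep Active
# Are public keys accepted, and where does sshd look for them?
# (commented lines show the compiled-in defaults)
grep -Ei '^#?(PubkeyAuthentication|AuthorizedKeysFile|PermitRootLogin|StrictModes)' /etc/ssh/sshd_config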
I don't think the .ssh permissions are the issue, as those files are created by the user, and the user's permission mask usually yields 755, so the user lacking permission on its own .ssh directory is highly unlikely.
Do let me know if the problem persists.
Related
After generating SSH keys, I have tried to log in, but I am getting the message Permission denied (publickey). This seems to be an error in access permissions.
After generating SSH keys for logging into the cluster, when I run the command below from the terminal
$> ls -l ~/.ssh/id_*
I should get in return:
-rw------- 1 git git 751 Mar 1 20:16 /home/username/.ssh/id_rsa
-rw-r--r-- 1 git git 603 Mar 1 20:16 /home/username/.ssh/id_rsa.pub
Instead I am getting:
-rw-r--rw- 1 ubuntu ubuntu 3381 Feb 15 18:35 /home/ubuntu/.ssh/id_rsa
-rw-r--rw- 1 ubuntu ubuntu 737 Feb 15 18:35 /home/ubuntu/.ssh/id_rsa.pub
Hence login fails with message:
Permission denied (publickey).
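For reference, tightening the files to the permissions shown in the expected listing would look something like this (a sketch; the paths assume the default key names from the question):
# Private key readable by the owner only; the public key may stay world-readable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub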
Using:
podman version 4.2.0
AlmaLinux 8.7
I've created an image based on redhat/ubi8 with the following Dockerfile:
FROM docker.io/redhat/ubi8
RUN dnf install -y gcc-c++ cmake python39 openssh git
RUN useradd -ms /bin/bash foobar -g users
USER foobar
WORKDIR /home/foobar/
RUN mkdir -p .ssh
$ docker build -t mount_test_image .
I run the image from a directory that contains a directory ssh, and I want to mount that directory to /home/foobar/.ssh with ownership of foobar.users
$ ls -l
-rw-r--r--. 1 host_user users 269 Dec 7 09:10 Dockerfile
drwxrwxr-x. 2 host_user users 18 Dec 2 10:41 ssh
docker run -it -d --rm --mount type=bind,src=ssh,target=/home/foobar/.ssh --name=mount_test mount_test_image
However when I enter the container via
docker exec -it mount_test '/bin/sh'
The home directory looks like this:
drwx------. 1 foobar users 18 Dec 7 17:10 .
drwxr-xr-x. 1 root root 21 Dec 7 17:10 ..
-rw-r--r--. 1 foobar users 18 Jun 20 11:31 .bash_logout
-rw-r--r--. 1 foobar users 141 Jun 20 11:31 .bash_profile
-rw-r--r--. 1 foobar users 376 Jun 20 11:31 .bashrc
drwxrwxr-x. 2 root root 18 Dec 2 18:41 .ssh
I obviously get a "permission denied" when trying to access that directory.
sh-4.4$ ls /home/foobar/.ssh
ls: cannot open directory '/home/foobar/.ssh': Permission denied
I tried changing the ownership of the directory on the host to match the uid of the container user, but then it just looks like this:
drwxrwxr-x. 2 nobody root 18 Dec 2 18:41 .ssh
My host user uid:gid is 501:100 and the container user is 1000:100. Right now I'm just trying to generate an SSH key to upload to Bitbucket, but this seems like a simple feature a container should have. All the tutorials and examples just stop after the --mount command instruction and say "there ya go!". What good is the mount point if you can't read/write it?
EDIT:
I tried on Arch Linux using docker instead of podman, and it works as one would expect with both -v and --mount: the owner of the mounted directory in the container matches the uid and gid of the host. Is this a bug in podman, or is it just done differently?
You are using a non-root user (foobar) in a rootless container. You must use --userns=keep-id for the container user to see the mounted volumes.
https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md#using-volumes
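A minimal sketch of the run command with that flag, assuming the same image and mount from the question:
# keep-id maps your host UID/GID into the container's user namespace,
# so the bind-mounted directory keeps your host ownership instead of showing up as an unmapped user
podman run -it -d --rm --userns=keep-id \
  --mount type=bind,src=ssh,target=/home/foobar/.ssh \
  --name=mount_test mount_test_image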
I'm trying to install httpd in Docker, and I wrote a Dockerfile like this:
FROM centos
VOLUME /var/log/httpd
VOLUME /etc/httpd
VOLUME /var/www/html
# Update Yum Repository
RUN yum clean all && \
yum makecache fast && \
yum -y update && \
yum -y install httpd
RUN yum clean all
EXPOSE 80
CMD /usr/sbin/httpd -D BACKGROUND && tail -f /var/log/httpd/access_log
It works if I run the image without host volumes, but it fails if I use these parameters:
--volume /data/httpd/var/www/html:/var/www/html --volume /data/httpd/var/log:/var/log --volume /data/httpd/etc:/etc/httpd
the error message is:
httpd: Could not open configuration file /etc/httpd/conf/httpd.conf: No such file or directory
I checked the mount point, which is empty:
# ll /data/httpd/etc/
total 0
But if I don't use "volume", by default Docker copies the files over to a temp folder:
# ll /var/lib/docker/volumes/04f083887e503c6138a65b300a1b40602d227bb2bbb58c69b700f6ac753d1c34/_data
total 4
drwxr-xr-x. 2 root root 35 Nov 3 03:16 conf
drwxr-xr-x. 2 root root 78 Nov 3 03:16 conf.d
drwxr-xr-x. 2 root root 4096 Nov 3 03:16 conf.modules.d
lrwxrwxrwx. 1 root root 19 Nov 3 03:16 logs -> ../../var/log/httpd
lrwxrwxrwx. 1 root root 29 Nov 3 03:16 modules -> ../../usr/lib64/httpd/modules
lrwxrwxrwx. 1 root root 10 Nov 3 03:16 run -> /run/httpd
So I'm confused: why does Docker refuse to copy them to the named location, and how do I fix this problem?
This is indeed documented behavior:
Volumes are initialized when a container is created. If the container’s
base image contains data at the specified mount point, that existing data
is copied into the new volume upon volume initialization. (Note that this
does not apply when mounting a host directory.)
i.e. when you mount the /etc/httpd volume with --volume /data/httpd/etc:/etc/httpd, no data will be copied.
You can also see https://github.com/docker/docker/pull/9092 for a more detailed discussion on why it works this way (in case you are interested).
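If you do not strictly need a host directory, a named volume avoids the problem entirely, since a freshly created named volume is pre-populated from the image content at the mount point. A sketch (the volume name httpd-etc and the image name my-httpd are made up for illustration):
# Named volumes are initialized from the image, so httpd.conf shows up inside them
docker volume create httpd-etc
docker run -d -p 80:80 --volume httpd-etc:/etc/httpd my-httpd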
A usual workaround for this is to copy your initial data to the volume folder (from within the container) in your ENTRYPOINT or CMD script, in case it is empty.
Note that your initial data set must be kept outside the volume folder (e.g. as a .tar file in /opt) for this to work, as the volume folder will be shadowed by the host folder mounted over it.
Given below are a sample Dockerfile and script that demonstrate the behavior:
Sample Dockerfile
FROM debian:stable
RUN mkdir -p /opt/test/; touch /opt/test/initial-data-file
VOLUME /opt/test
Sample script (try various volume mappings)
#Build image
>docker build -t volumetest .
Sending build context to Docker daemon 2.56 kB
Step 0 : FROM debian:stable
---> f7504c16316c
Step 1 : RUN mkdir -p /opt/test/; touch /opt/test/initial-data-file
---> Using cache
---> 1ea0475e1a18
Step 2 : VOLUME /opt/test
---> Using cache
---> d8d32d849b82
Successfully built d8d32d849b82
#Implicit Volume mapping (as defined in Dockerfile)
>docker run --rm=true volumetest ls -l /opt/test
total 0
-rw-r--r-- 1 root root 0 Nov 4 18:26 initial-data-file
#Explicit Volume mapping
> docker run --rm=true --volume /opt/test volumetest ls -l /opt/test/
total 0
-rw-r--r-- 1 root root 0 Nov 4 18:26 initial-data-file
#Explicitly Mounted Volume
>mkdir test
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
total 0
And here is a simple entrypoint script, illustrating a possible workaround:
#!/bin/bash
VOLUME=/opt/test
DATA=/opt/data-volume.tar.gz
if [[ -n $(find "$VOLUME" -maxdepth 0 -empty) ]]
then
echo Preseeding VOLUME $VOLUME with data from $DATA...
tar -C "$VOLUME" -xvf "$DATA"
fi
"$#"
Add the following to the Dockerfile:
COPY data-volume.tar.gz entrypoint /opt/
ENTRYPOINT ["/opt/entrypoint"]
First run:
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
Preseeding VOLUME /opt/test with data from /opt/data-volume.tar.gz...
preseeded-data
total 0
-rw-r--r-- 1 1001 users 0 Nov 4 18:43 preseeded-data
Subsequent runs:
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
total 0
-rw-r--r-- 1 1001 users 0 Nov 4 18:43 preseeded-data
Note that the volume folder will only be populated with data if it was completely empty before.
I'm trying to use sudo as a NON-ROOT user, so I did some research on the internet and found that the user (in this case creaz) needs to be added to the sudoers file.
So I ran [root@vps1484 ~]$ visudo as root and added:
creaz ALL=(ALL) ALL
When I'm connected to creaz@creaz.pro via SSH and I type sudo, I'm getting:
[root@vps1484 ~]$ su creaz
creaz@creaz.pro [~]# sudo
sudo: effective uid is not 0, is sudo installed setuid root?
If I do:
creaz@creaz.pro [~]# ls -l `which sudo`
---x--x--x 1 root root 123832 Aug 13 2015 /usr/bin/sudo*
Did I miss something?
Update:
[root@vps1484 ~]$ stat /usr/bin/sudo
File: `/usr/bin/sudo'
Size: 123832 Blocks: 248 IO Block: 4096 regular file
Device: a7a0b651h/2812327505d Inode: 149272 Links: 1
Access: (4111/---s--x--x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2016-05-27 02:00:09.643651919 +1000
Modify: 2016-05-11 09:13:13.000000000 +1000
Change: 2016-05-27 01:11:02.486593149 +1000
[root@vps1484 ~]$
The sudo executable needs to have the setuid bit set:
$ ll `which sudo`
---s--x--x. 1 root root 139024 Nov 5 2015 /usr/bin/sudo
You can achieve that by running
chmod 4111 `which sudo`
under the root account.
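Equivalently, you can add just the setuid bit on top of the existing mode and verify the result afterwards (still as root; a sketch of the same fix):
chmod u+s `which sudo`    # adds the setuid bit, giving the ---s--x--x mode shown above
ls -l `which sudo`        # confirm the mode now starts with ---s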
I'm trying to get user directories to work with Lighttpd on Arch Linux. But after creating the public_html directory, placing an index.html file in it, setting permissions, configuring Lighttpd to use the user-directory module, and restarting lighttpd, it still gives a 404 for one specific user (while it works for another one).
Here are my configuration files:
$ cat /etc/lighttpd/lighttpd.conf
# This is a minimal example config
# See /usr/share/doc/lighttpd
# and http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions
server.port = 80
server.username = "http"
server.groupname = "http"
server.document-root = "/srv/http"
server.errorlog = "/var/log/lighttpd/error.log"
dir-listing.activate = "enable"
index-file.names = ( "index.html" )
mimetype.assign = (
".html" => "text/html",
".txt" => "text/plain",
".css" => "text/css",
".js" => "application/x-javascript",
".jpg" => "image/jpeg",
".jpeg" => "image/jpeg",
".gif" => "image/gif",
".png" => "image/png",
"" => "application/octet-stream"
)
include "conf.d/userdir.conf"
include "conf.d/cgi.conf"
$ cat /etc/lighttpd/conf.d/userdir.conf
server.modules += ( "mod_userdir" )
userdir.path = "public_html"
This is what things look like for user aardbei:
$ cat /home/aardbei/public_html/index.html
doot doot
$ ls -ld /home/aardbei/public_html
drwxrwxrwx 2 aardbei aardbei 4096 Mar 27 13:10 /home/aardbei/public_html
$ ls -ld /home/aardbei/public_html/index.html
-rwxrwxrwx 1 aardbei aardbei 37 Mar 27 13:11 /home/aardbei/public_html/index.html
But even after restarting the server with sudo systemctl restart lighttpd, I still get a 404 at the URI /~aardbei/index.html instead of what I should get: "doot doot".
However, this is what things look like for the user madeline:
$ cat /home/madeline/public_html/index.html
blah blah blah
$ ls -ld /home/madeline/public_html/
drwxrwxrwx 19 madeline madeline 4096 Mar 27 13:33 /home/madeline/public_html/
$ ls -ld /home/madeline/public_html/index.html
-rw-r--r-- 1 madeline madeline 15 Mar 27 13:33 /home/madeline/public_html/index.html
So the important parts are the same. And yet going to URI /~madeline/index.html does what it should do: it shows "blah blah blah"
Nothing looks relevant here, but here are groups for the two users:
$ groups madeline
wheel video audio wireshark madeline
$ groups aardbei
wheel aardbei
What is going on? Why doesn't the user directory for the user aardbei work in Lighttpd?
Following the instructions on the Arch Linux wiki for Apache worked: https://wiki.archlinux.org/index.php/Apache_HTTP_Server#User_directories
$ chmod o+x /home/aardbei
$ chmod o+x /home/aardbei/public_html
$ chmod -R o+r /home/aardbei/public_html
I'm still not sure I understand the permissions at play here, but it solves my problem.
The accepted answer works because the user that the web server runs as needs access to the user's home directory in order to reach their public_html.
Granting o+x allows other users to access a file and/or sub-directory within, but does not allow them to list the contents of the directory. Basically, they can pass through it as long as they know what they're looking for (public_html), but they cannot get a list of the contents otherwise.
Example
Here's my home directory:
$ ls -dl /home/sam
drwx-----x. 3 sam sam 4096 Nov 3 11:08 /home/sam
$ ls -dl /home/sam/public_html
drwxr-xr-x. 2 sam users 4096 Nov 3 11:09 /home/sam/public_html
Now, the web server's user, lighttpd, cannot list the contents of my home directory:
$ sudo -u lighttpd ls /home/sam
ls: cannot open directory /home/sam: Permission denied
But it can see a specific directory if it happens to know its name:
$ ls -dl /home/sam/public_html
drwxr-xr-x. 2 sam users 4096 Nov 3 11:09 /home/sam/public_html
Here's another directory that the web server can see too:
$ sudo -u lighttpd ls -ld /home/sam/someotherdir
drwx------. 2 sam users 4096 Nov 3 11:22 /home/sam/someotherdir
And files within the public_html are visible as well:
$ ls -dl /home/sam/public_html/index.html
-rw-r--r--. 1 sam users 3 Nov 3 11:09 /home/sam/public_html/index.html
Normal permissions apply here, so if you don't want the web server to see something, make it readable only by your user and group, not by everyone else (others).
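For example, to keep the someotherdir from above private while keeping public_html served (a sketch of the same idea on my directories):
# Deny "others" (which includes the web server's user) any access to a private directory
chmod o-rwx ~/someotherdir
# Keep the pieces the web server actually needs reachable and readable
chmod o+x ~ ~/public_html
chmod -R o+r ~/public_html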