Failed to grant SELinux permission for ioctl

I got the following SELinux permission denials:
[ 35.353551] type=1400 audit(38.680:14): avc: denied { ioctl } for
pid=266 comm="multilink" path="socket:[12798]" dev="sockfs" ino=12798
ioctlcmd=0x8946 scontext=u:r:multilink:s0 tcontext=u:r:multilink:s0
tclass=socket permissive=1
[ 35.353789] type=1400 audit(38.680:16): avc: denied { ioctl } for
pid=266 comm="multilink" path="socket:[12799]" dev="sockfs" ino=12799
ioctlcmd=0x8933 scontext=u:r:multilink:s0 tcontext=u:r:multilink:s0
tclass=packet_socket permissive=1
I tried adding the following rules to fix the issue:
allowxperm multilink self:socket ioctl SIOCETHTOOL;
allowxperm multilink self:packet_socket ioctl SIOCGIFINDEX;
But it didn't work; the same denials occurred again.
Am I missing something?

Adding the base allow rules will fix this issue. An allowxperm rule only filters ioctl commands on a permission that has already been granted, so the underlying allow rule for ioctl (and create) must exist as well:
allow multilink self:socket { create ioctl };
allow multilink self:packet_socket { create ioctl };
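For context on where the two ioctl names come from: the ioctlcmd values in the AVC messages are defined in the kernel's sockios.h header, so they can be looked up directly (assuming the header is installed at the usual path):
grep -E '0x8933|0x8946' /usr/include/linux/sockios.h
#define SIOCGIFINDEX 0x8933 /* name -> if_index mapping */
#define SIOCETHTOOL 0x8946 /* Ethtool interface */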


homebridge ssh connection wrong username

I'm trying to use an SSH plugin for Homebridge to turn off my TrueNAS. I have the following config:
{
  "accessory": "SSH",
  "name": "Remote Command",
  "command": "sudo shutdown -h now",
  "sshConfig": {
    "host": "truenas",
    "username": "root",
    "privateKey": "/home/pi/dev/SSH Shutdown/privatekey.ssh"
  }
}
I'm getting the error:
Error: Invalid username
at Client.connect (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh2/lib/client.js:130:11)
at connect (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh-exec/index.js:53:12)
at ReadFileContext.callback (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh-exec/index.js:109:7)
at FSReqCallback.readFileAfterOpen [as oncomplete] (node:fs:314:13)
The SSH connection with that username and private key works from the same Pi; I tested that (see the command reconstructed below). I tried the same with another user and got the same error.
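For reference, the manual test was along these lines (reconstructed from the config above; the exact invocation may have differed):
ssh -i "/home/pi/dev/SSH Shutdown/privatekey.ssh" root@truenas 'sudo shutdown -h now'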
Can someone help? Am I missing something obvious?

Can't ping Terraform-created droplets with Ansible

Using Terraform, I have created 3 droplets on DigitalOcean. As part of the run, Terraform writes the SSH key and creates an inventory.txt file in a folder.
Here is how it looks in the Terraform code:
resource "local_file" "servers_ipv4" {
content = join("\n", [
for idx, s in module.openvpn_do_infrastructure_module.servers_ipv4:
<<EOT
${var.droplet_names[idx]} ansible_host=${s} ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
EOT
])
filename = "${path.module}/ansible/inventory.txt"
}
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.key"
}
Then I have an ansible folder. After the script runs and the droplets are created, this folder contains 3 files. The first file is just ansible.cfg:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
The other two are created by Terraform: the SSH key, openvpn_do_ssh.key, and inventory.txt:
certificate-authority-server ansible_host=123.123.123.121 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
openvpn-server ansible_host=123.123.123.122 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
nextcloud-server ansible_host=123.123.123.123 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
And here is the problem: when I run ansible all -m ping, I get errors:
certificate-authority-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.121: Permission denied (publickey).",
    "unreachable": true
}
nextcloud-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.122: Permission denied (publickey).",
    "unreachable": true
}
openvpn-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.123: Permission denied (publickey).",
    "unreachable": true
}
Also, I can connect to those droplets with SSH directly and everything is just fine. Even when I change the permissions on the .key file, I still get the same error. I tried to get more logs with the -vvv flag (ansible all -m ping -vvv), and here is the most interesting info I found:
ESTABLISH SSH CONNECTION FOR USER: root
...
<123.123.123.121> (255, b'', b"Warning: Permanently added '123.123.123.121' (ED25519) to the list of known hosts.\r\nroot#123.123.123.121: Permission denied (publickey).\r\n")
<123.123.123.121> (255, b'', b'root#123.123.123.121: Permission denied (publickey).\r\n')
I have solved this problem. Here is what helped me:
First of all, I changed the extension of the SSH key file from .key to .pem.
To ansible.cfg I added the private_key_file line:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
private_key_file = ./openvpn_do_ssh.pem
The last thing I did was add a read-only file_permission for the SSH key:
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
file_permission = "0400"
}
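One note beyond the original fix (my observation, so treat it as an assumption): the generated inventory sets ansible_ssh_private_key, which is not a variable name Ansible recognizes; the recognized name is ansible_ssh_private_key_file. That would also explain why pointing private_key_file at the key in ansible.cfg is what made the connection work. A corrected inventory line would look like:
certificate-authority-server ansible_host=123.123.123.121 ansible_user=root ansible_ssh_private_key_file=openvpn_do_ssh.pem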
Hope it can help someone...

ssh connection refused when deploying meteor app from nitrous.io to Linode server using meteor up

See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my meteor app from my nitrous box to a remote server on Linode.
I followed the instructions in meteor up and got:
Invalid mup.json file: Server username does not exist
mup.json
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    // "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 1024 }
  }
]
So I uncommented the "username": "root" line in mup.json, ran mup logs -n 300, and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong in setting up the SSH key. I can access my remote server without a password after adding my SSH key to ~/.ssh/authorized_keys.
The content of the authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Do you guys have any idea what went wrong?
Problem solved by uncommenting the username and changing the port to 22. "Connection refused" meant nothing was listening on port 1024; sshd listens on port 22 by default:
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 22 }
  }
]
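A quick way to confirm which port sshd is actually listening on, from the client side (hypothetical commands using the placeholder address from the question):
ssh -p 22 root@123.456.78.90 'echo ok' # succeeds
ssh -p 1024 root@123.456.78.90 'echo ok' # ssh: connect to host ... port 1024: Connection refused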

Vagrant / Chef-solo not working after running recipe rvm::vagrant

After adding the recipe rvm::vagrant and running vagrant provision I got:
/usr/local/bin/chef-solo: line 23: /opt/vagrant_ruby/bin/chef-solo: No such file
or directory
Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
This issue should have been fixed:
https://github.com/fnichol/chef-rvm/issues/121
Even though I added the attributes:
'rvm' => {
  'vagrant' => {
    'system_chef_solo' => '/opt/vagrant_ruby/bin/chef-solo'
  }
}
I am still getting the error. How can I recover from it?
You have to make sure that '/opt/vagrant_ruby/bin/chef-solo' is the actual path of chef-solo. In my case it was /usr/bin/chef-solo. This is the part of my Vagrantfile that fixed it:
config.vm.provision :chef_solo do |chef|
  chef.json.merge! rvm: { vagrant: { system_chef_solo: '/usr/bin/chef-solo' } }
end
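To check where chef-solo actually lives on the guest before hard-coding the path, one option (not from the original answer) is:
vagrant ssh -c 'which chef-solo'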
This has been frustrating, as the answers are very time dependent. I had this issue too and fixed it today by adding the following to my Vagrantfile. I had to vagrant destroy and vagrant up again, so hopefully this doesn't break the next time I vagrant provision, but I have checked in the box and the path to chef-solo looks correct. I found this answer on GitHub:
chef.json = {
  rvm: {
    vagrant: {
      system_chef_solo: '/opt/chef/bin/chef-solo'
    },
    user_installs: [
      {
        user: 'vagrant',
        default_ruby: '2.2.1',
        rubies: ['2.2.1'],
        global: '2.2.1'
      }
    ]
  },
... rest of Vagrantfile

sshd is not getting started in a custom AMI

I created my own AMI, and when I start my instance, sshd does not get started. What might be the problem?
Please find the system log snippet below:
init: rcS main process (199) terminated with status 1
Entering non-interactive startup
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
Bringing up loopback interface: OK
Bringing up interface eth0:
Determining IP information for eth0...type=1400 audit(1337940238.646:4): avc: denied { getattr } for pid=637 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
martian source 255.255.255.255 from 169.254.1.0, on dev eth0
ll header: ff:ff:ff:ff:ff:ff:fe:ff:ff:ff:ff:ff:08:00
type=1400 audit(1337940239.023:5): avc: denied { getattr } for pid=647 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.515:6): avc: denied { getattr } for pid=674 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.560:7): avc: denied { getattr } for pid=690 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
done.
OK
Starting auditd: OK
Starting system logger: OK
Starting system message bus: OK
Retrigger failed udev events OK
Starting sshd: FAILED
The problem was due to SELinux. Once I disabled SELinux at boot by adding selinux=0 as a kernel argument in GRUB, the machine booted with the sshd service started and I was able to connect to it.
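For reference, the kernel line in GRUB's config would end up looking something like this (the kernel version is a placeholder; the root device matches the dev=xvde1 seen in the log above):
kernel /boot/vmlinuz-<version> ro root=/dev/xvde1 console=hvc0 selinux=0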