Can't change sshd's port in CoreOS - ssh

System release: CoreOS 2135.5.0
Kernel: 4.19.50-coreos-r1
System install method: VMware
When I change the port for sshd.service, it displays:
CoreOS-234 ssh # echo "Port 10000" >> /usr/share/ssh/sshd_config ;systemctl mask sshd.socket;systemctl enable sshd.service;systemctl restart sshd.service
-bash: /usr/share/ssh/sshd_config: Read-only file system

The file system you are working in is currently mounted read-only. Remounting the file system read-write should resolve the issue. You will need root privileges:
$ mount -o remount,rw /
Occasionally a file system runs in read-only mode because of kernel issues, in which case there may be further problems with the system that need to be debugged. Regarding kernel errors, you may want to have a look at the following link: https://unix.stackexchange.com/questions/436483/is-remounting-from-read-only-to-read-write-potentially-dangerous?rq=1

In CoreOS, /usr is designed to be a read-only file system. Remounting /usr is theoretically possible, but it is not officially recommended (you can refer to this).
I used the following commands to solve this problem:
sudo sed -i '$a\Port=60022' /etc/ssh/sshd_config && \
sudo systemctl mask sshd.socket && \
sudo systemctl enable sshd.service && \
sudo systemctl start sshd.service
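Alternatively, if you would rather keep socket activation instead of masking sshd.socket, a drop-in override for the socket unit should also work. This is a sketch based on standard systemd socket-unit behaviour; the port mirrors the question:
sudo mkdir -p /etc/systemd/system/sshd.socket.d
cat <<'EOF' | sudo tee /etc/systemd/system/sshd.socket.d/10-port.conf
[Socket]
ListenStream=
ListenStream=10000
EOF
sudo systemctl daemon-reload && sudo systemctl restart sshd.socket
The empty ListenStream= line clears the default port 22 before the new port is added.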

Related

Apache proxy + UNIX socket + SELINUX: How is it done?

I'm trying to get gunicorn running behind an Apache proxy via a UNIX socket in the file system. Long story short, it works with SELinux in non-enforcing mode but not when enforcing. I'm trying to fix that. Here's my socket file as created by gunicorn:
srwxrwxrwx. dh dh system_u:object_r:httpd_sys_content_t:s0 /var/www/wsgi/dham_wsgi.sock
Here's what audit2why has to say about this after a failed access via Apache:
type=AVC msg=audit(1641287516.397:870181): avc: denied { connectto } for pid=23897 comm="httpd" path="/var/www/wsgi/dham_wsgi.sock" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=0
Was caused by:
Missing type enforcement (TE) allow rule.
You can use audit2allow to generate a loadable module to allow this access.
Let's follow that hint, read some man pages and the Internet, and get to work:
$ sudo cat /var/log/audit/audit.log | audit2allow -m httpd_socket -l > httpd_socket.te
$ cat httpd_socket.te
module httpd_socket 1.0;

require {
        type httpd_t;
        type httpd_sys_content_t;
        class sock_file write;
}

#============= httpd_t ==============
allow httpd_t httpd_sys_content_t:sock_file write;
$ checkmodule -M -m -o httpd_socket.mod httpd_socket.te
checkmodule: loading policy configuration from httpd_socket.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 19) to httpd_socket.mod
$ semodule_package -o httpd_socket.pp -m httpd_socket.mod
$ sudo semodule -i httpd_socket.pp
But it doesn't work; everything is as before. Restarting Apache makes no difference. What now?
My initial audit2allow seems not to have caught all the problems because I used the '-l' flag (last policy reload). Using a more aggressive approach like the one below got me a few more entries in the generated module. After installing that, it worked.
sudo grep dham_wsgi /var/log/audit/audit.log | audit2allow -M httpd_socket
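For reference, the rule implied by the AVC denial quoted above (httpd_t connecting to an init_t unix stream socket) would look roughly like this in the generated module. This is a sketch; your audit log may yield additional rules:
module httpd_socket 1.0;

require {
        type httpd_t;
        type init_t;
        class unix_stream_socket connectto;
}

allow httpd_t init_t:unix_stream_socket connectto;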

chown: invalid user: ‘nfsnobody’ in Fedora 32 after installing nfs

I installed nfs using this command in Fedora 32:
sudo dnf install nfs-utils
and then I created a directory to export:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
Now I can mount this directory as the root user like this:
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
Now I want to go a step further and make it available to any user from any IP (so the client can mount the NFS share without using sudo). So I first tried to open up the permissions on this folder:
chmod 777 jenkins
and then I wanted to change the jenkins folder's user and group to nfsnobody:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ chown -R nfsnobody jenkins
chown: invalid user: ‘nfsnobody’
I cannot find any nfsnobody entry in /etc/passwd. What should I do to fix the invalid user: ‘nfsnobody’ problem? Shouldn't nfs-utils have added it automatically?
Right now nobody is used by default, probably since RedHat/CentOS version 8.
You can simply use
chown -R nobody jenkins
Or change it in /etc/idmapd.conf:
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
To put the changes into effect restart the rpcidmapd service and remount the NFSv4 filesystem:
service rpcidmapd restart
mount -o remount /nfs/mnt/point
On Red Hat Enterprise Linux 6, if the above settings have been applied, UIDs/GIDs match on server and client, and users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required:
# nfsidmap -c
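On a systemd-based Fedora such as the Fedora 32 in the question, the equivalent restart-and-remount step would be roughly the following (a sketch; /mnt is the mount point used above):
sudo systemctl restart nfs-idmapd
sudo mount -o remount /mnt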

GKE can't disable Transparent Huge Pages... permission denied

I am trying to run a Redis image in GKE. It works, except I get the dreaded "Transparent Huge Pages" warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Redis is currently too slow to be useful... So I tried turning off THP:
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: Permission denied
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ sudo echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: Permission denied
These permission errors are disconcerting. Redis wants THP off so it can work properly.
I did a little digging and found that Google uses a special OS image that makes /sys/ a read-only path. There's an alternative image based on Debian 7. It got me all excited, but in the end I have exactly the same problem.
So how do I stop Redis from being affected by THP on Google Container Engine?
It's not like I'm doing something unique here. Running databases in containers is pretty normal, and it's pretty normal for a database to malfunction when THP is enabled. So... what am I missing here?
Your command is slightly incorrect: echo runs as root, but the redirection itself (>) runs as your user, so it can't write to /sys/.
The following command works fine both on container-vm (debian based) and gci (chromeos based):
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
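For the same reason, piping the value into tee run under sudo also works, since tee (not the shell redirection) is what opens the file as root:
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled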
Persisting this setting on container-vm
Add this kernel command line parameter into /etc/default/grub (don't forget to run sudo update-grub and sudo reboot afterwards):
GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
Persisting this setting on gci
First, using the cloud console copy the instance template that is in use by the node pool.
Second, under metadata change the value for userdata:
#cloud-config
write_files:
  - path: /etc/systemd/system/hugepage.service
    permissions: 0644
    owner: root
    content: |
      [Unit]
      Description=Disable THP

      [Service]
      Type=oneshot
      ExecStart=/bin/sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"

      [Install]
      WantedBy=kubernetes.target
...
runcmd:
  - ...
  - systemctl enable hugepage.service
  - systemctl start kubernetes.target
Third, change the instance template to the newly created one:
gcloud compute instance-groups managed set-instance-template \
gke-YOUCLUSTER-YOURPOOL-grp \
--template=YOURNEWTEMPLATENAME \
--zone=...
Fourth, recreate the instance(s):
gcloud compute instance-groups managed recreate-instances \
gke-YOUCLUSTER-YOURPOOL-grp \
--zone=... \
--instances=...
The instances will lose all data and come up with THP disabled. All new instances in this node pool will have THP disabled as well.

How do I start a Plack application on boot

Does anyone know how to start a Plack application on boot?
The OS is Raspbian (Raspberry Pi).
I think I have to run it as a normal user (pi). That's how I start it manually.
I have tried adding something like this to rc.local but without success
su pi -c 'cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &'
This will in turn be used by Apache, and the app is written in Dancer2, if that makes any difference.
On a Raspberry Pi I use systemd to create and start a service, defined in the file:
/etc/systemd/system/dancer.service
[Unit]
Description=NCI Starman Dancer App
After=syslog.target
[Service]
Type=forking
ExecStart=/usr/local/bin/starman --daemonize -l 127.0.0.1:3004 \
--user myuser --group myuser --workers 8 -D -E production \
--pid /var/run/dancer.pid -I/home/myuser/webservers/Dancer/lib \
--error-log=/home/myuser/logs/dancer_error.log \
/home/myuser/webservers/Dancer/bin/app.psgi
Restart=always
[Install]
WantedBy=multi-user.target
And then I enable this with systemctl enable dancer.service
Or start it manually with systemctl start dancer.service
Instead of starman, you can of course use plackup.
The issue was that the Perl 5 environment variables (which are set in .bashrc) were not initialised.
So the solution was to run the plackup command inside bash -i so that it reads .bashrc, or to set PERL5LIB before invoking plackup.
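For example, with the systemd unit above the environment can be set directly in the [Service] section instead of relying on .bashrc (a sketch; the PERL5LIB path is hypothetical):
[Service]
Environment=PERL5LIB=/home/myuser/perl5/lib/perl5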
You may also want to use monit or supervisord to make sure your app is always running and gets restarted if it is killed for any reason, for example by the OOM killer.

Vagrant fails to mount NFS shared folders because of corrupted /etc/exports. How do I fix that file?

I recently tried to install a VM with vagrant but "vagrant up" always failed with the error:
Mounting NFS shared folders failed. This is most often caused by the NFS
client software not being installed on the guest machine. Please verify
that the NFS client software is properly installed, and consult any resources
specific to the linux distro you're using for more information on how to
do this.
The NFS client was properly installed on my machine, so I looked for other causes and found a blog post explaining that my /etc/exports might be corrupted. I restored exportsbak (which contains only commented examples), hoping that Vagrant would reconfigure that file properly... but it doesn't, and the error is still there.
How can I force vagrant to regenerate that file or fix it? Thanks.
Just delete the file.
sudo rm -f /etc/exports
The file will be recreated during the vagrant up process.
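For reference, after a successful vagrant up the regenerated file contains a block per machine delimited by Vagrant's markers, roughly like this (a sketch; the UID, path, and IP are examples):
# VAGRANT-BEGIN: 1000 0a1b2c3d-....
"/home/user/project/www" 192.168.33.10(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
# VAGRANT-END: 1000 0a1b2c3d-....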
I was not able to get NFS running on my Ubuntu host because I was using the Vagrant package from apt (v1.2.2).
I installed the latest Vagrant version (1.5) from here: http://www.vagrantup.com/downloads
and NFS worked.
Check whether the NFS server is installed:
dpkg -l | grep nfs-kernel-server
If it is not installed, install the required packages…
apt-get install nfs-kernel-server
apt-get install nfs-common
service nfs-kernel-server restart
sudo service portmap restart
mkdir -p /var/exports
Then in the Vagrantfile add this line under # shared folders...
config.vm.synced_folder "www", "/var/www", :nfs => { :mount_options => ["dmode=755", "fmode=755"] }
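Note that Vagrant's NFS synced folders also require a host-only/private network on the VM; if the Vagrantfile doesn't have one yet, add something like this as well (the IP is just an example):
config.vm.network "private_network", ip: "192.168.33.10"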
When Vagrant is starting it will ask for the root password. To run it without a root password, you can edit /etc/sudoers and add the following lines…
Cmnd_Alias VAGRANT_EXPORTS_ADD = /usr/bin/tee -a /etc/exports
Cmnd_Alias VAGRANT_NFSD_CHECK = /etc/init.d/nfs-kernel-server status
Cmnd_Alias VAGRANT_NFSD_START = /etc/init.d/nfs-kernel-server start
Cmnd_Alias VAGRANT_NFSD_APPLY = /usr/sbin/exportfs -ar
Cmnd_Alias VAGRANT_EXPORTS_REMOVE = /bin/sed -r -e * d -ibak /etc/exports
%sudo ALL=(root) NOPASSWD: VAGRANT_EXPORTS_ADD, VAGRANT_NFSD_CHECK, VAGRANT_NFSD_START, VAGRANT_NFSD_APPLY, VAGRANT_EXPORTS_REMOVE
If your host is Windows, then you need to install the Vagrant plugin vagrant-winnfsd:
$ vagrant plugin install vagrant-winnfsd