Error using ssh-agent with systemd - ssh

I have been running a Go binary with no issues from /home/$user on a remote machine. However, when I add the binary to systemd, I get error creating SSH agent: "SSH agent requested but SSH_AUTH_SOCK not-specified". My unit file is as follows:
[Unit]
Description=service
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/$user/go/src/dir/binary
[Install]
WantedBy=multi-user.target

When you run the binary directly, you are running in your personal shell environment, where ssh-agent is likely running and SSH_AUTH_SOCK has been set for your session.
When you run it via systemd, you are running as root, and ssh-agent and its related environment variables have not been set for that user.
Understanding ssh-agent and ssh-add has more details.
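One hedged workaround sketch: run the service as your own user and point it at a long-lived agent socket via Environment=. The socket path below is an assumption (GNOME keyring style); check echo $SSH_AUTH_SOCK in your own session, and an agent must actually be listening at that path:
[Service]
Type=simple
Restart=always
RestartSec=5s
# placeholder account name; run as the user that owns the keys
User=youruser
# hypothetical socket path for UID 1000; verify with: echo $SSH_AUTH_SOCK
Environment=SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
ExecStart=/home/$user/go/src/dir/binary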

Related

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
When you deploy a VM instance, by default no SSH key is configured yet, but you can also configure the SSH key when deploying the VM instance.
To elaborate on @JohnHanley's answer, I tried to test this in my environment:
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as I said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for detailed instructions.
Navigate to your VM instance, then Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated public key > Save > Power ON VM instance.
Then test whether the VM instance is accessible.
Documentation link How to Add SSH keys to project metadata.
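As a side note on the original "unrecognized format" error: GCE expects each ssh-keys metadata entry to be in USERNAME:KEY form. A rough CLI sketch (instance name, username, and file names are placeholders):
# generate a key pair
ssh-keygen -t rsa -f ~/.ssh/gce-key -C jane
# build the metadata entry in USERNAME:KEY form, e.g. jane:ssh-rsa AAAAB3... jane
echo "jane:$(cat ~/.ssh/gce-key.pub)" > formatted-keys.txt
# attach it to the instance metadata
gcloud compute instances add-metadata my-instance \
    --metadata-from-file ssh-keys=formatted-keys.txt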

Systemctl service not executed on boot

I created a short script to set an interface down:
[Unit]
Description=Down
[Service]
Type=simple
ExecStart=/usr/bin/bash -c 'ifconfig eth0 down'
I have run systemctl enable, and systemctl start works as expected to set the interface down.
However, after rebooting it is not executed, even after logging in. Is there anything I missed?
The above is done on an RPi4 with the 64-bit version of Raspberry Pi OS.
Could you perhaps be missing the [Install] section?
[Install]
WantedBy=multi-user.target
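Putting it together, a minimal sketch of the corrected unit (interface name taken from the question; Type=oneshot with RemainAfterExit is an assumption that suits a run-once command):
[Unit]
Description=Down
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/bash -c 'ifconfig eth0 down'
[Install]
WantedBy=multi-user.target
After editing, run sudo systemctl daemon-reload and then sudo systemctl reenable down.service (unit name assumed) so the new [Install] section takes effect.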

set umask for tomcat8 via tomcat.service

I am trying to set a custom umask for a Tomcat 8 instance. I tried to do it the proper way by using the UMask directive in the systemd tomcat unit, as seen here, without luck.
I'd like to set a 022 umask because the company devs need to access the tomcat/application logs and they are not in the same group as the tomcat user...
The odd thing is that the systemd documentation says:
Controls the file mode creation mask. Takes an access mode in octal notation. See umask(2) for details. Defaults to 0022.
But the logs (application/tomcat) are set to 640 (not the expected 644):
-rw-r----- 1 top top 21416 Feb 1 09:58 catalina.out
My service file:
# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target
[...]
User=top
Group=top
UMask=0022
[Install]
WantedBy=multi-user.target
Any thoughts about this?
Thanks
Try adding UMASK as an environment variable in tomcat's service file:
[Service]
...
Environment='UMASK=0022'
...
The default catalina.sh checks for $UMASK in the environment:
# Set UMASK unless it has been overridden
if [ -z "$UMASK" ]; then
    UMASK="0027"
fi
umask $UMASK
(It seems to me, that UMask from systemd is not used by Tomcat, but I am not completely sure.)
I think you can achieve this with systemd by doing the following:
~]# mkdir -p /etc/systemd/system/tomcat.service.d
~]# echo -e "[Service]\nUMask=0022" >/etc/systemd/system/tomcat.service.d/custom-umask.conf
~]# systemctl daemon-reload
~]# systemctl restart tomcat
/etc/systemd/system/tomcat.service.d/custom-umask.conf should override the default values.
Source: https://access.redhat.com/solutions/2220161
P.S.: A umask of 0022 would give a file 0644 permissions and a directory 0755.
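To verify which value the running unit actually carries after the reload, one can query the property (a sketch; unit name assumed to be tomcat):
systemctl show tomcat --property=UMask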
If you use jsvc to start Tomcat as a daemon process, you need to set the -umask argument on the jsvc command line.
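A rough sketch of such an invocation (all paths are placeholders; jsvc documents a -umask option):
jsvc -user tomcat \
     -umask 0022 \
     -cp /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar \
     org.apache.catalina.startup.Bootstrap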

Error with rabbit-mq server

I am trying to setup OpenStack on Ubuntu 12.04 using devstack. Now, the error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why am I getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you provide the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see, the hostname from cat /etc/hostname is the same as in /etc/hosts.
Run sudo service rabbitmq-server start to start the rabbitmq-server.
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. RabbitMQ depends on being able to resolve the hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This works for me.
First edit the hosts file:
sudo vim /etc/hosts
and set:
127.0.0.1 <hostname>
then enable the management plugin and restart the server:
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
In a clean environment this will not happen. You have probably run devstack several times, and one of the runs failed without being cleaned up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes; then it should be fine to run ./stack.sh.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh, as sketched below.
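A sketch of that clean-retry sequence (run from the devstack checkout; the pkill line is an assumption for removing leftover processes):
cd devstack
./unstack.sh && ./clean.sh
# confirm no rabbitmq processes survived the cleanup
ps -ef | grep [r]abbitmq
# assumption: kill anything still owned by the rabbitmq user
sudo pkill -u rabbitmq
./stack.sh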
Just to be sure, take a look at your local network:
ip addr
If the lo interface is not there, you should enable it:
ifconfig lo up
Then restart the server again and see whether it works now:
systemctl start rabbitmq-server
I had the same problem, though my /etc/hosts and DNS were OK. I suspect the SysV init script was started too early, when the network was not ready yet. I rewrote the startup script as a systemd unit on CentOS 7.8, and it seems to work well now:
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target
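To activate a unit like this (a sketch, assuming it is saved as /etc/systemd/system/rabbitmq-server.service):
sudo systemctl daemon-reload
sudo systemctl enable --now rabbitmq-server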

Changing vagrant ssh user creates permission errors

I'm trying to alter a Vagrant box I created for my office. Currently, like most boxes, running vagrant ssh logs me in as the vagrant user, but team members get frustrated having to use su - xxadmin to switch to our primary admin user.
In my Vagrantfile, I added: config.ssh.username = "xxadmin", but then I started receiving the common Vagrant error when running vagrant up:
[default] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -e '/^#VAGRANT-BEGIN/,/^#VAGRANT-END/ d' /etc/network/interfaces > /tmp/vagrant-network-interfaces
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
and when running vagrant halt:
[default] Attempting graceful shutdown of VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
shutdown -h now
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
What's going on here? Why would simply changing the ssh user create these errors? How do I find a way forward?
Specs:
OS X Mavericks (host)
Vagrant 1.3.5
Virtualbox 4.3.2
Debian 7 Wheezy (vm client)
In your box, you need to modify your sudoers file by running visudo and adding the following:
Defaults !requiretty
I kept running into this error until I made sure that my user's NOPASSWD sudoers entry was not being squashed.
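For reference, a sketch of what the relevant sudoers entries could look like for the xxadmin user from the question (edit via visudo; the NOPASSWD line mirrors what the stock vagrant user normally gets):
# allow Vagrant's non-interactive SSH commands to run sudo without a tty
Defaults:xxadmin !requiretty
# passwordless sudo, as the default vagrant user has
xxadmin ALL=(ALL) NOPASSWD: ALL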