gsettings changes are not working over ssh

I am trying to set the idle timeout on Ubuntu 14.04 using gsettings over ssh.
The commands I am using look like this:
dbus-launch gsettings set org.gnome.desktop.session idle-delay 600
dbus-launch gsettings set org.gnome.desktop.screensaver lock-delay 0
dbus-launch gsettings set org.gnome.desktop.screensaver lock-enabled true
dbus-launch gsettings set org.gnome.desktop.screensaver idle-activation-enabled true
After the commands are executed with various timeout periods, the changes take effect, but they are lost after a reboot or logout.
Is it possible to make the timeout changes persist across a reboot or logout?

Basically, by launching a new dbus instance with dbus-launch, you are saving the settings to the wrong place. Prefixing the gsettings invocation with dbus-launch silences the error messages, but the changes are written to a throwaway session bus and are never seen by the user's real session.
The target user already has a running dbus process; over ssh, your shell simply doesn't receive the environment variables needed to address it.
The correct way to edit gsettings via ssh is to first identify the DBUS_SESSION_BUS_ADDRESS of the existing dbus process and set it as an environment variable. Thus:
PID=$(pgrep -u "$USER" gnome-session | head -n 1)
export DBUS_SESSION_BUS_ADDRESS=$(tr '\0' '\n' < /proc/$PID/environ | grep '^DBUS_SESSION_BUS_ADDRESS=' | cut -d= -f2-)
# And now:
gsettings set org.gnome.desktop.session idle-delay 600

On Ubuntu 18.04 you have to set not only DBUS_SESSION_BUS_ADDRESS but also XDG_RUNTIME_DIR. You can do so with this command (replace 121 with the user's UID and gdm with the user name):
su gdm -s /bin/bash -c 'XDG_RUNTIME_DIR=/run/user/121 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/121/bus gsettings get org.gnome.desktop.session idle-delay'
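The same pattern works for an ordinary desktop user; in the sketch below, username is a placeholder, the UID is looked up with id -u rather than hard-coded, and, as with the gdm example, it assumes you can su to the target user:
uid=$(id -u username)
su username -s /bin/bash -c "XDG_RUNTIME_DIR=/run/user/$uid DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$uid/bus gsettings set org.gnome.desktop.session idle-delay 600"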

Related

One-liner ssh gets different environment variables than a normal ssh session

I am using AWS Beanstalk, in case it may be relevant to the question.
The issue I have is that when I run this from my local terminal:
ssh mozart-api printenv
I am missing most of the environment variables, whereas if I do:
ssh mozart-api
... wait for the session to open ...
printenv
I get all environment variables as I was expecting.
At first I thought it could be an ssh configuration on the server, but I can't find anything strange. If I do:
ssh mozart-api "export hello=123 && echo $hello"
then it outputs 123, which means that variables can be set and queried; however, I just cannot see the existing variables from the server.
This is causing an issue because I am preparing a script that will run a command in ssh on this server, but because the variables are not loaded the project fails to open the database.
I tried reimporting them in a one-liner:
ssh mozart-api "sudo chmod +777 /etc/profile.d/sh.local && (/opt/elasticbeanstalk/bin/get-config environment | jq -r 'keys[] as \$k | \"echo export \(\$k)=\(.[\$k])\"') > /etc/profile.d/sh.local && printenv"
But I still can't see the newly added variables.
ssh mozart-api executes a login shell, which probably sources one or more files that define your environment variables.
ssh mozart-api printenv executes printenv instead of a login shell, so the only variables you see are the ones you inherit from the parent process, not any of the variables defined in your shell configuration files.
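One workaround, assuming bash on the remote host, is to explicitly run the command through a login shell so those files get sourced first:
ssh mozart-api 'bash -lc printenv'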

Can't run powerline-config during startup (in .tmux.conf)

When I start tmux, I get a failure when trying to configure powerline. I have set an environment variable with this:
export POWERLINE_CONFIG_COMMAND=`which powerline-config`
My ~/.tmux.conf contains the following:
if-shell "POWERLINE_CONFIG_COMMAND" \
run-shell "$POWERLINE_CONFIG_COMMAND tmux setup"
The error I get is:
unknown command: /path/to/powerline-config
I can run the config command manually after tmux starts with this:
$POWERLINE_CONFIG_COMMAND tmux setup
I don't understand why tmux can't run the command during the startup when it can run just fine afterwards.
I don't understand how you get that error. As written, you shouldn't get any message at all, and nothing should work.
if-shell "POWERLINE_CONFIG_COMMAND" \
run-shell "$POWERLINE_CONFIG_COMMAND tmux setup"
will fail, because POWERLINE_CONFIG_COMMAND is not a command. Your if-shell should have a $ in front of POWERLINE_CONFIG_COMMAND.
Let's assume that was a typo, and it's correct in your actual .conf. Then, the problem is that run-shell runs against tmux, the way it'd run if you typed <prefix>: in your tmux session.
tmux $POWERLINE_CONFIG_COMMAND tmux setup is not a valid command.
You could instead do
run-shell 'send-keys "$POWERLINE_CONFIG_COMMAND tmux setup" Enter'
if you wanted it run in a single pane.
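Putting the two fixes together, the .tmux.conf fragment would look like this (a sketch following this answer's suggestions; it still assumes POWERLINE_CONFIG_COMMAND is exported in the environment tmux was started from):
if-shell "$POWERLINE_CONFIG_COMMAND" \
run-shell 'send-keys "$POWERLINE_CONFIG_COMMAND tmux setup" Enter'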

Openshift rhc ssh IO error after env set JAVA_OPTS_EXT command

I've run into an issue with OpenShift: after setting the environment variable with rhc env set JAVA_OPTS_EXT=" -D spring.profile.active=production", my ssh access broke down, giving me a weird access-rights error. Any ideas?
I don't know why setting a different value on JAVA_OPTS_EXT locked me out of permissions, but it was sufficient to unset the variable and then set it again to the desired value; everything started to work smoothly again afterwards.
The command to unset an environment variable: rhc env unset {VARIABLE1} -a {APP_NAME}
The command to set an environment variable: rhc env set {VARIABLE1}={VALUE1} -a {APP_NAME}
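Concretely, the recovery sequence would look like this (myapp is a placeholder for the application name; the value is the one from the question):
rhc env unset JAVA_OPTS_EXT -a myapp
rhc env set JAVA_OPTS_EXT=" -D spring.profile.active=production" -a myapp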
For further info about manipulating environment variables, refer to https://developers.openshift.com/managing-your-applications/environment-variables.html

How to shorten an inittab process entry, a.k.a., where to put environment variables that will be seen by init?

I am setting up a Debian Etch server to host ruby and php applications with nginx. I have successfully configured inittab to start the php-cgi process on boot with the respawn action. After serving 1000 requests, the php-cgi worker processes die and are respawned by init. The inittab record looks like this:
50:23:respawn:/usr/local/bin/spawn-fcgi -n -a 127.0.0.1 -p 8000 -C 3 -u someuser -- /usr/bin/php-cgi
I initially wrote the process entry (everything after the 3rd colon) in a separate script (simply because it was long) and put that script name in the inittab record, but because the script would run its single line and die, the syslog was filled with errors like this:
May 7 20:20:50 sb init: Id "50" respawning too fast: disabled for 5 minutes
Thus, I got rid of the script file and just put the whole line in the inittab. Henceforth, no errors show up in the syslog.
Now I'm attempting the same with thin to serve a rails application. I can successfully start the thin server by running this command:
sudo thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 -d start
It works apparently exactly the same whether I use the -d (daemonize) flag or not. Command line control comes immediately back (the processes have been daemonized) either way. If I put that whole command (minus the sudo and with absolute paths) into inittab, init complains (in syslog) that the process entry is too long, so I put the options into an exported environment variable in /etc/profile. Now I can successfully start the server with:
sudo thin $THIN_OPTIONS start
But when I put this in an inittab record with the respawn action
51:23:respawn:/usr/local/bin/thin $THIN_OPTIONS start
the logs clearly indicate that the environment variable is not visible to init; it's as though the command were simply "thin start."
How can I shorten the inittab process entry? Is there another file than /etc/profile where I could set the THIN_OPTIONS environment variable? My earlier experience with php-cgi tells me I can't just put the whole command in a separate script.
Why don't you call a wrapper script that starts thin with your options?
start_thin.sh:
#!/bin/bash
/usr/local/bin/thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 -d start
and then:
51:23:respawn:/usr/local/bin/start_thin.sh
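Don't forget to make the wrapper executable, or init will not be able to run it:
chmod +x /usr/local/bin/start_thin.sh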
init.d script
Use a script in /etc/rc.d/init.d and set the runlevel.
Here are some examples with thin, ruby, and apache that provide example initscripts to use:
http://articles.slicehost.com/2009/4/17/centos-apache-rails-and-thin
http://blog.fiveruns.com/2008/9/24/rails-automation-at-slicehost
http://elwoodicious.com/2008/07/15/nginx-haproxy-thin-fastcgi-php5-load-balanced-rails-with-php-support/
edit:
The asker pointed out that this will not allow respawning. I suggested forking in the init script and disowning the process so init doesn't hang (it might fork() the script itself, will check), and then creating an infinite loop that waits on the server process to die and restarts it.
edit2:
It seems init will fork the script. Just a loop should do it.
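A minimal sketch of that loop, assuming thin is run in the foreground (no -d flag) so the script can tell when the server exits:
#!/bin/bash
# Restart thin whenever it dies; init only has to keep this script alive.
while true; do
    /usr/local/bin/thin -a 127.0.0.1 -e production \
        -l /var/log/thin/thin.log -P /var/run/thin/thin.pid \
        -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 start
    sleep 2  # avoid a tight respawn loop if thin exits immediately
done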

How do I execute a command every time after ssh'ing from one machine to another?

How do I execute a command every time after ssh'ing from one machine to another?
e.g.:
ssh mymachine
stty erase ^H
I'd rather just have "stty erase ^H" execute every time after my ssh connection completes.
This command can't simply go into my .zshrc file: for local sessions, I can't run the command (it screws up my keybindings), but I need it run for my remote sessions.
Put the commands in ~/.ssh/rc
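For example (a sketch; sshd runs ~/.ssh/rc on each login, before your shell starts):
# ~/.ssh/rc -- executed by sshd for every incoming ssh login
stty erase ^H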
You can put something like this into your shell's startup file:
if [ -n "$SSH_CONNECTION" ]
then
    stty erase ^H
fi
The -n test determines whether SSH_CONNECTION is set, which happens only when logged in via SSH.
If you're logging into a *nix box with a shell, why not put it in your shell startup?
.bashrc or .profile in most cases.
Assuming a Linux target, put it in your .profile.
Try adding the command below to the end of your ~/.bashrc. It will be executed upon logoff. Do you want this command executed only when logging off an ssh session? What about local sessions, etc.?
trap 'stty erase ^H; exit 0' 0
You could probably set up a .logout file from /etc/profile using this same pattern as well.
An answer for us screen/byobu users:
geocar's solution will not work, as screen will complain "Must be connected to a terminal." (This is probably because ~/.ssh/rc is processed before the shell is started; see the LOGIN PROCESS section of man 8 sshd.)
Robert's solution is better here, but since screen and byobu open their own bash instance, we need to avoid infinite recursion. So here is an adjusted, byobu-friendly version:
## RUN BYOBU IF SSH'D ##
## '''''''''''''''''' ##
# (but only if this is a login shell)
if shopt -q login_shell; then
    if [ -n "$SSH_CONNECTION" ]; then
        byobu
        exit
    fi
fi
Note that I also added exit after byobu, since IMO if you use byobu in the first place, you normally don't want to do anything outside of it.