sudo and permission issues driving me crazy - permissions

I have written a bash script that prepares gpio27 so that additional scripts can set it to 0 or 1, on a Raspberry Pi Zero 2 with Buster installed.
The script is the following:
#!/bin/bash
echo "27" > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio27/direction
1- If I run this script as user "pi", I get a permission denied error (NOK):
/home/pi/bin/prep27: line 3: /sys/class/gpio/gpio27/direction: Permission denied
2- If I then run the offending line echo "out" > /sys/class/gpio/gpio27/direction by itself as user pi (no sudo), I get no error (OK):
echo "out" > /sys/class/gpio/gpio27/direction
3- If I replace the third line of the script with sudo echo "out" > /sys/class/gpio/gpio27/direction and execute the script as pi, I also get a permission denied error (NOK):
#!/bin/bash
echo "27" > /sys/class/gpio/export
sudo echo "out" > /sys/class/gpio/gpio27/direction
4- If I run the whole script with sudo as user pi, I get no error (OK):
sudo /home/pi/bin/prep27
Could you help me understand these permission issues with the script and its contents?
Thanks very much

I believe the problem you're facing has to do with the way the Raspberry Pi allocates permissions after you've exported the pin. That works through a mechanism called udev, which changes the permissions on the new "file" /sys/class/gpio/gpio27/direction after it has been created by the preceding export line. The catch is that this takes a short moment, so a script that writes to direction immediately after exporting loses the race; by the time you retype the line by hand (your case 2), udev has long finished. Note also that sudo in your third variant cannot help: sudo elevates only the echo command, while the redirection with > is still performed by your unprivileged shell, which is what produces the "Permission denied".
To work around the race, add a delay after the echo "27" > /sys/class/gpio/export line (a second or so should do), or retry the direction line until it succeeds.
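A minimal sketch of the retry variant, assuming the user pi is in the gpio group (the default on Raspbian); the ten attempts and the 0.2-second step are arbitrary choices of mine:
#!/bin/bash
# export the pin; silence the error if it is already exported
echo "27" > /sys/class/gpio/export 2>/dev/null
# retry until udev has adjusted group and permissions on the new gpio27 files;
# 2>/dev/null comes first so a failed redirection is silenced as well
for i in $(seq 1 10); do
    if echo "out" 2>/dev/null > /sys/class/gpio/gpio27/direction; then
        break
    fi
    sleep 0.2
done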

Related

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which is executed remotely and checks whether the mentioned file exists; if it does, it creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is that this job always fails, whether the file exists or not, with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So:
How can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case there are some genuine failures?
The job can only see the exit status of the ssh instruction, and ssh in turn reports the exit status of the remote command: since [ -f ... ] && cp ... exits with 1 whenever the file does not exist, the job fails. You can force an instruction to always succeed by appending || true to it.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
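Following that suggestion, a sketch of the job with || true appended (same variable names as in the question; note that this also hides genuine cp failures):
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt) || true'
If you would rather keep real failures fatal, inverting the test does that: [ ! -f "${PATH}/test_1.txt" ] || cp -v ... succeeds when the file is absent but still propagates the exit status of cp.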

ssh to remote server with arguments to run scripts

I have lots of data that needs to be processed and access to 3 separate remote servers. The logic is to split the data crunching among the 3 servers instead of running it on a single one. Note that all three remote servers can point to a single directory, which is where I keep the master scripts that process all of the data. The problem I am having is carrying my arguments over when I call the different bash scripts.
For example, I have the master script which looks something like:
# processing stuff
# more stuff
# call the first script
$scriptdir/step1.csh $date $time $name
Within step1.csh, if I have something very simple, where I connect to one of the remote servers and write the hostname to a text file, such as:
#!/bin/bash
ssh name@hostname bash -c '
echo `hostname` > host.txt
'
I get the desired outcome: host.txt contains the hostname of the remote server I connected to. However, if step1.csh looks like:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name@hostname bash '
echo `hostname` > host.txt
echo ${mydate} > host.txt
'
I get an error saying 'mydate: undefined variable'.
Furthermore, if I do something along the lines of:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name#hostname "python /path/to/somewhere/to/run/${mydate}/and/${mytime}
It still runs on the local server, not the remote one. What am I missing here?
So the first part:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name@hostname bash '
echo `hostname` > host.txt
echo ${mydate} > host.txt
'
The solution is:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
# inside an unquoted here-document, ${mydate} expands locally (which is what we
# want here), while the escaped \$(hostname) is evaluated on the remote side
ssh -T name@hostname << EOF
echo \$(hostname) > host.txt
echo ${mydate} >> host.txt
EOF
However, I am still having an issue: when I try to run a Python script on the remote server, it always runs on the local server.
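One way to extend the answer's heredoc approach to the Python part (a sketch, reusing the placeholder path and names from the question):
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
# everything between the EOF markers is sent to the remote shell;
# ${mydate} and ${mytime} expand locally before ssh runs
ssh -T name@hostname << EOF
python /path/to/somewhere/to/run/${mydate}/and/${mytime}
EOF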

at command in ubuntu apache error 'You do not have permission to use at'

I am pretty new to PHP and Ubuntu. I have 2 servers set up, one for development and one for staging. On the dev machine I can use the at command without a problem, but on staging I get a permissions error. The at.deny (and at.allow) files are identical, so it must be another permissions issue.
Any clues?
I see that on the staging server I can only use the at command as root. How can I fix this so that I can use the at command as www-data? Again, I checked the at.allow and at.deny files; they are not the problem here.
1) Check whether the file /etc/at.allow exists.
If it does, just add your user on a new line.
If it does not, look for your user in /etc/at.deny and remove or comment out that line.
2) Restart the "at" daemon:
sudo service atd restart
3) Check:
at -l
or
sudo -u myuser at -l
The error should no longer appear.
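Since the stated goal is to run at as www-data, here is a sketch of the same check for that user; note that on stock Ubuntu www-data is commonly listed in /etc/at.deny, which is worth re-checking:
# allow www-data explicitly (when /etc/at.allow exists, it takes precedence over at.deny)
echo "www-data" | sudo tee -a /etc/at.allow
# verify by scheduling a harmless job as www-data and listing the queue
echo "true" | sudo -u www-data at now + 1 minute
sudo -u www-data at -l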

AIX script hangs when using >/dev/null 2>&1

I am trying to run a script in AIX to execute another script on a remote server. In addition to running the remote script, I need to send the stdout to /dev/null. The same command works fine on another server, but when I run it on the current server it hangs. Any advice?
su - test -c "rsh testserver /scripts/testme" 2>&1 >/dev/null
In your comment you write that a menu is presented when the user logs in.
Let's say this is done in the .profile file, using echoes and a read command.
Redirecting the output does not skip the read command in the menu code: the menu still waits for your input, so the su command appears to hang.
Can you change your .profile or .bashrc so that the menu is skipped when the shell is started through a su command? When the menu is invoked during a normal login, you can look at the return code of tty; when you use the su command from the command line, tty still sees your terminal, so you need another test.
When your root shell is ksh, you can try the following:
# when started via su -c, ksh shows up in ps as "-ksh -c ...", so skip the menu then
if [[ "$(ps -fp $$)" != *"-ksh -c "* ]]; then
    echo "Now I should call the Menu"
fi
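A shell-agnostic alternative is to test whether the shell is interactive at all: the special parameter $- contains the letter i only in interactive shells, and su - test -c "..." starts a non-interactive one. In .profile that could look like the sketch below, where show_menu stands in for whatever code currently prints the menu:
# only present the menu in interactive logins
case $- in
    *i*) show_menu ;;  # interactive: menu is wanted
    *)   :         ;;  # non-interactive (su -c, rsh, scp): skip it
esac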

Unable to run a postgresql script from bash

I am learning shell scripting. I have created a shell script whose function is to log in to the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\i auto_qa_db_sync.sql
After running the above script, I get the following error:
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I run the .sql file manually it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type right at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do is to run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you did was to run the client, and after the client terminated, issue that command in Bash.
You should read about Bash pipelines: they are the way to run a program and feed text into its standard input. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
Hope that helps.
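If more meta-commands ever need to run around the \i, a here-document may read more naturally than echo (a sketch using the same environment variables as above):
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'SQL'
\i auto_qa_db_sync.sql
SQL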