AppleScript: Running terminal commands and saving output to variables

So, I'm trying to write a script to remove wireless networks and their associated keychain credentials.
tell application "Terminal"
activate
string mywifi
set mywifi to "test"
set mywifi to do script ("networksetup -listallhardwareports | grep -A 1 'Wi-Fi' | grep -v 'Hardware' | sed -e 's/'Device:\ '//g'")
do script "networksetup -removepreferredwirelessnetwork $mywifi NETWORK1"
do script "security delete-generic-password NETWORK1"
delay 2
#do script "networksetup -removepreferredwirelessnetwork $mywifi NETWORK2"
#do script "security delete-generic-password Network2"
delay2
#do script "networksetup -removepreferredwirelessnetwork $mywifi Network3"
#do script "security delete-generic-password Network3"
delay 2
#do script "networksetup -removepreferredwirelessnetwork $mywifi Network4"
#do script "security delete-generic-password Network4"
delay 2
#do script "networksetup -removepreferredwirelessnetwork $mywifi Network5"
#do script "security delete-generic-password Network5"
delay 2
end tell
quit
Where I'm running into trouble is setting that variable with the output of that command. The command runs in Terminal, though whenever I attempt to compile it, the following error is thrown:
Syntax Error: Expected """ but found unknown token
It flags this right after 'Device:\ ', between the \ and the '.
I have not been able to figure out what is missing. If I add a " between them, it just drops the terminal to >.
This is straight up my first foray into AppleScript, but not my first language. I think I've been staring at it too long.

This is not intended to be a complete answer, because I'm not too familiar with the security command, although I believe you'll need to preface it with sudo to modify the keychain. There's lots of good info on the Internet covering the security command. I read quite a bit, but there's nothing in my keychain I want to remove, so I'm not able to test in this respect.
The following covers the networksetup command, which may also need sudo when run in Terminal; in AppleScript, add with administrator privileges at the end of the do shell script command, if necessary, when removing preferred wireless networks.
It looks like you're trying to remove all preferred wireless networks, and this can be done directly in Terminal with the following command:
networksetup -removeallpreferredwirelessnetworks $(networksetup -listallhardwareports | awk '/Wi-Fi/{getline; print $2}')
If you want to do the same thing in AppleScript, e.g.:
do shell script "networksetup -removeallpreferredwirelessnetworks $(networksetup -listallhardwareports | awk '/Wi-Fi/{getline; print $2}')"
If you want to use it in a bash script, then use the line above with a bash shebang, e.g.:
#!/bin/bash
networksetup -removeallpreferredwirelessnetworks $(networksetup -listallhardwareports | awk '/Wi-Fi/{getline; print $2}')
Save it in a plain text file without an extension, and make it executable, e.g. chmod u+x filename where filename is whatever you saved it as. Then to use it in Terminal, ./filename or /path/to/filename, if it's not saved in a location defined within the PATH environment variable.
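For example, if you saved it as removewifi (a name chosen here purely for illustration), the steps would be:
chmod u+x removewifi
./removewifi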
If you're looking to do it just for specific networks, then instead of a line for each network, you can loop through a list (a rough sketch follows). If you need help with that, let us know.
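For example, a rough bash sketch of such a loop, reusing the network names from your script and the security usage as you have it (the exact security options, and whether sudo is needed, may need adjusting):
#!/bin/bash
# Get the Wi-Fi device name, e.g. en0
wifidevice=$(networksetup -listallhardwareports | awk '/Wi-Fi/{getline; print $2}')
# Loop over the specific networks instead of writing a line per network
for network in NETWORK1 NETWORK2 NETWORK3 NETWORK4 NETWORK5
do
    networksetup -removepreferredwirelessnetwork "$wifidevice" "$network"
    security delete-generic-password "$network"
done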

Related

Unable to exit from SSH when executed from TCLSH

I have a hard requirement of logging into a terminal via SSH from the TCL console and relaunching a tcl script from that terminal. For this I use the exec command, and it does get executed. The only problem is that it doesn't return to the parent code.
I have automated SSH login and it works fine from a bash/csh terminal.
But from the TCL console, the following happens.
Simple example:
exec ssh hostname pwd
puts "Done"
When I execute this code in TCL, "Done" never gets printed. I just get the output of pwd and that's it.
I need to loop SSH into multiple terminals and run TCL jobs on hardware, but the loop gets stuck after executing the first SSH.
I searched the internet for answers and was not able to find any. Please help.
There could be a lot of issues going on here. Running ssh with an explicit command (pwd) will usually default to not allocating a tty (ssh -T) and will run the remote shell in non-interactive mode. And the output of a command called from exec is not normally echoed to standard output, so I would not expect you to see the output if you call it from a script. You have to print the result of exec to see the output of the pwd command. Also, different shell startup scripts are run on the remote host depending on which shell the account is set up with and whether it is an interactive or non-interactive shell. It could be .bashrc, .bash_profile, .profile, .cshrc, etc., and if the script behaves differently when it has a tty vs. when it doesn't, that could explain differing behavior between a bash/csh shell and the TCL console.
Without having access to your system, it is hard for me to troubleshoot. I would start with a script like this:
set result [exec ssh -T hostname pwd]
puts "result = $result"
puts "Done."
Then I would try changing the -T to a -t and trying again. If the output of "pwd" is appearing before the "result =" line, then you can tell that the command is writing the result to a tty instead of standard output, and that's useful information for troubleshooting.

Script Stops after doing SSH

When I am doing SSH to some machine inside the for loop, it does the SSH but is not able to execute further.
Code is like:
string=c01.test.cloud.com,c02.test.cloud.com
for i in $(echo $string | sed "s/,/ /g")
do
ssh -t -t AppAccount@$i
cd a/b/c
str2=x,y,z
done
I take it from your question that you expect cd a/b/c to run on a remote server? That's not what this script is doing. The call to ssh opens an SSH tunnel, and provides you an interactive terminal connection. It then waits for that connection to terminate. (I suspect if you pressed Control-D, the script would continue.) Your use of -t -t here is particularly strange. Why do you want to force a remote pty? This is making the problem worse (not that much, since it won't work anyway, but this seems the opposite of what you'd want).
I think this is the script you meant:
string=c01.test.cloud.com,c02.test.cloud.com
for i in $(echo $string | sed "s/,/ /g")
do
ssh AppAccount@$i 'cd a/b/c; str2=x,y,z'
done
(This won't do anything of course, but I assume your real script has more to it than setting a shell variable and exiting.) The point is that you need to pass the script you want to run as a parameter to ssh. Otherwise it's going to spawn an interactive shell and wait for you to close it.
Note that if your script is very complicated, it can be very inconvenient to stick it all in a single-quoted string. If your internal script is in its own file, a simple way to handle this is with bash -s which reads a script from stdin:
cat some_script | ssh server 'bash -s'
You can also use bash Here docs to achieve the same thing, but that is likely getting too fancy for this use.
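For illustration, a here-doc version of the same idea (reusing the names from the question) would look like:
ssh AppAccount@$i 'bash -s' <<'EOF'
cd a/b/c
str2=x,y,z
# ... whatever else the real script needs to do remotely ...
EOF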

Unable to run a postgresql script from bash

I am learning the shell language. I have created a shell script whose function is to log into the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I manually run the sql file, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
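With the variables expanded, the effective command is something along these lines (the values here are invented for illustration, and $DB_PATH is assumed to point at the psql client):
psql -U autoqa -p 5432 -f auto_qa_db_sync.sql autoqa_rpt_production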
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type right at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), one command will be run after the previous one terminates.
What you wanted to do is to run the client and issue a "\i auto_qa_db_sync.sql" command in it.
What you did was to run the client, and after the client terminated, issue that command in Bash.
You should read about Bash pipelines; these are the way to run programs and feed text into them. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
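Equivalently, a here-document can feed the same \i line to the client (again using the variable names from your script):
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'EOF'
\i auto_qa_db_sync.sql
EOF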
Hope that helps to understand.

Redirect stderr to stdout in C shell

When I run the following command in csh, I get nothing, but it works in bash.
Is there any equivalent in csh which can redirect the standard error to standard out?
somecommand 2>&1
The csh shell has never been known for its extensive ability to manipulate file handles in the redirection process.
You can redirect both standard output and error to a file with:
xxx >& filename
but that's not quite what you were after, redirecting standard error to the current standard output.
However, if your underlying operating system exposes the standard output of a process in the file system (as Linux does with /dev/stdout), you can use that method as follows:
xxx >& /dev/stdout
This will force both standard output and standard error to go to the same place as the current standard output, effectively what you have with the bash redirection, 2>&1.
Just keep in mind this isn't a csh feature. If you run on an operating system that doesn't expose standard output as a file, you can't use this method.
However, there is another method. You can combine the two streams into one if you send it to a pipeline with |&, then all you need to do is find a pipeline component that writes its standard input to its standard output. In case you're unaware of such a thing, that's exactly what cat does if you don't give it any arguments. Hence, you can achieve your ends in this specific case with:
xxx |& cat
Of course, there's also nothing stopping you from running bash (assuming it's on the system somewhere) within a csh script to give you the added capabilities. Then you can use the rich redirections of that shell for the more complex cases where csh may struggle.
Let's explore this in more detail. First, create an executable echo_err that will write a string to stderr:
#include <stdio.h>
int main (int argc, char *argv[]) {
fprintf (stderr, "stderr (%s)\n", (argc > 1) ? argv[1] : "?");
return 0;
}
Then a control script test.csh which will show it in action:
#!/usr/bin/csh
ps -ef ; echo ; echo $$ ; echo
echo 'stdout (csh)'
./echo_err csh
bash -c "( echo 'stdout (bash)' ; ./echo_err bash ) 2>&1"
The echo of the PID and ps are simply so you can ensure it's csh running this script. When you run this script with:
./test.csh >test.out 2>test.err
(the initial redirection is set up by bash before csh starts running the script), and examine the out/err files, you see:
test.out:
UID PID PPID TTY STIME COMMAND
pax 5708 5364 cons0 11:31:14 /usr/bin/ps
pax 5364 7364 cons0 11:31:13 /usr/bin/tcsh
pax 7364 1 cons0 10:44:30 /usr/bin/bash
5364
stdout (csh)
stdout (bash)
stderr (bash)
test.err:
stderr (csh)
You can see there that the test.csh process is running in the C shell, and that calling bash from within there gives you the full bash power of redirection.
The 2>&1 in the bash command quite easily lets you redirect standard error to the current standard output (as desired) without prior knowledge of where standard output is currently going.
I object to the above answer and provide my own. csh DOES have this capability, and here is how it's done:
xxx |& some_exec # will pipe merged output to your some_exec
or
xxx |& cat > filename
or if you just want it to merge streams (to stdout) and not redirect to a file or some_exec:
xxx |& tee /dev/null
As paxdiablo said, you can use >& to redirect both stdout and stderr. However, if you want them separated, you can use the following:
(command > stdoutfile) >& stderrfile
As indicated, the above will redirect stdout to stdoutfile and stderr to stderrfile.
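For instance, reusing the echo_err test program from the earlier answer purely as an illustration, in csh:
( ./echo_err demo > out.txt ) >& err.txt
cat out.txt    # empty, since echo_err writes nothing to stdout
cat err.txt    # contains: stderr (demo)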
xxx >& filename
Or do this to see everything on the screen and have it go to your file:
xxx |& tee ./logfile
What about just
xxx >& /dev/stdout
???
I think this is the correct answer for csh.
xxx >/dev/stderr
Note most csh are really tcsh in modern environments:
rmockler> ls -latr /usr/bin/csh
lrwxrwxrwx 1 root root 9 2011-05-03 13:40 /usr/bin/csh -> /bin/tcsh
Using a backtick-embedded statement to portray this, as follows:
echo "`echo 'standard out1'` `echo 'error out1' >/dev/stderr` `echo 'standard out2'`" | tee -a /tmp/test.txt ; cat /tmp/test.txt
If this works for you, please bump it up to 1. The other suggestions don't work for my csh environment.

Opening multiple shells with tcsh script

Currently working with KDE 3.5.
Here is what I would eventually like to do to help my workflow:
Have a script that:
Opens multiple konsole shells
Renames each shell
This is what I have so far:
#!/bin/tcsh -fv
set KPID = `ps -ef | grep konsole | grep -v grep | awk '{print $2}' | tr "\n" " "`
dcop konsole-$KPID konsole newSession
The dcop command works just fine on the command line (substituting the variable for the actual pid), but when I run it through the script, it gives an 'object not accessible' error. No other errors are present.
I've made sure permissions are ok (777) and even added sudo with it, but no luck.
As for the second part, again I have it working on the command line:
dcop $KONSOLE_DCOP_SESSION renameSession "name"
This, however, only works for the active (working) shell, and I am not sure how to get it to do it for the others. I have not put this part in the script yet, as I am still working on the first part. Any suggestions would be great.
Thanks.
If it's a script, it doesn't need to be tcsh. See http://www.grymoire.com/Unix/CshTop10.txt
But if you want to pass $KPID into your script, use $1 in your script (argument #1), and call it with
script $KPID
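where the script itself might be a minimal wrapper along these lines (the filename is up to you; the dcop call is the one from your question):
#!/bin/sh
# $1 is the konsole PID passed in as argument #1
dcop konsole-$1 konsole newSession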