I am trying to write a simple ksh script to look at failed login attempts and the IP addresses they came from. I am dumping lastb output to a file and want to extract just the username that was tried and the IP address it came from.
The lastb output:
user ssh:notty 143.244.175.142 Mon Jun 21 01:04 - 01:04 (00:00)
user ssh:notty 143.244.175.142 Mon Jun 21 00:57 - 00:57 (00:00)
My script looks like this:
#!/usr/bin/ksh
FailedLogins="$HOME/sshattempts.txt"
if [ -e "$FailedLogins" ]
then
echo "yes file exists"
fi
# while loop
while IFS= read user tty ip
do
printf "$user $ip"
done <"$FailedLogins"
You could use something like this:
#!/bin/ksh
if [[ -r $HOME/sshattempts.txt ]]; then
print "yes file $HOME/sshattempts.txt exists"
else
exit
fi
while read line; do
parts=( $line )
print "${parts[0]} ${parts[2]}"
done < $HOME/sshattempts.txt
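For the two sample lines above, this prints:
user 143.244.175.142
user 143.244.175.142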
During processing, each line will be parsed into 10 separate (space-delimited) fields. When there are not enough variables (3 in this case), read stuffs the 'rest of the line' into the last variable. (Note that the IFS= prefix in your loop actually disables field splitting altogether, so drop it; the assignments below assume the default IFS.)
Processing the first line from sshattempts.txt will cause the following variable assignments:
user='user'
tty='ssh:notty'
ip='143.244.175.142 Mon Jun 21 01:04 - 01:04 (00:00)'
While you could edit the code to provide the read with 10 variables, you can make use of the 'stuff the rest of the line into the last variable' rule by adding just one more variable to hold all of the fields that come after the ip field (and dropping the IFS=), eg:
while read -r user tty ip rest_of_line
Now when processing the first line from sshattempts.txt you should see:
user='user'
tty='ssh:notty'
ip='143.244.175.142'
rest_of_line='Mon Jun 21 01:04 - 01:04 (00:00)'
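Putting it together, a minimal corrected version of your loop could look like this (a sketch, relying on the default IFS and using read -r so backslashes stay literal):
#!/usr/bin/ksh
FailedLogins="$HOME/sshattempts.txt"
[[ -r $FailedLogins ]] || exit 1
# with four variables, everything after the ip lands in rest_of_line
while read -r user tty ip rest_of_line
do
    print "$user $ip"
done < "$FailedLogins"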
Or simply print the first and third field of each line with awk:
awk '{print $1, $3}' "$FailedLogins"
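If you also want to see how often each user/IP pair was tried, a natural follow-up is:
awk '{print $1, $3}' "$FailedLogins" | sort | uniq -c | sort -rn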
I have a lot of scripts that read input parameters. I need to increment every input parameter number by 1: $1 needs to become $2, $2 needs to become $3, and so on. Is there a Unix command that can make this replacement in one go for all the scripts, instead of going into each script and doing it manually? I am new to Unix and not sure if there is a way to do it. Below is an example. I appreciate any help with this.
Before changes:
#!/bin/ksh
hv_bus_date="CAST('$1' AS DATE FORMAT 'YYYYMMDD')"
hv_octs_sys_wid=$2
hv_act_id=$3
After changes:
#!/bin/ksh
hv_bus_date="CAST('$2' AS DATE FORMAT 'YYYYMMDD')"
hv_octs_sys_wid=$3
hv_act_id=$4
EDIT: To save the output into the input file(s) themselves without GNU awk, try the following (but run my 1st solution first to check that the output looks correct).
Let's say we have scripts whose values we need to change.
-rw-rw-r-- 1 ubuntu ubuntu 83 Dec 8 14:34 file2.ksh
-rw-rw-r-- 1 ubuntu ubuntu 83 Dec 8 14:34 file1.ksh
Our awk script is named script.ksh so that it does NOT come under its own radar (the glob below only matches file*sh) :)
cat script.ksh
awk '
FNR==1{
  # new input file: start a new output file and queue a rename for the END block
  close(out)
  out="out"++count
  rename=(rename?rename ORS:"") "mv " out OFS FILENAME
}
match($0,/\$[0-9]+/){
  # only the first $N on each line is rewritten
  before=substr($0,1,RSTART)            # everything up to and including the $
  value=substr($0,RSTART+1,RLENGTH-1)   # the digits only
  rest=substr($0,RSTART+RLENGTH)        # everything after the match
  if(value<9){
    value++
  }
  $0=before value rest
}
{
  print > (out)                         # write every line, changed or not
}
END{
  if(rename){
    system(rename)
  }
}
' file*sh
Now when we run the above script with ./script.ksh, we can see that the contents of, for example, file1.ksh have been changed as follows.
BEFORE:
cat file1.ksh
hv_bus_date="CAST('$1' AS DATE FORMAT 'YYYYMMDD')"
hv_octs_sys_wid=$2
hv_act_id=$3
AFTER:
cat file1.ksh
hv_bus_date="CAST('$2' AS DATE FORMAT 'YYYYMMDD')"
hv_octs_sys_wid=$3
hv_act_id=$4
1st solution: Could you please try the following, assuming your script names end with a .sh extension. This will NOT save the output into the input file(s); it only prints the output to the terminal so you can check whether it looks fine.
awk '
match($0,/\$[0-9]+/){
  before=substr($0,1,RSTART)            # everything up to and including the $
  value=substr($0,RSTART+1,RLENGTH-1)   # the digits only
  rest=substr($0,RSTART+RLENGTH)        # everything after the match
  if(value<9){
    value++
  }
  $0=before value rest
}
{
  print
}
' *sh
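Running it against the BEFORE version of file1.ksh shown above, for instance, the terminal output should match the AFTER block:
hv_bus_date="CAST('$2' AS DATE FORMAT 'YYYYMMDD')"
hv_octs_sys_wid=$3
hv_act_id=$4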
2nd solution (with newer versions of GNU awk only): Once you are happy with the results of the above command, and if you have gawk, try the following; it saves the output into the input file(s) themselves. This needs gawk 4.1.0 or later.
awk -i inplace -v INPLACE_SUFFIX=.bak '
match($0,/\$[0-9]+/){
  before=substr($0,1,RSTART)
  value=substr($0,RSTART+1,RLENGTH-1)
  rest=substr($0,RSTART+RLENGTH)
  if(value<9){
    value++
  }
  $0=before value rest
}
{
  print
}
' *sh
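Because INPLACE_SUFFIX is set, gawk keeps each original file as a FILENAME.bak backup, so if the edit goes wrong you can restore everything with something like:
for f in *sh; do mv -- "$f.bak" "$f"; done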
I would like to get the string that comes after a matching pattern and exclude everything else. For example, say the log contains:
Nov 17 21:52:06 web01-san roundcube: <he1v330n> User dxxssjksdfd [121.177.26.200]; \
Message for undisclosed-recipients:, stanpiatt#yahoo.com
Nov 17 21:48:26 web01-san roundcube: <fqu8k29l> User cxcnjdfdssd [121.177.26.200]; \
Message for undisclosed-recipients:, stanpiatt#yahoo.com
So I would like to get ONLY the string after the pattern User and exclude everything else; the output should be:
User dxxssjksdfd
User cxcnjdfdssd
I've tried grep -Po 'User\K[^\s]*' but it doesn't give what I want. How can I do that?
$ cat infile
Nov 17 21:52:06 web01-san roundcube: <he1v330n> User dxxssjksdfd [121.177.26.200]; \
Message for undisclosed-recipients:, stanpiatt#yahoo.com
Nov 17 21:48:26 web01-san roundcube: <fqu8k29l> User cxcnjdfdssd [121.177.26.200]; \
Message for undisclosed-recipients:, stanpiatt#yahoo.com
Using grep
$ grep -Po 'User [^\s]*' infile
User dxxssjksdfd
User cxcnjdfdssd
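As an aside, the original \K attempt printed empty matches because [^\s]* is allowed to match zero characters at the space right after User. Consuming the space before \K yields just the username, if that is ever what you want:
$ grep -Po 'User \K\S+' infile
dxxssjksdfd
cxcnjdfdssd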
Using awk
$ awk 'match($0,/User [^ ]*/){ print substr($0, RSTART,RLENGTH)}' infile
User dxxssjksdfd
User cxcnjdfdssd
Using GNU awk
$ awk 'match($0,/User [^ ]*/,arr){ print arr[0]}' infile
User dxxssjksdfd
User cxcnjdfdssd
Explanation:
/User [^\s]*/
User matches the characters 'User ' literally (case sensitive).
[^\s] matches any single character that is not whitespace (\s is equal to [\r\n\t\f\v ]).
* is a greedy quantifier: it matches the preceding token between zero and unlimited times, as many times as possible, giving back as needed.
Solution 1: The following awk should help you here.
awk -v RS=" " '/User/{getline;print "User",$0}' Input_file
Output will be as follows.
User dxxssjksdfd
User cxcnjdfdssd
Solution 2: You could also use the following, which goes through the fields of each line.
awk '{for(i=1;i<=NF;i++){if($i ~ /User/){print $i,$(i+1)}}}' Input_file
Solution 3: Using awk's sub function.
awk 'sub(/.*User/,""){print "User",$1}' Input_file
I have installed a Witty Pi 2 on my RPi 3, but I want to export the temperature from it to a specific file. I can run the script wittyPi.sh, and then I need to press 8 or Ctrl+C to quit:
>>> Current temperature: 33.50°C / 92.3°F
>>> Your system time is: Sat 01 Jul 2017 20:29:46 CEST
>>> Your RTC time is: Sat 01 Jul 2017 20:29:46 CEST
Now you can:
1. Write system time to RTC
2. Write RTC time to system
3. Synchronize time
4. Schedule next shutdown [25 15:04:00]
5. Schedule next startup [25 15:05:00]
6. Choose schedule script
7. Reset data...
8. Exit
What do you want to do? (1~8)
All I want is to export the first line. I tried:
sudo ./wittyPi.sh | grep Current | awk '{ print $4 }' > temp.log
but it asks me for a number and then gives the temp in temp.log. Is it possible to insert some extra code to generate the Ctrl+C (or something similar) at the end?
Just use a here string to provide the input:
$ cat tst.sh
echo "Type something:" >&2
read foo
echo "$foo"
$ ./tst.sh <<<stuff | sed 's/u/X/'
Type something:
stXff
and if your shell doesn't support here strings then use a here document instead:
$ ./tst.sh <<EOF | sed 's/u/X/'
> stuff
> EOF
Type something:
stXff
So you'd do (you never need grep when you're using awk):
sudo ./wittyPi.sh <<<8 | awk '/Current/{ print $4 }' > temp.log
or:
sudo ./wittyPi.sh <<<8 | awk 'NR==1{ print $4 }' > temp.log
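And if your shell lacks here strings, the same menu answer can be supplied with a here document:
sudo ./wittyPi.sh <<EOF | awk '/Current/{ print $4 }' > temp.log
8
EOF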
Maybe a better way is to take a look at the get_temperature() function in the "utilities.sh" file, and see how it is implemented. It only involves some I2C communications.
I have a /redir directory on a website where a .htaccess file redirects various static addresses to other addresses for the purposes of counting the number of times a particular link is accessed. I want to write a script to help count that data.
I already have two scripts in place. The first appends data to a log.total file from the access.log.0 file at about 2:00 AM daily via a cron job. The second is a script that can be run interactively to generate the counts, given a minimum and maximum date.
The cron script:
#!/bin/bash
rm -f log.tmp
grep "GET /redir/.*" access.log.0 | cut -d " " -f4,5,7 > log.tmp
cat log.tmp >> log.total
rm log.tmp
This generates data that looks like:
[21/Aug/2012:00:31:27 -0700] /redir/abc.html
[21/Aug/2012:00:31:35 -0700] /redir/def.html
[21/Aug/2012:00:31:35 -0700] /redir/abc.html
[21/Aug/2012:00:31:40 -0700] /redir/ghi.html
[21/Aug/2012:00:31:46 -0700] /redir/123.html
[21/Aug/2012:00:31:58 -0700] /redir/def.html
[21/Aug/2012:00:32:07 -0700] /redir/abc.html
etc...
Now, I want a script that I can run using readLogs.sh "log.total" "1 week ago" "today" which will count the number of times each file is accessed between one week ago and today.
I've posted my script below which does the job, but there are some limitations, which are outlined there. The output can be in any readable format.
It's easier if you convert the dates to UNIX timestamps for the range comparisons. You could add them as a second field to your file:
[21/Aug/2012:00:31:27 -0700] 1345534287 /redir/abc.html
(You can get the UNIX timestamp using date +%s --date "date string". I assume you would like to keep the human readable timestamp, but you could replace it with the timestamp if you want.)
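A sketch of one way the nightly cron job could add that field (assuming GNU date and exactly the log layout shown above):
#!/bin/bash
# Hypothetical variant of the cron script that inserts a UNIX timestamp
# after the timezone; rewrites [21/Aug/2012:00:31:27 into a form date accepts.
grep "GET /redir/.*" access.log.0 | cut -d " " -f4,5,7 |
while read -r dt tz path; do
    d="${dt#\[}"        # 21/Aug/2012:00:31:27
    d="${d//\// }"      # 21 Aug 2012:00:31:27
    d="${d/:/ }"        # 21 Aug 2012 00:31:27
    ts=$( date +%s --date "$d ${tz%]}" )
    echo "$dt $tz $ts $path"
done >> log.total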
Here's a modified script that assumes your log file is modified as suggested; the script also uses bash parameter expansion to make it a bit shorter:
[Update: modified to exit once the ending timestamp is reached.]
#!/bin/bash
# :- means to use the RHS if the LHS is null or unset
FILE="${1:-log.total}"
MINTIME="${2:-1 day ago}"
MAXTIME="${3:-now}"
START=$( date +%s --date "$MINTIME" )
END=$( date +%s --date "$MAXTIME" )
# No need for cut; just have awk print only the field you want
# Field 1 is the date/time
# Field 2 is the timezone
# Field 3 is the timestamp you added
# Field 4 is the path
awk -v start=$START -v end=$END '$3 > end { exit } $3 >= start {print $4}' "$FILE" | \
sort | uniq -c | sort
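One optional tweak: the final sort compares the count column lexically, so a count of 10 sorts before 9. If that matters, make the last sort numeric:
awk -v start="$START" -v end="$END" '$3 > end { exit } $3 >= start {print $4}' "$FILE" | sort | uniq -c | sort -n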
Here's the script I came up with. Its limitation is that if an entered date does not appear in the logs, it doesn't work properly. For example, if I enter "1 day ago" as the start date but there were no accesses yesterday, it will start counting from the beginning of the file.
#!/bin/bash
if [ "$1" ]; then
FILE="$1"
else
FILE="log.total"
fi
#if test -t 0; then
#INPUT=`cat $FILE`
#else
#INPUT="$(cat -)"
#fi
if [ "$2" ]; then
MINTIME="$2"
else
MINTIME="1 day ago"
fi
if [ "$3" ]; then
MAXTIME="$3"
else
MAXTIME="now"
fi
START=`grep -m 1 -n $(date --date="$MINTIME" +%d/%b/%Y) "$FILE" | cut -d: -f1`
if [ -z "$START" ]; then
START=0
fi
END=`grep -m 1 -n $(date --date="$MAXTIME" +%d/%b/%Y) "$FILE" | cut -d: -f1`
if [ -z "$END" ]; then
END=`wc -l < "$FILE"`
fi
awk "NR>=$START && NR<$END {print}" "$FILE" | cut -d" " -f3 | sort | uniq -c | sort
The output looks like this:
1 /redir/123.html
1 /redir/ghi.html
2 /redir/def.html
3 /redir/abc.html
I currently have a request to build a shell script to get some data from a table using SQL (Oracle). The query I'm running returns a number of rows. Is there a way to use something like a result set?
Currently, I'm redirecting the output to a file, but I'm not able to reuse the data for further processing.
Edit: Thanks for the reply Gene. The result file looks like:
UNIX_PID 37165
----------
PARTNER_ID prad
--------------------------------------------------------------------------------
XML_FILE
--------------------------------------------------------------------------------
/mnt/publish/gbl/backup/pradeep1/27241-20090722/kumarelec2.xml
pradeep1
/mnt/soar_publish/gbl/backup/pradeep1/11089-20090723/dataonly.xml
UNIX_PID 27654
----------
PARTNER_ID swam
--------------------------------------------------------------------------------
XML_FILE
--------------------------------------------------------------------------------
smariswam2
/mnt/publish/gbl/backup/smariswam2/10235-20090929/swam2.xml
There are multiple rows like this. My requirement is to do this using only a shell script.
I need to take each of the pid and check if the process is running, which I can take care of.
My question is: how do I loop over each PID and get the corresponding partner_id and xml_file name? Since this is a flat file, how can I get the exact corresponding values?
Your question is pretty short on specifics (a sample of the file to which you've redirected your query output would be helpful, as well as some idea of what you actually want to do with the data), but as a general approach, once you have your query results in a file, why not use the power of your scripting language of choice (ruby and perl are both good choices) to parse the file and act on each row?
Here is one suggested approach. It wasn't clear from the sample you posted, so I am assuming that this is actually what your sample file looks like:
UNIX_PID 37165 PARTNER_ID prad XML_FILE /mnt/publish/gbl/backup/pradeep1/27241-20090722/kumarelec2.xml pradeep1 /mnt/soar_publish/gbl/backup/pradeep1/11089-20090723/dataonly.xml
UNIX_PID 27654 PARTNER_ID swam XML_FILE smariswam2 /mnt/publish/gbl/backup/smariswam2/10235-20090929/swam2.xml
I am also assuming that:
- There is a line feed at the end of the last line of your file.
- The columns are separated by a single space.
Here is a suggested bash script (not optimal, I'm sure, but functional):
#! /bin/bash
cat myOutputData.txt |
while read -r line
do
    myPID=`echo "$line" | awk '{print $2}'`
    isRunning=`ps -p "$myPID" | grep "$myPID"`
    if [ -n "$isRunning" ]
    then
        echo "PARTNER_ID `echo "$line" | awk '{print $4}'`"
        echo "XML_FILE `echo "$line" | awk '{print $6}'`"
    fi
done
The script iterates through every line (row) of the input file. It uses awk to extract column 2 (the PID), and then does a check (using ps -p) to see if the process is running. If it is, it uses awk again to pull out and echo two fields from the file (PARTNER ID and XML FILE). You should be able to adapt the script further to suit your needs. Read up on awk if you want to use different column delimiters or do additional text processing.
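As a side note, on Linux the procps ps exits non-zero when the given PID is not found, so the grep back-check can be replaced by testing ps directly; a sketch of the equivalent if block, assuming that ps behavior:
if ps -p "$myPID" > /dev/null
then
    echo "PARTNER_ID `echo "$line" | awk '{print $4}'`"
    echo "XML_FILE `echo "$line" | awk '{print $6}'`"
fi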
Things get a little trickier if the output file contains one row for each data element (as you indicated). A good approach here is to use a simple state mechanism within the script and "remember" whether the most recently seen PID is running. If it is, then any data elements that appear before the next PID should be printed. Here is a commented script that does just that with a file in the format you provided. Note that you must have a line feed at the end of the last line of input data, or the last line will be dropped.
#! /bin/bash
cat myOutputData.txt |
while read -r line
do
    # Extract the first (myKey) and second (myValue) words from the input line
    myKey=`echo "$line" | awk '{print $1}'`
    myValue=`echo "$line" | awk '{print $2}'`
    # Take action based on the type of line this is
    case "$myKey" in
    "UNIX_PID")
        # Determine whether the specified PID is running
        isRunning=`ps -p "$myValue" | grep "$myValue"`
        ;;
    "PARTNER_ID")
        # Print the specified partner ID if the PID is running
        if [ -n "$isRunning" ]
        then
            echo "PARTNER_ID $myValue"
        fi
        ;;
    *)
        # Check to see if this line represents a file name, and print it
        # if the PID is running; skip blank lines, the XML_FILE header,
        # and the ruler lines made of hyphens
        inputLineLength=${#line}
        if (( inputLineLength > 0 )) && [ "$line" != "XML_FILE" ] && [ -n "$isRunning" ]
        then
            isHyphens=`expr "$line" : -`
            if [ "$isHyphens" -ne "1" ]
            then
                echo "XML_FILE $line"
            fi
        fi
        ;;
    esac
done
I think that we are well into custom software development territory now so I will leave it at that. You should have enough here to customize the script to your liking. Good luck!