I have a question that I cannot seem to find an answer for. With Windows, you could ping a given domain name and capture the IP as a variable, even if no reply was received. I believe the command looked something like this:
for /f "tokens=1,2 delims=[]" %%A in ('ping /n 1 /w 1 domain.com ^| find "Pinging"') do set ipaddress=%%B
This is exactly what I am trying to do with a bash script rather than a batch file. I've stumbled across a lot of questions that are really close, but not quite what I am looking for, and I cannot seem to figure out the best way to go about this. How can I capture an IP address using a bash script?
PS: I'm still fairly new to the Linux environment.
I am pulling the answer out of the question and posting it here for future users.
ip=$(ping -c 1 "$input" | gawk -F'[()]' '/PING/{print $2}')
You can use the ip variable in your script at this point.
As an example, when pinging google.com you get this result:
echo $(ping -c 1 google.com | gawk -F'[()]' '/PING/{print $2}')
74.125.225.96
And at this point the ip variable would contain 74.125.225.96
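A minimal sketch of how this might look inside a script, with a guard for names that don't resolve (the argument handling and error message are my additions, not part of the original answer):
#!/usr/bin/env bash
# Sketch: resolve a hostname to an IP via ping's own output.
host="${1:-google.com}"
# PING output starts with a line like:
#   PING google.com (74.125.225.96) 56(84) bytes of data.
# so splitting on parentheses puts the IP in field 2.
ip=$(ping -c 1 "$host" | gawk -F'[()]' '/PING/{print $2}')
if [ -n "$ip" ]; then
    echo "$host resolved to $ip"
else
    echo "could not resolve $host" >&2
    exit 1
fi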
I am looking for a way using ssh-keyscan to possibly define a port within the keyscan file specified with the -f flag instead of having to specify it on the command line.
The following is how I currently do it:
/usr/bin/ssh-keyscan -f /home/ansible/.ssh/test-scanner-keyscan_hosts -p 16005 >> /home/ansible/.ssh/known_hosts;
Contents of the keyscan file:
mainserver1.org,shortname1,shortname2
mainserver2.org,shortname1,shortname2
The issue is, each "mainserver" has a unique SSH port that is different from the others. While this works for mainserver1, since its port is 16005, mainserver2 will fail because its port is 17005. The only way around it currently is to run a separate ssh-keyscan command for each server, specifying its particular port.
Instead, I want to be able to specify the ports within the file, and/or use a method that can scan a list of hosts with differing ports. The issue is, there doesn't seem to be any way to do that.
I tried the following within the keyscan file, and it does not work:
mainserver1.org:16005,shortname1,shortname2
mainserver2.org:17005,shortname1,shortname2
Is there any way to make this work, with ssh-keyscan or otherwise, or some other way within Ansible to make this function like I hope it does? Otherwise, I have to do an ssh-keyscan task for EVERY server because the SSH ports are all different.
Thanks in advance!
You're actually welcome to use that format, and then use it to drive the actual implementation, since ssh-keyscan -f accepts "-" to read from stdin; thus:
scan_em() {
    local fn="$1"
    local port
    # Collect each distinct ":PORT" suffix that appears anywhere in the file.
    for port in $(grep -Eo ':[0-9]+' "$fn" | sed 's/://' | sort -u); do
        # Strip the ":port" suffix from the matching lines and feed them to
        # ssh-keyscan on stdin (-f -), scanning that port.
        sed -ne "s/:${port}//p" "$fn" | ssh-keyscan -f - -p "$port"
    done
}
scan_em /home/ansible/.ssh/test-scanner-keyscan_hosts >> /home/ansible/.ssh/known_hosts
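As a quick sanity check (a sketch using your two example lines; the /tmp path is just for illustration):
printf '%s\n' 'mainserver1.org:16005,shortname1,shortname2' \
    'mainserver2.org:17005,shortname1,shortname2' > /tmp/keyscan_hosts
scan_em /tmp/keyscan_hosts
# One pass scans mainserver1.org,shortname1,shortname2 on port 16005,
# the other scans mainserver2.org,shortname1,shortname2 on port 17005.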
I have ~100 accounts on a dedicated VPS. I'd like to see a list of all accounts, ordered by who has the biggest mail directory. I do not need to see details of individual mail accounts; just want to identify the cpanel accounts with the biggest Mail directory by KB. Hope that's clear! I have hunted around but haven't found an explicit answer to this question.
(I'm capable at running command lines in SSH but not an expert. I'd appreciate a clear and complete answer, if one exists. I like to understand what each part of the command is doing).
Thanks for your help :)
You can execute this line of piped commands in a WHM/cPanel server via SSH:
find /home -maxdepth 2 -type d | grep "/mail$" | xargs du -s | sort -n -r
where:
find /home -maxdepth 2 -type d: finds all directories inside the /home folder (usually where the cPanel account folders live), but only down to 2 levels of directories below /home, like /home/account1/mail or /home/account1/public_html.
grep "/mail$": filters the previous find result, keeping only the directories whose paths end in the string /mail, like /home/account1/mail or /home/account2/mail. cPanel uses that mail directory to store email accounts.
xargs du -s: computes the size of each of the directories selected by the previous grep.
sort -n -r: sorts the output of the previous xargs command numerically in reverse order, showing the biggest first.
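If your server has GNU coreutils (an assumption; the -h flags below are not universal), you can get human-readable sizes while keeping the same ordering, and limit the output to the top 20:
find /home -maxdepth 2 -type d | grep "/mail$" | xargs du -sh | sort -h -r | head -n 20
# du -sh prints sizes like 1.2G, sort -h sorts those human-readable
# numbers, and head -n 20 keeps only the 20 biggest mail directories.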
Let's say I have one file with the name of the computer and some other information, e.g.:
Computer1
There's another file with the IP address and some other information:
192.168.100.2
I have 2 greps for example:
grep -i computer /etc/hosts
grep -i ips /etc/hosts
They give me answers like:
Computer1
19.168.100.2
Well, I would like to get a file with headers and the information organized like this:
Name        Ip
oser1313    19.168.100.1
I'm quite lost; I have no idea how I could format this. I usually copy-paste it into Excel, but I don't want to do that anymore, and since I have to do this on several computers from a server, it would be great if I could format it automatically.
Just do something like this:
awk '
    { lc = tolower($0) }
    lc ~ /computer/ { name = $0 }
    lc ~ /ips/ { ip = $0 }
    END {
        print "Name", "Ip"
        print name, ip
    }
' /etc/hosts
The above is untested since you didn't provide a sample input file to test with; it's just mimicking what your grep commands do, but there may be a better way to do it if we knew what your input looked like.
I suppose that your two files have the same number of lines and that the line numbers match between one file and the other: if oser1313 is line n in the output of the first grep, then 19.168.100.1 is line n in the output of the second.
With that assumption it turns out to be a pretty simple bash script:
grep -i computer /etc/hosts > part1.dat
grep -i ips /etc/hosts > part2.dat
echo "Name,IP" > out.dat
paste -d"," part1.dat part2.dat >> out.dat
rm part1.dat part2.dat
Or a one-liner, as suggested in the comments:
printf "Name,IP\n$(grep -i computer /etc/hosts),$(grep -i ips /etc/hosts)\n" > out.dat
I connected to my Ubuntu server from my MacBook using SSH.
I want to know the IP address of the MacBook from the server.
How can I do that?
[edit] I would like to get the IP using bash.
I would comment this but I can't. We need more information: what language are you programming in, and what have you tried?
Edit: Here is what you are looking for. This answer was taken from Find the IP address of the client in an SSH session.
Please do a more thorough search for your problem before posting a question.
Check if there is an environment variable called:
$SSH_CLIENT
OR
$SSH_CONNECTION
(or any other environment variable) that gets set when the user logs in. Then process it using the user's login script.
Extract the IP:
$ echo $SSH_CLIENT | awk '{ print $1}'
1.2.3.4
$ echo $SSH_CONNECTION | awk '{print $1}'
1.2.3.4
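Since you asked for bash specifically: the first field can also be extracted without awk, using parameter expansion (a sketch; the fallback message is my addition):
# SSH_CLIENT looks like "1.2.3.4 50673 22"; strip everything after the first space.
client_ip="${SSH_CLIENT%% *}"
echo "${client_ip:-not connected via SSH}"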
I'm a grep and sed newbie, and have read through a bunch of answers on SO about grepping IPs in Apache logs, with no luck for my particular situation.
I have megs of error logs from bots and nefarious humans hitting a site, and I need to search through the logs and find the most common IPs so I can confirm they're bad and block them in .htaccess.
But my error logs don't have the IP as the first item on the line, as most Apache logs apparently do according to the other answers here on SO. In my logs, the IP appears within each line, in the format [client 123.456.78.90].
This older answer, Grepping logs for IP adresses, is exactly what I need, I think, as it "will print each IP... sorted prefixed with the count."
But according to the answerer, "It assumes the IP-address is the first thing on each line."
How can I modify the sed command from that answer to handle the IP format [client 123.456.78.90], rather than assuming the IP is the first thing on each line?
sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d access.log | sort | uniq -c
Update 8/25/14: this works, per Kent's answer below:
grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+' logfile | sort | uniq -c
Update 9/02/14: to sort by number of occurrences of each IP:
grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+' logfile | sort -n | uniq -c | sort -rn
grep is for Globally finding Regular Expressions on individual lines and Printing them (G/RE/P get it?).
sed is for Stream EDiting (SED get it?), i.e. making simple substitutions on individual lines.
For any other general text manipulation (including anything that spans multiple lines) you should use awk (named after 3 guys who ran out of imagination for naming tools).
awk '
    match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/) { cnt[substr($0,RSTART,RLENGTH)]++ }
    END { for (ip in cnt) print cnt[ip], ip }
' logfile
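The output is one line per distinct IP with its count first, in no particular order (these addresses and counts are invented for illustration); pipe it through sort -rn if you want the worst offenders first:
37 192.0.2.44
12 203.0.113.9
3 198.51.100.17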
Quick and dirty:
grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+' logfile | sort | uniq -c
A big difference between sed and grep: sed can change the input text (e.g. substitution), but grep can't. :-)
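And since the original question asked how to adapt the sed command to the [client 1.2.3.4] format, here is one way (a sketch, untested against your exact log layout):
# -n plus the trailing p prints only lines where the substitution matched,
# so lines without a [client ...] tag are silently dropped.
sed -n 's/.*\[client \([0-9.]*\)\].*/\1/p' logfile | sort | uniq -c | sort -rn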