I would like to specify these things:
Domain|Username|Password|Contact Email|Package|IP Address
Any advice would be appreciated.
Here is my script:
#!/bin/bash
# Read a file (/root/test1) whose lines contain domain, username and password
# separated by colons (domain:username:password).
for i in $(cat /root/test1); do
    domain=$(echo "$i" | cut -f1 -d:)
    un=$(echo "$i" | cut -f2 -d:)
    pw=$(echo "$i" | cut -f3 -d:)
    /scripts/wwwacct "$domain" "$un" "$pw" 0
done
Try using the following command in your script.
/scripts/wwwacct domain.com newuser password 0 x3 n n n 0 0 0 0 0 0
The above creates a new account with the domain called domain.com
It sets their username to newuser
It sets their password to password
Quota is unlimited
Theme is x3
Dedicated IP is not assigned
CGI is turned off
FrontPage is turned off
Maxftp is unlimited
The number of databases is unlimited
The number of email accounts they can create is unlimited
The amount of mailing lists is unlimited
The number of subdomains is unlimited
Bandwidth is unlimited.
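To cover the pipe-delimited layout from the original question (Domain|Username|Password|Contact Email|Package|IP Address), a loop like the following can be sketched. The file name accounts.txt is hypothetical, and the extra wwwacct arguments for contact email, package and IP vary by cPanel version, so this prints the command as a dry run; check /scripts/wwwacct's usage output before dropping the leading echo.

```shell
#!/bin/bash
# Sketch: read the pipe-delimited layout from the question
# (domain|username|password|email|package|ip) and build a wwwacct call.
# accounts.txt is a hypothetical input file; verify the argument order for
# your cPanel version before removing the "echo" dry-run guard.
while IFS='|' read -r domain user pass email package ip; do
    echo /scripts/wwwacct "$domain" "$user" "$pass" 0 x3 n n n 0 0 0 0 0 0
done < accounts.txt
```

Using IFS='|' with read avoids the word-splitting and cut subshells of the original loop, and also copes with fields that contain spaces.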
Is it possible to split a file into multiple gzip files in one line?
Let's say I have a very large file data.txt containing:
A somedata 1
B somedata 1
A somedata 2
C somedata 1
B somedata 2
I would like to split each into separate directory of gz files.
For example, if I didn't care about separating, I would do
cat data.txt | gzip -5 -c | split -d -a 3 -b 100000000 - one_dir/one_dir.gz.
And this will generate gz files of 100MB chunks under one_dir directory.
But what I want is to separate the lines based on the first column. So I would like to have, say, 3 different directories containing gz files of 100MB chunks for A, B and C respectively.
So the final directory will look like
A/
A.gz.000
A.gz.001
...
B/
B.gz.000
B.gz.001
...
C/
C.gz.000
C.gz.001
...
Can I do this in a one-liner using cat/awk/gzip/split? Can I also have it create the directories if they don't exist yet?
With awk:
awk '
!d[$1]++ {
system("mkdir -p "$1)
c[$1] = "gzip -5 -c|split -d -a 3 -b 100000000 - "$1"/"$1".gz."
}
{ print | c[$1] }
' data.txt
Assumes:
sufficiently few distinct values of $1 (there is an implementation-specific limit on how many pipes can be active simultaneously; e.g. popen() on my machine seems to allow 1020 pipes per process)
no problematic characters in $1
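As a quick sanity check, the script can be run on the five sample lines from the question; with input this small, each key ends up with a single (tiny) chunk:

```shell
# Create the sample data from the question and run the awk solution above.
printf '%s\n' 'A somedata 1' 'B somedata 1' 'A somedata 2' \
              'C somedata 1' 'B somedata 2' > data.txt
awk '
!d[$1]++ {
    system("mkdir -p "$1)
    c[$1] = "gzip -5 -c|split -d -a 3 -b 100000000 - "$1"/"$1".gz."
}
{ print | c[$1] }
' data.txt
# Each key now has its own directory with one chunk, e.g.:
zcat A/A.gz.000    # the two "A ..." lines, in input order
```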
Incorporating improvements suggested by @EdMorton:
If you have a sort that supports -s (so-called "stable sort"), you can remove the first limit above as only a single pipe will need to be active.
You can remove the second limit by suitable testing and quoting before you use $1. In particular, unescaped single-quotes will interfere with quoting in the constructed command; and forward-slash is not valid in a filename. (NUL (\0) is not allowed in a filename either but should never appear in a text file.)
sort -s -k1,1 data.txt | awk '
$1 ~ "/" {
print "Warning: unsafe character(s). Ignoring line",FNR >"/dev/stderr"
next
}
$1 != prev {
close(cmd)
prev = $1
# escape single-quote (\047) for use below
s = $1
gsub(/\047/,"\047\\\047\047",s)
system("mkdir -p -- \047"s"\047")
cmd = "gzip -5 -c|split -d -a 3 -b 100000000 -- - \047"s"/"s".gz.\047"
}
{ print | cmd }
'
Note that the code above still has gotchas:
for a path d1/d2/f:
the total length can't exceed getconf PATH_MAX d1/d2; and
the name part (f) can't exceed getconf NAME_MAX d1/d2
Hitting the NAME_MAX limit can be surprisingly easy: for example copying files onto an eCryptfs filesystem could reduce the limit from 255 to 143 characters.
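Both limits can be queried for a given directory with getconf, whose second argument is the path whose filesystem you are asking about:

```shell
# Ask the current directory's filesystem for its limits.
getconf NAME_MAX .    # longest allowed file name component, commonly 255
getconf PATH_MAX .    # longest allowed relative path below this directory
```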
I'm writing a simple program to add a contact to a file called "phonebook", but if the contact already exists, I want it to print "(first name last name) already exists" and not add it to the file. So far I've gotten the program to add the user, but it won't print that message and adds the duplicate entry anyway. How can I fix this?
#!/bin/bash
# Check that 5 arguments are passed
#
if [ "$#" -ne 5 ]
then
echo
echo "Usage: first_name last_name phone_no room_no building"
echo
exit 1
fi
first=$1
last=$2
phone=$3
room=$4
building=$5
# Count the number of times the input name is in add_phonebook
count=$( grep -i "^$last:$first:" add_phonebook | wc -l )
#echo $count
# Check that the name is in the phonebook
if [ "$count" -eq 1 ]
then
echo
echo "$first $last is already in the phonebook."
echo
exit 1
fi
# Add someone to the phone book
#
echo "$1 $2 $3 $4 $5" >> add_phonebook
# Exit Successfully
exit 0
Couple of things:
You should check whether the add_phonebook file exists before attempting to grep it; otherwise you get grep: add_phonebook: No such file or directory.
Your grep expression doesn't match the format of the file.
You are saving the file with spaces between the fields, but searching with a colon (:) between the names. You can either update the file format to use a colon to separate the fields, or update the grep expression to search on spaces. In addition, you save first_name last_name, but search for last_name first_name.
With space format:
count=$( grep -i "^$first[[:space:]]\+$last[[:space:]]" add_phonebook | wc -l )
I removed the tab separators from the echo line and used spaces, and now it counts properly.
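Beyond fixing the pattern, the count-then-compare logic can be collapsed into a single grep -q test, which also sidesteps the missing-file error. This is a sketch assuming the space-separated "first last phone room building" layout that the script's echo writes:

```shell
# Duplicate check assuming lines stored as "first last phone room building".
# grep -q exits 0 on the first match without printing anything.
if [ -f add_phonebook ] && grep -qi "^$first[[:space:]]\+$last[[:space:]]" add_phonebook
then
    echo "$first $last is already in the phonebook."
    exit 1
fi
echo "$first $last $phone $room $building" >> add_phonebook
```

grep -q is also faster on large files than piping through wc -l, since it stops at the first match.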
Updated:
Initial issue:
A while read loop was printing every line that it read.
Answer: use done <<< "$var"
Subsequent issue:
I need some explanation of this shell code.
I have this:
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
That gets results looking like this:
<ip1> <site1>
<ip2> <site2>
<ip3> <site3>
<ip4> <site4>
up to 5,000 IP addresses.
I wrote a while loop:
while [ `find $proc_dir -name snmpproc* | wc -l` -ge "$max_proc_snmpget" ];do
{
echo "sleeping, fping in progress";
sleep 1;
}
done
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
while read ip codesite;do
{
sendSNMPGET $ip $snmp_community $code_site &
}
done<<<"$temp_ip"
And the sendSNMPGET function is :
sendSNMPGET() {
touch $procdir/snmpproc.$$
hostname=`snmpget -v1 -c $2 $1 sysName.0`
if [ "$hostname" != "" ]
then
echo "hi test"
fi
rm -f $procdir/snmpproc.$$
}
The $max_proc_snmpget is set to 30
At execution time the read works (no more printing on screen), but the child processes seem to be disoriented:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
Why can't it handle this ?
If temp_ip contains the name of a file that you want to read, then use:
done<"$temp_ip"
In your case, it appears that temp_ip is not a file name but contains the actual data that you want. In that case, use:
done<<<"$temp_ip"
Take care that the variable is placed inside double quotes. That protects the data against the shell's word splitting, which would otherwise replace newline characters with spaces.
More details
In bash, an expression like <"$temp_ip" is called a redirection. In this case it means that the while loop will get its standard input from the file named by $temp_ip.
The expression <<<"$temp_ip" is called a here string. In this case, it means that the while loop will get its standard input from the data in the variable $temp_ip.
More information on both redirection and here strings in man bash.
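A minimal illustration of the difference, using made-up data in place of the mysql output:

```shell
# Here string: the variable's contents become stdin, newlines preserved.
temp_ip=$'10.0.0.1 site1\n10.0.0.2 site2'
while read -r ip codesite; do
    echo "got $ip for $codesite"
done <<< "$temp_ip"
# prints:
#   got 10.0.0.1 for site1
#   got 10.0.0.2 for site2

# Plain < would instead treat the variable as a file NAME:
# done < "$temp_ip"    # only correct if $temp_ip holds a path to a file
```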
Or you can parse the output of your initial command directly:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | \
while read ip codesite
do
...
done
If you want to improve the performance and run some of the 5,000 SNMPGETs in parallel, I would recommend using GNU Parallel like this:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | parallel -k -j 20 -N 2 sendSNMPGET {1} $snmp_community {2}
The -k will keep the parallel output in order. The -j 20 will run up to 20 SNMPGETs in parallel at a time. The -N 2 means take 2 parameters from the mysql output per job (i.e. ip and codesite). {1} and {2} are your ip and codesite parameters.
http://www.gnu.org/software/parallel/
I propose not storing the result in a variable but using it directly:
while read ip codesite
do
sendSNMPGET "$ip" "$snmp_community" "$code_site" &
done < <(
"$mysql" --skip-column-names -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" \
-e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
This way you start the mysql command in a subshell and use its output as input to the while loop (similar to piping which here also is an option).
But I see some problems with that code: If you really start each sendSNMPGET command in the background, you very quickly will put a massive load on your computer. For each line you read another active background process is started. This can slow down your machine to the point where it is rendered useless.
I propose to not run more than 20 background processes at a time.
As you don't seem to have liked my answer with GNU Parallel, I'll show you a very simplistic way of doing it in parallel without needing to install that...
#!/bin/bash
MAX=8
j=0
while read ip code
do
(sleep 5; echo $ip $code) & # Replace this with your SNMPGET
((j++))
if [ $j -eq $MAX ]; then
echo -n Pausing with $MAX processes...
j=0
wait
fi
done < file
wait
This starts up to 8 processes (you can change the number) and then waits for them to complete before starting another 8. Other respondents have already shown how to feed your mysql output into the loop in place of the second-to-last line of the script.
The key to this is the wait which will wait for all started processes to complete.
Good day! I have a standard Apache log file, and I'd like to extract a list of downloaded .m4a files in a specific directory, along with a count of how many times each was downloaded. I know how to do this for a single file, with:
grep filename.txt logfile | grep " 200" | wc -l
But that just gives me a single number, and I need to know each file name ahead of time.
What I'd like to get out is a sorted list of download counts and file names something along these lines:
650 /podcasts/12323.m4a
623 /podcasts/12329.m4a
601 /podcasts/12329.m4a
432 /podcasts/11521.m4a
And so on... Thanks!
Try with:
awk '$9 == 200 { print $7 }' access.log | sort | uniq -c | sort -rn
(where the file name is in the 7th and 200 in the 9th position of the log file)
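To restrict the count to .m4a files under one directory (using the /podcasts/ path from the question as the example) and list the busiest files first, a pattern match can be added to the same awk filter. This sketch assumes the common/combined log format, where $7 is the request path and $9 the status code:

```shell
# Count successful (200) downloads of .m4a files under /podcasts/,
# most-downloaded first.
awk '$9 == 200 && $7 ~ /^\/podcasts\/.*\.m4a$/ { print $7 }' access.log |
    sort | uniq -c | sort -rn
```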
I'm doing something like
zgrep "somepattern" access_log.X.gz
But I find that a lot of the entries are from the same IP and I'd like to count them as one.
I would use something like
zgrep "somepattern" access_log.X.gz | awk '{print $3}' | sort -u | wc -l
awk prints the field that contains the client IP address (I'm assuming it's the third field here; adjust the number to match your log format), then sort -u sorts the IP addresses and removes duplicates, and wc -l counts the remaining lines.
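If you also want to see how many matching requests each IP made, rather than only the number of distinct IPs, replace sort -u | wc -l with uniq -c, still assuming the IP is the third field:

```shell
# Per-IP counts of matching entries, busiest IPs first.
zgrep "somepattern" access_log.X.gz | awk '{ print $3 }' | sort | uniq -c | sort -rn
```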