Brand new to mSL and just playing around with trying to make a bot.
So I am trying to make something where, when a user says a certain word, they get +1 to a count against their name. However, they cannot then say the same word again to increase their count indefinitely; they must find a new word.
To ensure the words cannot be used multiple times, I am writing the words to a file. I then need to load these words, check whether the word the user has just said has already been used, and act appropriately.
on *:TEXT:&:#:{
  var %msg
  if ($1 == text1) { %msg = msg1 }
  elseif ($1 == text2) { %msg = msg2 }
  elseif ($1 == text3) { %msg = msg3 }
  else { return }
  msg # %msg
  var %keyword = $readini(keyword.ini,#,$nick)
  if (%keyword == $1) {
    msg # you already have this keyword! :(
  }
  else {
    var %count = $readini(cookies.ini,#,$nick)
    inc %count
    writeini cookies.ini # $nick %count
    writeini keyword.ini # $nick %keyword $+ , $+ $1
  }
}
The keyword.ini file looks like:
nickname=text1,text2
Is there any way in mSL that I can grab the list (already done in the code above) and then use something similar to .split(,) to divide up the words so I can run through them with a for/next loop?
Thanks in advance
EDIT:
I tried out the code below and it did work. However, I then deleted the file to test it out, and it never remade the file despite the writeini. I even added a writeini keyword.ini line at the start of the script to make sure the file is present before any text is written; it still didn't create it.
on *:TEXT:&:#:{
  var %msg
  if ($1 == text1) { %msg = msg1 }
  elseif ($1 == text2) { %msg = msg2 }
  elseif ($1 == text3) { %msg = msg3 }
  else { return }
  msg # %msg
  var %i = 1, %keyword = $readini(keyword.ini,n,$chan,$nick), %cookie = $readini(cookies.ini,n,#,$nick)
  while (%i <= $numtok(%keyword, 44)) {
    if ($istok(%keyword, $1, 44)) {
      msg # you already have this keyword! :(
    }
    else {
      inc %cookie
      writeini cookies.ini $chan $nick %cookie
      msg # congrats! you found a keyword
      writeini keyword.ini $chan $nick $addtok(%keyword, $1, 44)
    }
    inc %i
  }
}
You're looking for mIRC's token identifiers. I would suggest reading the help files (/help token identifiers) to read more about them.
Use $istok() to check whether the line contains that keyword already:
if ($istok(%keyword, $1, 44)) // Keyword exists
Use $addtok() to add a new keyword to the line, and then write it to the file:
writeini keyword.ini # $nick $addtok(%keyword, $1, 44)
Use $numtok() and $gettok() to create a loop to read all the values:
var %i = 1, %keywords = $readini(keyword.ini, n, channel, nick)
while (%i <= $numtok(%keywords, 44)) {
  echo -a Keyword %i $+ : $gettok(%keywords, %i, 44)
  inc %i
}
Important note: always use the n switch with $readini() (like I did above) when reading data, especially when it's data that users can enter. Without it, $readini() will evaluate the contents (e.g., $me will be evaluated to your current nickname). Users can inject malicious code this way.
Edit for the inserted question: You are using a while loop to check whether they possess the cookie - it will loop once for every cookie they have (0 loops for no cookies). You don't need this while loop at all, since $istok(%keywords, $1, 44) will take all the keywords and return $true if $1 is in that list of tokens.
Just the following will suffice:
var %keywords = $readini(keyword.ini,n,$chan,$nick), %cookie = $readini(cookies.ini,n,#,$nick)
if ($istok(%keywords, $1, 44)) {
  ; the token $1 is in the list of tokens %keywords
  msg # you already have this cookie! :(
}
else {
  ; the token $1 did not appear in the list of tokens %keywords
  inc %cookie
  writeini cookies.ini $chan $nick %cookie
  writeini keyword.ini $chan $nick $addtok(%keywords, $1, 44)
}
I have a file called domain which contains some domains. For example:
google.com
facebook.com
...
yahoo.com
And I have another file called site which contains some sites URLs and numbers. For example:
image.google.com 10
map.google.com 8
...
photo.facebook.com 22
game.facebook.com 15
..
Now I want to total up the URL numbers for each domain. For example: google.com has 10+8. So I wrote an awk script like this:
BEGIN{
    while(getline dom < "./domain" > 0) {
        domain[dom]=0;
    }
    for(dom in domain) {
        while(getline < "./site" > 0) {
            if($1 ~/$dom$) #if $1 end with $dom {
                domain[dom]+=$2;
            }
        }
    }
}
But the code if($1 ~/$dom$) doesn't work the way I want, because the variable $dom inside the regular expression is interpreted literally. So, the first question is:
Is there any way to use the variable $dom in a regular expression?
Then, as I'm new to writing scripts:
Is there any better way to solve the problem I have?
awk can match against a variable if you don't use the // regex markers.
if ( $0 ~ regex ){ print $0; }
In this case, build up the required regex as a string
regex = dom"$"
Then match against the regex variable
if ( $1 ~ regex ) {
    domain[dom]+=$2;
}
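A tiny self-contained illustration of the difference (the strings below are made up for the example, not taken from the question's files):
BEGIN {
    dom   = "google.com"
    regex = dom "$"                      # regex built up as a string
    print ("map.google.com" ~ regex)     # 1: the name ends with the domain
    print ("map.yahoo.com"  ~ regex)     # 0: it does not
    print ("map.google.com" ~ /regex/)   # 0: /regex/ matches the literal text "regex"
}
Run it with something like awk -f demo.awk (the file name is arbitrary); no input file is needed since everything happens in the BEGIN block.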
First of all, the variable is dom, not $dom -- consider $ as an operator that fetches the field whose number is stored in the variable dom.
Secondly, awk will not interpolate variables between // -- whatever is in there is taken literally.
You want the match() function where the 2nd argument can be a string that is treated as the regular expression:
if (match($1, dom "$")) {...}
I would code a solution like:
awk '
# first file (domain): remember each domain, start its total at 0
FNR == NR {domain[$1] = 0; next}
# second file (site): add the count to the first domain that matches the end of the name
{
    for (dom in domain) {
        if (match($1, dom "$")) {
            domain[dom] += $2
            break
        }
    }
}
END {for (dom in domain) {print dom, domain[dom]}}
' domain site
One way using an awk script:
BEGIN {
    FS = "[. ]"
    OFS = "."
}

# first file: remember the first label of each domain (e.g. "google")
FNR == NR {
    domain[$1] = $0
    next
}

# second file: rebuild the domain part of the name and add up the counts
FNR < NR {
    if ($2 in domain) {
        for ( i = 2; i < NF; i++ ) {
            if ($i != "") {
                line = (line ? line OFS : "") $i
            }
        }
        total[line] += $NF
        line = ""
    }
}

END {
    for (i in total) {
        printf "%s\t%s\n", i, total[i]
    }
}
Run like:
awk -f script.awk domain.txt site.txt
Results:
facebook.com 37
google.com 18
You clearly want to read the site file once, not once per entry in domain. Fixing that, though, is trivial.
Equally, variables in awk (other than field references such as $0, $1, $2, ...) are not prefixed with $. In particular, $dom is the field whose number is stored in the variable dom (typically that will be $0, since a domain string converts to the number 0).
I think you need to find a way to get the domain from the data read from the site file. I'm not sure if you need to deal with sites with country domains such as bbc.co.uk as well as sites in the gTLDs (google.com etc). Assuming you are not dealing with country domains, you can use this:
BEGIN {
    while (getline dom < "./domain" > 0) domain[dom] = 0
    FS = "[ .]+"
    while (getline < "./site" > 0)
    {
        topdom = $(NF-2) "." $(NF-1)
        domain[topdom] += $NF
    }
    for (dom in domain) print dom " " domain[dom]
}
In the second while loop, there are NF fields; $NF contains the count, and $1 .. $(NF-1) contain components of the domain. So, topdom ends up containing the top domain name, which is then used to index into the array initialized in the first loop.
Given the data in the question (minus the lines of dots), the output is:
yahoo.com 0
facebook.com 37
google.com 18
The problem with the answers above is that you cannot use "metacharacters" (e.g. \< for a word boundary at the beginning of a word) if you use a string instead of a regular expression /.../.
If you had a domain xyz.com and two sites ab.xyz.com and cd.prefix_xyz.com, the counts of both site entries would be added to xyz.com.
Here's a solution using awk's pipe and the sed command:
...
for(dom in domain) {
    while(getline < "./site" > 0) {
        # let sed replace an occurrence of the domain at the end of the site
        cmd = "echo '" $1 "' | sed 's/\\<'" dom "'$/NO_VALID_DOM/'"
        cmd | getline x
        close(cmd)
        if (match(x, "NO_VALID_DOM")) {
            domain[dom]+=$2;
        }
    }
    close("./site") # this is missing in the original code
}
...
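For what it's worth, a possible alternative to the echo | sed pipeline (my own sketch, not part of the original answer) is to build the anchor into the dynamic regex itself: a doubled backslash in the string becomes a literal \. in the regex, so cd.prefix_xyz.com no longer matches xyz.com. The dots inside dom itself are left unescaped here, matching the question's approach:
BEGIN {
    dom = "xyz.com"
    re  = "(^|\\.)" dom "$"            # "\\." in the string is \. in the regex
    print ("ab.xyz.com" ~ re)          # 1: ".xyz.com" sits at the end of the name
    print ("cd.prefix_xyz.com" ~ re)   # 0: "_" precedes "xyz.com", so no match
}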
I have a dictionary dict with records separated by ":" and data fields by new lines, for example:
:one
1
:two
2
:three
3
:four
4
Now I want awk to substitute all occurrences of each record's key with its value in the input
file, e.g.
onetwotwotwoone
two
threetwoone
four
My first awk script looked like this and works just fine:
BEGIN { RS = ":" ; FS = "\n"}
NR == FNR {
    rep[$1] = $2
    next
}
{
    for (key in rep)
        gsub(key,rep[key])
    print
}
giving me:
12221
2
321
4
Unfortunately, another dict file contains some characters that are special in regular expressions, so I have to escape them in my script. But after moving key and rep[key] into strings (which can then be scanned for characters that need escaping), the script only substitutes the second record in the dict. Why? And how do I solve it?
Here's the current second part of the script:
{
    for (key in rep)
        orig=key
        trans=rep[key]
        gsub(/[\]\[^$.*?+{}\\()|]/, "\\\\&", orig)
        gsub(orig,trans)
    print
}
All scripts are run by awk -f translate.awk dict input
Thanks in advance!
Your fundamental problem is using strings in regexp and backreference contexts when you don't want them and then trying to escape the metacharacters in your strings to disable the characters that you're enabling by using them in those contexts. If you want strings, use them in string contexts, that's all.
You don't want this:
gsub(regexp,backreference-enabled-string)
You want something more like this:
index(...,string) substr(string)
I think this is what you're trying to do:
$ cat tst.awk
BEGIN { FS = ":" }
NR == FNR {
    if ( NR%2 ) {
        key = $2
    }
    else {
        rep[key] = $0
    }
    next
}
{
    for ( key in rep ) {
        head = ""
        tail = $0
        while ( start = index(tail,key) ) {
            head = head substr(tail,1,start-1) rep[key]
            tail = substr(tail,start+length(key))
        }
        $0 = head tail
    }
    print
}
$ awk -f tst.awk dict file
12221
2
321
4
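A small aside illustrating the two points above (made-up strings only, not from the question's files): index() takes its search string literally, while gsub() treats the pattern as a regex and re-interprets & in the replacement:
BEGIN {
    key = "a.c"
    print index("abc a.c", key)   # 5: index() treats the "." literally
    print ("abc" ~ key)           # 1: used as a regex, "." matches any character

    s = "one"
    gsub(/one/, "x&x", s)         # "&" in the replacement stands for the matched text
    print s                       # prints "xonex", not "x&x"
}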
Never mind, sorry for asking....
Just some missing braces...?!
{
    for (key in rep)
    {
        orig=key
        trans=rep[key]
        gsub(/[\]\[^$.*?+{}\\()|]/, "\\\\&", orig)
        gsub(orig,trans)
    }
    print
}
works like a charm.
I'm trying to see whether or not there is an entry in an ini file with a user's nick as the key. If not, make an entry; if it exists, post an error message.
var %previous = $readini(numbers.ini,Number,$nick)
if(%previous != $null) {
  msg $chan $nick , you have already written %previous .
}
else {
  writeini numbers.ini Number $nick $2
  msg $chan $nick has written $2.
}
What's happening with the script above is that %previous is never $null, and I can't find documented anywhere what $readini returns if a key is not found.
$ini(numbers.ini, Number, $nick) will return a number N (indicating that the item is the Nth item in that section) if it exists. If it does not exist, it will return $null.
In your case, you'll want something along the lines of
if ($ini(numbers.ini, Number, $nick) != $null) {
  msg $chan $nick , you have already written $readini(numbers.ini, Number, $nick)
}
else {
  writeini numbers.ini Number $nick $2
  msg $chan $nick has written $2.
}
The script was working. I added some comments, renamed it, and then submitted it. Today my instructor told me it doesn't work and gives the error: awk: 1: unexpected character '.'
The script is supposed to read a name from the command line and return the student information for that name.
I checked it just now and, surprisingly, it does give me that error.
I run it with a command like this:
scriptName -v name="aname" -f filename
What is the problem, and which part of my code causes it?
#!/usr/bin/awk
BEGIN{
    tmp=name;
    nameIsValid;
    if (name && tolower(name) eq ~/^[a-z]+$/ )
    {
        inputName=tolower(name)
        nameIsValid++;
    }
    else
    {
        print "you have not entered the student name"
        printf "Enter the student's name: "
        getline inputName < "-"
        tmp=inputName;
        if (tolower(inputName) eq ~/^[a-z]+$/)
        {
            tmpName=inputName
            nameIsValid++
        }
        else
        {
            print "Enter a valid name!"
            exit
        }
    }
    inputName=tolower(inputName)
    FS=":"
}
{
    if($1=="Student Number")
    {
        split ($0,header,FS)
    }
    if ($1 ~/^[0-9]+$/ && length($1)==8)
    {
        split($2,names," ")
        if (tolower(names[1]) == inputName || tolower(names[2])==inputName )
        {
            counter++
            for (i=1;i<=NF;i++)
            {
                printf"%s:%s ",header[i], $i
            }
            printf "\n"
        }
    }
}
END{
    if (counter == 0 && nameIsValid)
    {
        printf "There is no record for the %-10s\n" , tmp
    }
}
Here are the steps to fix the script:
Get rid of all those spurious NULL statements (trailing semi-colons at the end of lines).
Get rid of the unset variable eq (it is NOT an equality operator!) from all of your comparisons.
Cleanup the indenting.
Get rid of that first non-functional nameIsValid; statement.
Change printf "\n" to the simpler print "".
Get rid of the useless ,FS arg to split().
Change name && tolower(name) ~ /^[a-z]+$/ to just the second part of that condition since if that matches then of course name is populated.
Get rid of all of those tolower()s and use character classes instead of explicit a-z ranges.
Get rid of the tmp variable.
Simplify your BEGIN logic.
Get rid of the unnecessary nameIsValid variable completely.
Make the awk body a bit more awk-like
And here's the result (untested since no sample input/output posted):
BEGIN {
    if (name !~ /^[[:alpha:]]+$/ ) {
        print "you have not entered the student name"
        printf "Enter the student's name: "
        getline name < "-"
    }
    if (name ~ /^[[:alpha:]]+$/) {
        inputName=tolower(name)
        FS=":"
    }
    else {
        print "Enter a valid name!"
        exit
    }
}
$1=="Student Number" { split ($0,header) }
$1 ~ /^[[:digit:]]+$/ && length($1)==8 {
    split(tolower($2),names," ")
    if (names[1]==inputName || names[2]==inputName ) {
        counter++
        for (i=1;i<=NF;i++) {
            printf "%s:%s ",header[i], $i
        }
        print ""
    }
}
END {
    if (counter == 0 && inputName) {
        printf "There is no record for the %-10s\n" , name
    }
}
I changed the shebang line to:
#!/usr/bin/awk -f
and then on the command line didn't use -f. It is working now.
Run the script in the following way:
awk -f script_name.awk input_file.txt
This seems to suppress the warnings and errors.
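In other words, either of the following invocations (the file names here are assumed for the example) avoids the unexpected character '.' error, because in both cases awk is told that the script is a program file rather than program text:
awk -v name="aname" -f script.awk students.txt
./script.awk -v name="aname" students.txt    # with "#!/usr/bin/awk -f" as the first line and the script made executable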
In my case, the problem was resetting the IFS variable to IFS="," as suggested in this answer for splitting a string into an array. So I reset the IFS variable and got my code to work.
IFS=', '
read -r -a array <<< "$string"
IFS=' ' # reset IFS
Say you have records in a text file which look like this:
header
data1
data2
data3
I would like to delete the whole record if data1 is a given string. I presume this needs awk, which I do not know.
Awk can handle these multiline records by setting the record separator to the empty string:
BEGIN { RS = ""; ORS = "\n\n" }
$2 == "some string" { next } # skip this record
{ print } # print (non-skipped) record
You can save this in a file (eg remove.awk) and execute it with awk -f remove.awk data.txt > newdata.txt
This assumes your data is of the following format, with records separated by blank lines:
header
data
....

header
data
...
If there are no blank lines between the records, you need to manually split the records (this is with 4 lines per record):
{ a[++i] = $0 }
i == 2 && a[i] == "some string" { skip = 1 }
i == 4 && ! skip { for (j = 1; j <= 4; j++) print a[j] }   # use j so the record counter i is not clobbered
i == 4 { skip = 0; i = 0 }
Without knowing what output you desired, and with insufficient sample input:
awk 'BEGIN{RS=""}!/data1/' file