How to make a variable out of the output of a unix command? - variables

I'm trying to have a variable $totalLines that stores the total lines in a file (given as input $1).
I'm trying to do something like this:
totalLines= grep -c *.* $1
But Unix doesn't like that.
I've tried enclosing it in parentheses and square brackets, but that doesn't work either. This has got to be super simple, but I've been searching the web and not finding a page or forum that clearly states it.
Sorry to trouble you guys with such an easy one.

There are two ways to achieve this (the $( ) form is preferred in modern shells; backticks are the older equivalent):
totalLines=$(grep -c *.* $1)
or
totalLines=`grep -c *.* $1`

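As a sketch of how this fits together, assuming the script receives a filename as $1 (note the quoting around "$1", and that wc -l counts all lines while grep -c counts only matching ones):

```shell
#!/bin/sh
# Hypothetical script: count the lines of the file passed as $1.
# Quoting "$1" protects filenames containing spaces.
totalLines=$(wc -l < "$1")   # command substitution: capture stdout
echo "Total lines: $totalLines"
```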

Related

Converting linux commands to URI/CGI encoded. A better way?

I am testing some PHP apps for injectable commands. I have to convert my commands to a URI/CGI encoded format. I am wondering if there is a better way to do it.
When I want to include a ping (to test if the app is, in fact, executing from an injection) I am converting it as follows.
hURL -X --esc ";ping localhost -c 1" | sed -e 's/\\x/\%/g'
Here is the output.
%3b%20%70%69%6e%67%20%6c%6f%63%61%6c%68%6f%73%74%20%2d%63%20%31
Works perfectly. The code is injected and the logs show it being handled as expected.
QUESTION: Is there a better way to convert to the above. I think I am over complicating things.
You could possibly use an out-of-the-box library to do the escaping; it may be a little easier on the eye:
$ echo ';ping localhost -c 1' | perl -ne 'use URI::Escape; print(uri_escape($_) . "\n");'
%3Bping%20localhost%20-c%201%0A
Note that this leaves legitimate URL characters unescaped (unlike your fully hex-encoded version), so I'm not sure it entirely answers your question.
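An alternative sketch without Perl, assuming python3 is available: urllib.parse.quote with safe='' percent-encodes everything except unreserved characters (letters, digits, -._~), much like the Perl one-liner above.

```shell
# Hypothetical alternative using python3's standard library.
# safe='' means even '/' would be encoded; unreserved characters
# are left literal rather than hex-encoded.
printf '%s' ';ping localhost -c 1' | \
  python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read(), safe=''))"
```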

How to extract the strings in double quotes for localization

I'm trying to extract the strings for localization. There are so many files where some of the strings are tagged as NSLocalizedStrings, and some of them are not.
I'm able to grab the NSLocalizedStrings using ibtool and genstrings, but I'm unable to extract the plain strings without NSLocalizedString.
I'm not good at regex, but I came up with this: "[^(]@\""
and with the help of grep:
grep -i -r -I "[^(]@\"" * > out.txt
It worked, and all the strings were grabbed into a txt file, but the problem is, if my code contains a line like:
..... initWithTitle:@"New Sketch".....
I only expect grep to grab the @"New Sketch" part, but it grabs the whole line.
So in the out.txt file, I see initWithTitle:@"New Sketch", along with some unwanted lines.
How can I write the regex to grab only the strings in double quotes ?
I tried the grep command with the regex mentioned here, but it gave me a syntax error.
For example, I tried:
grep -i -r -I (["'])(?:(?=(\\?))\2.)*?\1 * > out.txt
and it gave me
-bash: syntax error near unexpected token `('
In Xcode, open your project and go to Editor -> Export For Localization... It will create a folder of files in XML (XLIFF) format; everything that was marked for localization will be extracted there, so there's no need to parse it yourself.
If you want to go the hard way, you can then parse those files the way you're trying to now. It will also include Storyboard strings, btw.
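To answer the grep part directly: a sketch using grep -o, which prints only the matching part of each line instead of the whole line, assuming Objective-C string literals of the form @"..." with no embedded quotes.

```shell
# -o prints only the matched text, -h suppresses filenames,
# -r recurses, -I skips binary files. The pattern matches a
# simple @"..." literal.
grep -rhoI '@"[^"]*"' . > out.txt
```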

Split a batch of text files using pattern

I have a directory of almost a thousand html files. Each file needs to be split up into multiple text files, based on a recurring pattern (a heading). I am on a windows machine, using GnuWin32 tools.
I've found a way to do this, for a single file:
csplit 1.html -b "%04d.txt" /"Words in heading"/ {*}
But I don't know how to repeat this operation over the entire set of HTML files. This:
csplit *.html -b "%04d.txt" /"Words in heading"/ {*}
doesn't work, and neither does this:
for %i in (*.html) do csplit *.html -b "%04d.txt" /"Words in heading"/ {*}
Both result in an invalid pattern error. Help would be much appreciated!
The order of options and arguments is important with csplit, and it won't accept multiple files. Its help output gets you there:
% csplit --help
Usage: csplit [OPTION]... FILE PATTERN...
I’m surprised your first example works for the single file. It really should be changed to:
% csplit -b "%04d.txt" 1.html "/Words in heading/" "{*}"
         ^^^^^^^^^^^^^ ^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^
         OPTS/ARGS     FILE   PATTERNS
Notice also that I changed your quoting to be around the arguments. You probably also need to quote your last "{*}".
I’m not sure what shell you’re using, but if that for-loop syntax is appropriate, then the fixed command should work in the loop.
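For example, in a POSIX shell (e.g. Git Bash or MSYS on Windows; this is a sketch, not tested on GnuWin32 specifically), the loop could look like this. The key point is giving each input file its own output prefix via -f, so the numbered chunks from different files don't overwrite each other; in cmd.exe you would do the same with for %i in (*.html), passing "%i" as the file argument.

```shell
# Sketch: split every .html file at each occurrence of the heading.
# -f gives each source file a distinct output prefix, -s silences
# the byte-count output, and "{*}" repeats the split for every match.
for f in *.html; do
  csplit -s -f "${f%.html}-" -b "%04d.txt" "$f" "/Words in heading/" "{*}"
done
```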

ssh tail output lines only with keyword

I'm trying to tail a large file in an ssh command prompt, but I need to filter it so it only displays lines that contain a particular keyword in them.
I'm using this command currently to tail.
# tail /usr/local/apache/logs/access_log
If possible please let me know what I would add to this command to accomplish this.
You can pipe the output of tail into grep to filter it so it only displays lines that contain a particular keyword:
tail /usr/local/apache/logs/access_log | grep "keyword"
where you'd replace keyword with your keyword.
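If the log is still being written to, the same pipeline works with tail -f; adding --line-buffered (a GNU grep option) makes grep flush each matching line immediately instead of buffering its output.

```shell
# Follow the log and show only matching lines as they arrive.
# --line-buffered is GNU grep-specific; omit it if unavailable.
tail -f /usr/local/apache/logs/access_log | grep --line-buffered "keyword"
```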

Apache grep big log file

I need to parse Apache log file to look for specific suspicious patterns (like SQL injections).
For example I'm looking for id='%20or%201=1;
I am using grep to check the log file for this pattern (and others) and because these logs are huge it takes a long amount of time
Here my command:
grep 'id=' Apache.log | egrep "' or|'%20"
Is there a better or a faster method or command I need use to make the search faster?
For starters, you don't need to pipe your grep output to egrep. egrep provides a superset of grep's regular expression parsing, so you can just do this:
egrep "id='( or|%20)" apache.log
Calling egrep is identical to calling grep -E.
That may get you a little performance increase. If you can look for fixed strings rather than regular expressions, that might also help. You can tell grep to look for a fixed string with the -F option:
grep -F "id='%20or" apache.log
But using fixed strings you lose a lot of flexibility.
I assume most of your time is spent reading the data from disk (CPU usage is not maxed out), in which case you can't speed up the query itself. You could, however, try logging only the interesting lines to a separate file...
Are you looking for grep -E "id=(' or|'%20)" apache.log?
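One more speed-oriented sketch: when the patterns can be expressed as fixed strings, grep -F with a pattern file plus the C locale (LC_ALL=C) is often noticeably faster than regex matching on large logs. The strings below are just illustrative.

```shell
# Fixed-string search: each line of patterns.txt is one literal.
printf "%s\n" "' or" "'%20" > patterns.txt
# LC_ALL=C avoids multibyte-locale overhead in GNU grep;
# the second grep keeps only lines that also contain "id=".
LC_ALL=C grep -F -f patterns.txt apache.log | grep -F "id="
```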