Know how many lines of code you have in an Xcode project? - objective-c

I've got quite a big project and eventually I finished it. I'm just curious to know how many lines of code there are altogether in my project. I'm using Xcode 3.
So can you actually find out how many lines of code have been compiled?

Open up Terminal.app, go into your project's root directory, and run this command:
For Swift only:
find . \( -iname \*.swift \) -exec wc -l '{}' \+
For Obj-C only:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.h \) -exec wc -l '{}' \+
For Obj-C + Swift:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.h -o -iname \*.swift \) -exec wc -l '{}' \+
For Obj-C + Swift + C + C++:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.c -o -iname \*.cc -o -iname \*.h -o -iname \*.hh -o -iname \*.hpp -o -iname \*.cpp -o -iname \*.swift \) -exec wc -l '{}' \+
Terminal quick tips:
ls: list directory contents
cd: change directory
Press tab to autocomplete
Remember to put "\" backslash before spaces
I suggest running it one folder down from the main project directory so the count doesn't include code from the frameworks.
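Alternatively, you can keep running it from the project root and exclude a dependencies folder with a -not -path clause (the Pods directory name here is only an example):
find . \( -iname \*.swift \) -not -path "./Pods/*" -exec wc -l '{}' \+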

Open up Terminal.app, cd into your project's root directory, and run this command:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.c -o -iname \*.cc -o -iname \*.h \) -exec wc -l '{}' \+
If there are other file types you also want to include in your count, add more -o -iname \*.ext clauses.
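For example, if the project also contained Metal shaders (purely an illustration), the pattern could be extended like this:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.c -o -iname \*.cc -o -iname \*.h -o -iname \*.metal \) -exec wc -l '{}' \+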

You can use sloccount or cloc to do this, they are both compatible with Objective-C code.
I recommend sloccount: if you also use Jenkins, you can get a nice HTML report that lets you drill down into the different directories and files.
This command line gives a quick overview of your code if you run it from the root directory of your Xcode project:
sloccount --duplicates --wide YOUR-TARGET-NAME
And if you want to generate a report to use in Jenkins, just add the --details flag:
sloccount --duplicates --wide --details YOUR-TARGET-NAME > build/sloccount.sc
and install the Jenkins plugin for sloccount via the Jenkins UI.
You will be able to see examples of such reports in Jenkins in this blog article (disclaimer: I am the author): http://blog.octo.com/en/jenkins-quality-dashboard-ios-development/#step1-1.
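If you prefer cloc (mentioned above), a minimal run from the project root looks like this; the excluded directory name is only an example:
cloc --exclude-dir=Pods .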

One way is to load a copy into Xcode and use "Project Analyzer for Xcode 4". Search for "Xcode" in the Apple Mac App Store. I have not used this program but I happened to see it yesterday when I was searching for Xcode related apps in the Mac App Store.
Hope that helps.

A bash script which finds the line count of files WITHOUT COMMENTS AND EMPTY LINES. The script doesn't count comments that start with //. Copy the script into your project folder and run it with sh scriptname.sh.
For Swift, change \( -iname \*.m -o -iname \*.mm -o -iname \*.h \) to \( -iname \*.swift \)
# $excluded is a regex for paths to exclude from line counting
excluded="some_dir_name\|some_dir_name2\|lib\|thisfile"
countLines(){
# $total is the total lines of code counted
total=0
# -mindepth excludes the current directory (".")
for file in `find . -mindepth 1 \( -iname \*.m -o -iname \*.mm -o -iname \*.h \) | grep -v "$excluded"`; do
# First sed: only count lines of code that are not commented with //
# Second sed: don't count blank lines
# $numLines is the lines of code
numLines=`cat "$file" | sed '/\/\//d' | sed '/^\s*$/d' | wc -l`
# To exclude only blank lines and count comment lines, uncomment this:
#numLines=`cat "$file" | sed '/^\s*$/d' | wc -l`
total=$(($total + $numLines))
echo " " $numLines $file
done
echo " " $total in total
}
countLines
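For Swift, per the note above, the find line inside the loop becomes:
for file in `find . -mindepth 1 \( -iname \*.swift \) | grep -v "$excluded"`; do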

I'm not sure about any tools that plug into Xcode directly (why are you still using Xcode 3 when 4.1 is freely available on Lion?), but I find that the command-line cloc tool works well with Objective-C code.

A really nice Unix command is xargs; see the pipe to xargs below. For example:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.h -o -iname \*.swift \) | xargs wc -l
Oddly, this comes out a tiny bit lower for me than the answer from @Esqarrouth; I still have to figure out why.
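One plausible cause (an assumption, not verified here) is filenames containing spaces: plain xargs splits its input on whitespace, so those files are miscounted. A null-delimited variant avoids that:
find . \( -iname \*.m -o -iname \*.mm -o -iname \*.h -o -iname \*.swift \) -print0 | xargs -0 wc -l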

Related

find: missing argument to `-exec' in SSH command

I'm using SSH inside a CI/CD pipeline (so it's non-interactive) and trying to execute a couple of find commands (among others) to change the ownership of files and directories after running an LFTP mirror, but I keep getting this error (which makes the whole pipeline fail):
find: missing argument to `-exec'
This is the command that uses find:
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST "[other commands...]; find $SOME_PATH/ -type d -exec 'chmod 755 {} \;' && find $SOME_PATH/ -type f -exec 'chmod 644 {} \;' && echo Done"
I've already tried using escaped double quotes like so: -exec \"chmod 755 {} \;\" - but keeps throwing the same error.
What would be the main issue here?
EDIT: Solved. I removed the quotes around the -exec arguments, removed the &&, and appended an extra semicolon ; to each find, and now it works as expected.
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST "[other commands...]; find $SOME_PATH/ -type d -exec chmod 755 {} \;; find $SOME_PATH/ -type f -exec chmod 644 {} \;; echo Done"
So use -exec whatever-command {} \;; [other command, echo, find, ls, whatever...].
Please check this answer for more information: https://unix.stackexchange.com/a/139800/291364
[...] When find sees that spurious exit after the -exec … ; directive, it doesn't know what to do with it; it hazards a (wrong) guess that you meant it to be a path to traverse. You need a command separator: put another ; after \; (with or without a space before). [...]
\; is processed to ; locally before the string is passed to the remote shell. You need to escape the backslash so that the ; remains escaped on the remote end.
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST \
"[other commands...]; find $SOME_PATH/ -type d -exec 'chmod 755 {} \\;'
&& find $SOME_PATH/ -type f -exec 'chmod 644 {} \\;' && echo Done"
A better idea would be to use single quotes for the command argument and pass the value of $SOME_PATH as an argument to a shell.
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST \
sh -c '...;
find "$1" -type d -exec chmod 755 {} \; &&
find "$1" -type f -exec chmod 644 {} \; &&
echo Done' _ "$SOME_PATH"
Note that chmod and its arguments each need to be separate arguments to the find.
In fact, you don't need to run find twice; you can provide two -exec primaries, each paired to a different -type:
ssh -i ~/.ssh/id_rsa $USERNAME@$HOST \
sh -c '...;
find "$1" \( -type d -exec chmod 755 {} \; \) -o
\( -type f -exec chmod 644 {} \; \)
&& echo Done' _ "$SOME_PATH"
Rather than the complex find commands (and the associated quoting/escaping mess), you can use the built-in capabilities of chmod's symbolic modes to set the permissions on files and directories differently. Specifically, the "X" permission means essentially "execute if it makes sense", which mostly means directories rather than files. The main exception is that if a file already has execute set, chmod assumes it's intentional and keeps it. If that's OK, you can use this simpler command:
chmod -R u=rwX,go=rX "$1" # Set non-executable files to 644, executables and directories to 755
If you need to specifically clear execute bits on files, or just want to stick with find, another option takes advantage of the fact that chmod accepts multiple arguments: use find ... -exec ... {} + instead of the \; version. "+" isn't a shell metacharacter, so it doesn't require special treatment:
find $SOME_PATH/ -type d -exec chmod 755 {} + ; find $SOME_PATH/ -type f -exec chmod 644 {} +
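Combining the single-find idea with the + terminator gives something like this untested sketch:
find "$SOME_PATH"/ \( -type d -exec chmod 755 {} + \) -o \( -type f -exec chmod 644 {} + \)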

Find and grep to return files without string

I have the following find, sed, and grep command.
find . -type d \( -name ThirdParty -o -name 3rdParty -o -name 3rd_party \) -prune -o -type f \( -name "*.bat" \) -exec grep -L 'FOO' {} \; -print0 | xargs -0 sed -i -e '1iFOO'
I want to be able to find all .bat files in a directory that do NOT contain the string "FOO", and pipe them to a sed command that adds the string "FOO" to the top of the file. However, when I run the find and grep portion of my command (without sed):
find . -type d \( -name ThirdParty -o -name 3rdParty -o -name 3rd_party \) -prune -o -type f \( -name "*.bat" \) -exec grep -L 'FOO' {} \; -print
it returns ALL bat files, even ones containing the string 'FOO'. This leads me to believe that the grep command is faulty. How can I fix this? Thank you.
Change:
grep -L 'FOO'
to:
awk '/FOO/{exit 1}'
so you only get the file name printed if FOO is not present in the file.
To be clear the whole command line would then be:
find . -type d \( -name ThirdParty -o -name 3rdParty -o -name 3rd_party \) \
-prune -o -type f \( -name "*.bat" \) \
-exec awk '/FOO/{exit 1}' {} \; -print0 |
xargs -0 sed -i -e '1iFOO'
The -exec argument to find specifies a test: if it returns 0, the result is included; if it returns nonzero, the result is excluded. grep -L returns 0 regardless of whether it finds results, so it's equivalent to -exec true \; (or not passing -exec at all).
You can achieve the desired result by passing the output from your find command into grep via xargs: find $find_args | xargs grep -L FOO.
grep is printing the names of files that do not match (with newlines rather than nulls as a terminator). You need to suppress the output of grep. Perhaps:
find . -type d \( -name ThirdParty -o -name 3rdParty -o -name 3rd_party \) \
-prune -o -type f \( -name "*.bat" \) \
-exec sh -c '! grep -q "FOO" "$1"' _ {} \; -print0
But it seems a bit cleaner to write:
find . -type d \( -name ThirdParty -o -name 3rdParty -o -name 3rd_party \) \
-prune -o -type f \( -name "*.bat" \) \
-not -exec grep -q "FOO" {} \; -print0
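If GNU grep is available, the prune-and-test logic can also be expressed with grep alone; this is a sketch using --exclude-dir and null-terminated output:
grep -rLZ --include='*.bat' --exclude-dir=ThirdParty --exclude-dir=3rdParty --exclude-dir=3rd_party 'FOO' . | xargs -0 sed -i -e '1iFOO'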

Combine two commands using GNU parallel for OCR project

I would like to write a script which runs a command to OCR PDFs and deletes the resulting images after the text files have been written.
The two commands I want to combine are the following.
This command creates folders, extracts PGM files from each PDF, and adds them to each folder:
time find . -name \*.pdf | parallel -j 4 --progress 'mkdir -p {.} && gs -dQUIET -dINTERPOLATE -dSAFER -dBATCH -dNOPAUSE -dPDFSETTINGS=/screen -dNumRenderingThreads=4 -sDEVICE=pgmraw -r300 -dTextAlphaBits=4 -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dOverrideICC -o {.}/{.}-%03d.pgm {}'
This command does the OCR and deletes the resulting images (PGM):
time find . -name \*.pgm | parallel -j 4 --progress 'tesseract {} {.} -l deu_frak && rm {.}.pgm'
I would like to combine both commands so that the script deletes the pgm images after each OCR. If I run the above commands, the first will extract images and will eat up my disk space, then the second command would do the OCR and only after that delete the images as a last step.
So,
Create folder
Extract PGM from PDF
OCR from PGM to txt
Delete the PGM images that have just been used (this is the missing step)
Basically, I would like these 4 steps to be done in this order for each PDF separately, not for all PDFs at once. How can I do this?
Edit:
My first attempt to solve my issues was to create the following command:
time find . -name \*.pdf | parallel -j 4 -m --progress --eta 'mkdir -p {.} && gs -dQUIET -dINTERPOLATE -dSAFER -dBATCH -dNOPAUSE -dPDFSETTINGS=/screen -dNumRenderingThreads=4 -sDEVICE=pgmraw -r300 -dTextAlphaBits=4 -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dOverrideICC -o {.}/{.}-%03d.pgm {}' && time find . -name \*.pgm | parallel -j 4 --progress --eta 'tesseract {} {.} -l deu_frak && rm {.}.pgm'
However, tesseract would not find the language package.
Updated Answer
I have not tested this, so please run it on a copy of a small subset of your files first. You can remove the messages starting with DEBUG: once you are happy it looks good:
#!/bin/bash
# Declare a function for "parallel" to call
doit() {
# Get name of PDF with and without extension
withext="$1"
noext="$2"
echo "DEBUG: Processing $withext into $noext"
# Make output directory
mkdir -p "$noext"
# Extract as PGM into subdirectory
gs ... -o "$noext/${noext}-%03d.pgm" "$withext"
# Go to target directory or die with error message
cd "$noext" || { echo ERROR: Failed to cd to $noext ; exit 1; }
# OCR and remove each PGM
n=0
for f in *pgm; do
echo "DEBUG: OCR $f into $n"
tesseract "$f" "$n" -l deu_frak
echo "DEBUG: Remove $f"
rm "$f"
((n=n+1))
done
}
# Ensure the function is exported to subshells
export -f doit
find . -name \*.pdf -print0 | parallel -0 doit {} {.}
You should be able to test the doit() function without parallel by running:
doit someFile.pdf someFile
Original Answer
If you want to do lots of things for each argument in GNU Parallel, the simplest way is to declare a bash function and then call that.
It looks like this:
# Declare a function for "parallel" to call
doit() {
echo "$1" "$2"
# mkdir something
# extract PGM
# do OCR
# delete PGM
}
# Ensure the function is exported to subshells
export -f doit
find some files -print0 | parallel -0 doit {} {.}

How to locate code in PHP inside a directory and edit it

I've been having problems with multiple hidden infected PHP files on my server; they are encrypted, so ClamAV can't see them.
I would like to know how to run an SSH command that can search for all the infected files and edit them.
Up until now I have located them by the file contents like this:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;
Note: $tnawdjmoxr is a piece of the code
How do you locate and remove this code inside all PHP files in the directory /public_html/?
You can add xargs and sed:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \; | xargs -d '\n' -n 100 sed -i 's|\$tnawdjmoxr||g' --
You could also run sed directly instead of grep first, but that would alter the modification time of every file it touches and may make unexpected modifications, such as changing line endings.
-d '\n' makes sure that every argument is read line by line. It's helpful if filenames have spaces in them.
-n 100 limits the number of files that sed would process in one instance.
-- makes sed accept filenames starting with a dash. It would also be advisable for grep to have it: grep -l -e '$tnawdjmoxr' -- {} \;
File searching may be faster with grep -F.
sed -i enables inline editing.
Besides using xargs it would also be possible to use Bash:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \; | while IFS= read -r FILE; do sed -i 's|\$tnawdjmoxr||g' -- "$FILE"; done
while IFS= read -r FILE; do sed -i 's|\$tnawdjmoxr||g' -- "$FILE"; done < <(exec find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;)
readarray -t FILES < <(exec find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;)
sed -i 's|\$tnawdjmoxr||g' -- "${FILES[@]}"
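If GNU grep is available, a null-delimited variant of the same idea (restricted to .php files, as the question asks, which is an addition here) is safer with unusual filenames; a sketch:
find /home/***/public_html/ -name '*.php' -exec grep -lZ -e '$tnawdjmoxr' -- {} + | xargs -0 sed -i 's|\$tnawdjmoxr||g' --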

Ask the compiler to ignore #pragma message

As said in the title, I want the compiler to ignore pragma message for the time being, so it's easier for me to read and fix actual warnings. I've done some searching, but there doesn't seem to be any information on it.
No, it isn't possible, so the best thing to do would be to mass-edit all the #pragmas out:
$ cd MySourceFolder
$ find . -name \*.m -exec perl -p -i -n -e 's/^#pragma/\/\/#pragma/' {} \;
When you want the #pragma's back again:
$ cd MySourceFolder
$ find . -name \*.m -exec perl -p -i -n -e 's/^\/\/#pragma/#pragma/' {} \;
If you do this kind of thing a lot, I would wrap that in a script and put it into your ~/bin directory.
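A minimal sketch of such a wrapper (the script name and the on/off argument convention are made up; it only covers .m files, like the commands above):
#!/bin/bash
# pragma-toggle.sh (hypothetical name): comment or uncomment #pragma lines in .m files
case "$1" in
  off) find . -name \*.m -exec perl -p -i -e 's/^#pragma/\/\/#pragma/' {} \; ;;
  on)  find . -name \*.m -exec perl -p -i -e 's/^\/\/#pragma/#pragma/' {} \; ;;
  *)   echo "usage: $0 on|off" >&2; exit 1 ;;
esac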