CLOC --diff and --exclude-dir don't seem to work together

I am trying to compare two directories with multiple subfolders. This is my CLOC script:
cloc-1.76.exe --diff test_initial test_latest --timeout 60 --exclude-dir=ZC_DATA --out=results.txt
Both folders have a ZC_DATA directory. In test_initial it is empty; in test_latest it contains several C and XML files, so there is plenty of code to count.
What I am experiencing is that, with or without the --exclude-dir=ZC_DATA switch, I get exactly the same results, no difference at all.
I need a way to include or exclude this folder in order to get different results.
Please advise.
Regards,
M.R.

If you do a straight count of one of the input directories, for example,
cloc-1.76.exe --timeout 60 --exclude-dir=ZC_DATA --out=results.txt test_latest
with and without --exclude-dir=ZC_DATA. Do the counts change? Repeat the two invocations with the second directory, test_initial, and report whether there are differences there as well.
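Concretely, the four straight counts would look like this (the output file names here are just for illustration):
cloc-1.76.exe --timeout 60 --out=latest_all.txt test_latest
cloc-1.76.exe --timeout 60 --exclude-dir=ZC_DATA --out=latest_excl.txt test_latest
cloc-1.76.exe --timeout 60 --out=initial_all.txt test_initial
cloc-1.76.exe --timeout 60 --exclude-dir=ZC_DATA --out=initial_excl.txt test_initial
If latest_all.txt and latest_excl.txt match even though test_latest/ZC_DATA contains code, the exclusion itself is failing; if they differ, the problem is specific to --diff.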

I'm trying to execute a cloc command with --diff AND --exclude-list-file, and the files included in .clocignore are not ignored in the result.
Here is the command:
os.system('cloc --diff {} {} --exclude-list-file=.clocignore --by-file --out={} --csv'.format(directory1, directory2, output.loc))
.clocignore file content:
/tmp/workspace/directory2/myfile.cpp
NOTE: this particular file (myfile.cpp) appears in directory2 but it does not exist in directory1.
If the diff of directory1 and directory2 is not fully successful because some files do not exist in directory1, the result is the lines counted in directory2, which is fine!
BUT,
it does not exclude the files contained in .clocignore.
Why is --exclude-list-file=.clocignore not working in this scenario?
Thanks,
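One way to narrow this down, borrowing the diagnostic from the answer above: run cloc on directory2 alone, with and without the exclude list, and see whether the counts change. A sketch using the path from the .clocignore shown above:
cloc --by-file --csv /tmp/workspace/directory2
cloc --exclude-list-file=.clocignore --by-file --csv /tmp/workspace/directory2
If the counts are identical even though myfile.cpp is present, the exclude list is not being applied at all; if they differ, the problem is specific to --diff mode.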

How to rename photo files using awk, such that they are named (and hence ordered) by "date taken"?

I have 3 groups of photos, from 3 different cameras (with time synchronised on all cameras) but with different naming schemes (e.g.: IMG_3142.jpg, DCM_022.jpg). I would like to rename every photo file with the following naming convention:
1_yyyy_mm_dd_hh_mm_ss.jpg for earliest
2_yyyy_mm_dd_hh_mm_ss.jpg for next earliest, and so on,
until we reach around 5000_yyyy_mm_dd_hh_mm_ss.jpg for the last one (i.e. the most recent)
I would like the yyyy_mm_dd_hh_mm_ss field to be replaced by the “date and time taken” value for when the photo was taken, which is saved in the metadata/properties of each file.
I have seen awk used to carry out similar operations, but I'm not familiar enough with it to know how to access the “time taken” metadata, etc.
Also, not that this should make a difference: my computer is a Mac.
You can use jhead for this. The command is:
jhead -n%Y_%m_%d_%H_%M_%S *.jpg
Make a COPY of your files first before running it! You can install jhead with homebrew using:
brew install jhead
Or, if you don't have homebrew, you can download an OS X build from the jhead website.
That will get you the date in the filename as you wish. The sequence number is a little more difficult. Try what I suggest above and, if you are happy with it, we can work on the sequence number. Basically, you would run jhead again to set the file modification times of your files to match the time they were shot; then the files can be made to show up in the listing in date order, and we can put your sequence number on the front.
So, to get the file's date set on the computer to match the time it was taken, do:
jhead -ft *.jpg
Now all the files will be dated on your computer to match the time the photos were taken. Then we need to whizz through them in a loop with our script adding in the sequence number:
#!/bin/bash
seq=1
# List files in order, oldest first
for f in $(ls -rt *.jpg)
do
    # Work out new name, e.g. 1_2015_02_03_11_45_52.jpg
    new="${seq}_$f"
    echo "Rename $f as $new"
    # Remove the "#" from the start of the following command if things look good, so the renaming is actually done
    # mv "$f" "$new"
    ((seq++))
done
You would save that in your HOME directory as renamer, then you would go into Terminal and make the script executable like this:
chmod +x renamer
Then you need to go to where your photos are, say Desktop/Photos
cd "$HOME/Desktop/Photos"
and run the script
$HOME/renamer
That should do it.
By the way, I wonder how wise it is to use a simple sequence number at the start of your filenames, because that will not make them come up in order when you look at them in Finder.
Think of file 20, i.e. 20_2015_02_03_11_45_52.jpg. Now imagine that files starting with 100-199 will be listed BEFORE file 20, and files 1000-1999 will also be listed before file 20 - because their leading 1s come before file 20's leading 2. So, you may want to name your files:
0001_...
0002_...
0003_...
...
0019_...
0020_...
then they will come up in sequential order in Finder. If you want that, use this script instead:
#!/bin/bash
seq=1
for f in $(ls -rt *.jpg)
do
    # Generate new name with zero-padded sequence number, e.g. 0001_...
    new=$(printf "%04d_%s" "$seq" "$f")
    echo "Rename $f as $new"
    # Remove the "#" from the start of the following command if things look good, so the renaming is actually done
    # mv "$f" "$new"
    ((seq++))
done

How to access two different routines in two files in Trace32 CMM scripts

I have two files in two different folder locations in Trace32. I execute cd.do file_name subroutine_name in Trace32. Trace32 takes the location of the first executed script as the folder from which the following commands are executed. How can I execute the routines from two different folders?
There is a pretty good guide here on how to script in Trace32.
http://www2.lauterbach.com/pdf/practice_user.pdf
I do not understand why you need to have them in two different folders; shouldn't it be solved by just having them in the same folder?
Well, maybe you should simply use DO <myscript.cmm> instead of CD.DO <myscript.cmm>.
DO <myscript.cmm> executes the script at the given location but keeps the current working path.
CD.DO <myscript.cmm> changes the working path to the location of the given script and then executes the script.
However, I would recommend writing your scripts in a way that it doesn't matter whether they are called with CD.DO or just DO. You can achieve that with either absolute paths or with paths relative to the script locations. (I prefer the 2nd one.)
So imagine the following file structure:
C:\t32\myscripts\start.cmm
C:\t32\myscripts\folder1\routines.cmm
C:\t32\myscripts\folder2\loadapp.cmm
C:\t32\myscripts\folder2\application.elf
You can handle this structure with absolute paths like this:
start.cmm:
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_A
DO "C:/t32/myscripts/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "C:/t32/myscripts/folder2/application.elf"
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_B
With relative paths you can use the prefix "~~~~" before accessing other files relative to the location of the currently executed PRACTICE script. The "~~~~" is replaced with the path of the currently executed script (just like "~" stands for your home directory). There is also a function OS.PPD() which gives you the directory of the currently executed PRACTICE script.
So the above situation with relative paths looks like this:
start.cmm:
DO "~~~~/folder1/routines.cmm subroutine_A"
DO "~~~~/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "~~~~/application.elf"
DO "~~~~/../folder1/routines.cmm" subroutine_B

Recursive rsync over ssh, include only one file extension

I'm trying to rsync files over ssh from a server to my machine. Files are in various subdirectories, but I only want to keep the ones that match a certain pattern (e.g. blah.txt). I have done extensive googling and searching on Stack Overflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.
I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and run it with `cat myfile.txt`, it no longer works! This makes no sense to me.
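A likely explanation for the caveat, assuming the saved line is being run back through command substitution: the shell word-splits the file's contents but does not re-parse the quotes, so rsync receives patterns such as '*' with the quote characters still attached, and they never match. Running the file as a script re-parses the quoting and should behave like typing the command directly:
sh myfile.txt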
OS X follows the BSD-style rsync:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
This is a useful shorthand for excluding a broad range of files that you often don't want to transfer between systems. It uses a similar algorithm to CVS to determine if a file should be ignored.
The exclude list is initialized to exclude the following items (these initial items are marked as perishable -- see the FILTER RULES section):
RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
.make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
*.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
*.Z *.elc *.ln core .svn/ .git/ .bzr/
Then, files listed in a $HOME/.cvsignore are added to the list and any files listed in the CVSIGNORE environment variable (all cvsignore names are delimited by whitespace).
Finally, any file is ignored if it is in the same directory as a .cvsignore file and matches one of the patterns listed therein. Unlike rsync's filter/exclude files, these patterns are split on whitespace. See the cvs(1) manual for more information.
If you're combining -C with your own --filter rules, you should note that these CVS excludes are appended at the end of your own rules, regardless of where the -C was placed on the command-line. This makes them a lower priority than any rules you specified explicitly. If you want to control where these CVS excludes get inserted into your filter rules, you should omit the -C as a command-line option and use a combination of --filter=:C and --filter=-C (either on your command-line or by putting the ":C" and "-C" rules into a filter file with your other rules). The first option turns on the per-directory scanning for the .cvsignore file. The second option does a one-time import of the CVS excludes mentioned above.
-f, --filter=RULE
This option allows you to add rules to selectively exclude certain files from the list of files to be transferred. This is most useful in combination with a recursive transfer.
You may use as many --filter options on the command line as you like to build up the list of files to exclude. If the filter contains whitespace, be sure to quote it so that the shell gives the rule to rsync as a single argument. The text below also mentions that you can use an underscore to replace the space that separates a rule from its arg.
See the FILTER RULES section for detailed information on this option.
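For comparison, here is a sketch of the same transfer written with --filter rules instead of --include/--exclude (user@server.com and the paths are the placeholders from the working command above):
rsync -vre 'ssh' --prune-empty-dirs --filter='+ */' --filter='+ *blah*.txt' --filter='- *' user@server.com:/path/to/server/files /path/to/local/files
The quoting matters here: each rule contains a space, so it must reach rsync as a single argument, exactly as the man page excerpt warns.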

How to Input Redirect Two Files to Standard Input?

Is it possible to redirect two or more files to standard input in one command? For example
$ myProgram < file1 < file2
I tried that command; however, it seemed like the OS was only taking the first file and ignoring the other...
If not, how can I achieve that?
NOTE: concatenating the two files will not help in my case.
When you write two input redirections like that, bash applies them in order and only the last one stays attached to standard input, so myProgram ends up reading just one file. What bash does offer for this is called Process Substitution.
The output of each substitution is exposed to the program as a file descriptor under /dev/fd/<n>.
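For example, a minimal sketch using the names from the question (myProgram, file1, file2):
myProgram <(cat file1) <(cat file2)
myProgram receives two pathnames such as /dev/fd/63 and /dev/fd/62 as arguments and can open and read each stream separately, without the files being concatenated.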

Finding files in subdirectories created after a certain date

I'm in the process of writing a bash script (just learning it) which needs to find files in subdirectories created after a certain date. I have a folder /images/ with jpegs in various subfolders - I want to find all jpegs uploaded to that directory (or any subdirectories) after a certain date. I know about the -mtime flag, but my "last import" date is stored in %Y-%m-%d format and it'd be nice to use that if possible?
Also, each file/pathname will then be used to generate a MySQL SELECT query. I know find generally outputs the filenames found, line-by-line. But if find isn't actually the command that I should be using, it'd be nice to have a similar output format I could use to generate the SELECT query (WHERE image.file_name IN (...))
Try the script below:
DATE=<<date>>                       # your cutoff date in yyyy-mm-dd form
SEARCH_PATH=/images/
DATE=`echo $DATE|sed 's/-//g'`      # strip the dashes: yyyymmdd
DATE=$DATE"0000"                    # append hhmm, giving the yyyymmddhhmm format touch -t expects
FILE=~/timecheck_${RANDOM}_$(date +"%Y%m%d%H%M")
touch -t $DATE $FILE                # create a reference file stamped with the cutoff date
# List everything newer than the reference file, quoted and comma-separated for an IN (...) clause
find $SEARCH_PATH -newer $FILE 2>/dev/null|awk 'BEGIN{f=0}{if(f==1)printf("\"%s\", ",l);l=$0;f=1}END{printf("\"%s\"",l)}'
rm -f $FILE
You can convert your date into the "last X days" format that find -mtime expects.
find is the correct command for this task. Send its output somewhere, then parse the file into the query.
Beware of SQL injection attacks if the files were uploaded by users. Beware of special-character quoting even if they weren't.
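If your find supports it, the -newermt test (GNU find, and the BSD find on recent OS X) takes a date string directly, so you can use the %Y-%m-%d value without the temporary reference file. A minimal sketch, assuming the /images/ path from the question and an example cutoff date:
find /images/ -iname '*.jpg' -newermt '2015-02-03'
This prints matching paths line by line, which you can feed into the same awk one-liner above to build the IN (...) list.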