In a Makefile, reading user input terminated by <ENTER> can be implemented with the shell builtin read. In the shell (bash), reading a single character can be done with read -n 1. However, I was surprised that read -n 1 didn't seem to work with GNU Make 4.2.1. Am I missing some escaping here?
Bash:
$> echo x | read -n 1 mychar; echo 'you typed '$mychar
you typed x
Makefile:
all:
	read -n 1 mychar; echo 'you typed '$$mychar
Make:
$> echo x | make
read -n 1 mychar; echo 'you typed '$mychar
/bin/sh: 1: read: Illegal option -n
Does make provide its own version of read?
System: GNU Make 4.2.1, Ubuntu 20.04.3 LTS
PS: I am aware that user interaction is considered bad style, and it clearly should not be used in configuration workflows. However, it comes in quite handy for targets associated with pruning, resetting or deleting data (e.g., make clear). And asking for y/N confirmation gives you a chance to tell your users what it will take them to rebuild what they are going to remove.
PPS: I know that a regular read gives you almost the same functionality, except that you need to hit <ENTER>. This question is about getting a better understanding of the relationship between make and its shell environment.
NB: This problem is different from Makefile - Why is the read command not reading the user input?. They just didn't properly escape the variable.
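For reference, the "Illegal option -n" message comes from dash: make runs recipes with /bin/sh, which on Ubuntu is dash, and dash's read builtin does not support -n. A minimal sketch of a workaround, assuming bash is installed at /bin/bash, is to set the recipe shell explicitly in the Makefile:

# force bash for recipe lines; /bin/sh (dash on Ubuntu) has no read -n
SHELL := /bin/bash

all:
	read -n 1 mychar; echo 'you typed '$$mychar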
I am testing some PHP apps for injectable commands. I have to convert my commands to a URI/CGI encoded format. I am wondering if there is a better way to do it.
When I want to include a ping (to test if the app is, in fact, executing from an injection) I am converting it as follows.
hURL -X --esc ";ping localhost -c 1" | sed -e 's/\\x/\%/g'
Here is the output.
%3b%20%70%69%6e%67%20%6c%6f%63%61%6c%68%6f%73%74%20%2d%63%20%31
Works perfectly. The code is injected and the logs show it being handled as expected.
QUESTION: Is there a better way to convert to the above? I think I am overcomplicating things.
You could possibly use an out-of-the-box library for doing the escaping; it may be a little easier on the eye ...
$ echo ';ping localhost -c 1' | perl -ne 'use URI::Escape; print(uri_escape($_) . "\n");'
%3Bping%20localhost%20-c%201%0A
Obviously this output does not escape legitimate URL characters, so I'm not sure this entirely answers your question ...
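If the goal is to percent-encode every byte (which is what the hURL output above does), a plain shell pipeline can do it as well; a small sketch, assuming xxd and sed are available:

# hex-dump every byte, then prefix each hex pair with %
echo -n ';ping localhost -c 1' | xxd -p | tr -d '\n' | sed 's/../%&/g'
# %3b%20%70%69%6e%67%20%6c%6f%63%61%6c%68%6f%73%74%20%2d%63%20%31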
I've just switched to zsh and am now adapting an alias that was printing some text (in color) along with a command.
I have been trying to use the $fg array variable, but there is a side effect: the whole command is printed before being executed.
The same occurs if I just test an echo with a color code in the terminal:
echo $fg_bold[blue] "test"
]2;echo "test" test #the test is in the right color
Why does the command print itself before doing what it's supposed to do? (To be precise, this doesn't happen when just printing without any variable in the command.)
Do I have to set a specific zsh option, or use echo with a special parameter, to get rid of that?
Execute the command first (keep its output somewhere), and then issue echo. The easiest way I can think of doing that would be:
echo $fg[red] `ls`
Edit: OK, so your trouble is some trash printed before the actual output of echo. You have some funny configuration that is causing it.
What to do (other than inspecting your configuration):
start a shell with zsh -f (it will skip any configuration), and then re-try the echo command: autoload colors; colors; echo $fg_bold[red] foo (this should show you that the problem is in your configuration).
Most likely your configuration defines a precmd function that gets executed before every command (which is failing in some way). Try which precmd. If that is not defined, try echo $precmd_functions (precmd_functions is an array of functions that get executed before every command). Knowing which is the code being executed would help you search for it in your configuration (which I assume you just took from someone else).
If I had to guess, I'd say you are using oh-my-zsh without knowing exactly what you turned on (which is an endless source of troubles like this).
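A compact version of those checks, plus the related preexec hook (terminal-title updates such as the ]2; sequence in your output are usually written from preexec), might look like this; treat it as a sketch, since the hook names actually used by your configuration are unknown:

zsh -f                        # clean shell, no user configuration loaded
autoload -U colors && colors
echo $fg_bold[red] foo        # if this prints cleanly, the culprit is in your config

# back in your normal shell, inspect the hooks
which precmd preexec          # show their definitions, if any
echo $precmd_functions        # hook lists used by frameworks such as oh-my-zsh
echo $preexec_functions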
I can't replicate your issue, which I think indicates that it's either an option (that I've set) or a zsh version issue:
$ echo $fg_bold[red] test
test
Because I can't replicate it, I'm sure there's an option to stop it happening for you. I don't know what that option is (I'm using a heavily modified oh-my-zsh, and still haven't finished learning what all the zsh options do).
My suggestions:
You could try using print:
$ print $fg_bold[red] test
test
The print builtin has many more options than echo (see man zshbuiltins).
You should also:
Check what version zsh you're using.
Check what options (setopt) are enabled.
Check your ~/.zshrc (and other loaded files) to see what, if any, options and functions are being run.
This question may suggest checking what TERM you're using, but reading your question it sounds like you're only seeing this behaviour (echoing of the command after entry) when you're using aliases...?
I want (GNU) make to rebuild when variables change. How can I achieve this?
For example,
$ make project
[...]
$ make project
make: `project' is up to date.
...like it should, but then I'd prefer
$ make project IMPORTANTVARIABLE=foobar
make: `project' is up to date.
to rebuild some or all of project.
Make wasn't designed to track variable contents, but Reinier's approach shows us a workaround. Unfortunately, using the variable's value as a file name is both insecure and error-prone. Fortunately, Unix tools can help us encode the value properly. So:
IMPORTANTVARIABLE = a trouble

# GUARD is a function which calculates the md5 sum of the value of
# the variable whose name it is given. Note that both cut and md5sum
# are members of the coreutils package, so they should be available
# on nearly all systems.
GUARD = $(1)_GUARD_$(shell echo $($(1)) | md5sum | cut -d ' ' -f 1)

foo: bar $(call GUARD,IMPORTANTVARIABLE)
	@echo "Rebuilding foo with $(IMPORTANTVARIABLE)"
	@touch $@

$(call GUARD,IMPORTANTVARIABLE):
	rm -rf IMPORTANTVARIABLE*
	touch $@
Here your target effectively depends on a special file named $(NAME)_GUARD_$(VALUEMD5), which is safe to refer to and has an (almost) 1-to-1 correspondence with the variable's value. Note that call and shell are GNU Make extensions.
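A hypothetical session showing the effect (assuming bar already exists):

make foo IMPORTANTVARIABLE="a trouble"   # guard file is created, foo is (re)built
make foo IMPORTANTVARIABLE="a trouble"   # same value, same guard file: foo is up to date
make foo IMPORTANTVARIABLE=changed       # new md5, new guard file: foo is rebuilt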
You could use empty files to record the last value of your variable by using something like this:
someTarget: IMPORTANTVARIABLE.$(IMPORTANTVARIABLE)
	@echo Remaking $@ because IMPORTANTVARIABLE has changed
	touch $@

IMPORTANTVARIABLE.$(IMPORTANTVARIABLE):
	@rm -f IMPORTANTVARIABLE.*
	touch $@
After your make run, there will be an empty file in your directory whose name starts with IMPORTANTVARIABLE. and has the value of your variable appended. This basically contains the information about what the last value of the variable IMPORTANTVARIABLE was.
You can add more variables with this approach and make it more sophisticated using pattern rules -- but this example gives you the gist of it.
You probably want to use ifdef or ifeq depending on what the final goal is. See the manual here for examples.
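A minimal sketch of what that can look like (the variable, flag, and prerequisite names are made up for illustration); note that a conditional by itself only changes what make reads, it does not make an already-built target out of date:

ifdef IMPORTANTVARIABLE
CFLAGS += -DIMPORTANT=$(IMPORTANTVARIABLE)
endif

ifeq ($(IMPORTANTVARIABLE),foobar)
project: extra-prereq
endif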
I might be late with an answer, but here is another way of doing such a dependency with Make conditional syntax (works on GNU Make 4.1, GNU bash, Bash on Ubuntu on Windows version 4.3.48(1)-release (x86_64-pc-linux-gnu)):
1 ifneq ($(shell cat config.sig 2>/dev/null),prefix $(CONFIG))
2 .PHONY: config.sig
3 config.sig:
4 @(echo 'prefix $(CONFIG)' >config.sig &)
5 endif
In the above sample we track the $(CONFIG) variable, writing its value to a signature file by means of the like-named target, which is generated only when the value recorded in the signature file differs from that of the $(CONFIG) variable. Please note the prefix word on lines 1 and 4: it is needed to distinguish the case when the signature file doesn't exist yet.
Of course, consumer targets specify config.sig as a prerequisite.
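For instance, a hypothetical consumer target could look like this; it is remade whenever config.sig is rewritten by the rule above:

project: config.sig
	@echo "CONFIG changed to '$(CONFIG)', rebuilding $@"
	touch $@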
So I found the question about how to view the change history of a file, but the change history of this particular file is huge and I'm really only interested in the changes of a particular method. So would it be possible to see the change history for just that particular method?
I know this would require git to analyze the code and that the analysis would be different for different languages, but method/function declarations look very similar in most languages, so I thought maybe someone has implemented this feature.
The language I'm currently working with is Objective-C and the SCM I'm currently using is git, but I would be interested to know if this feature exists for any SCM/language.
Recent versions of git log learned a special form of the -L parameter:
-L :<funcname>:<file>
Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments. You can specify this option more than once.
...
If “:<funcname>” is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. “:<funcname>” searches from the end of the previous -L range, if any, otherwise from the start of file. “^:<funcname>” searches from the start of file.
In other words: if you ask Git to git log -L :myfunction:path/to/myfile.c, it will now happily print the change history of that function.
git gui blame is hard to make use of in scripts, and whilst git log -G and git log -S (the pickaxe) can each show you when the method definition appeared or disappeared, I haven't found any way to make them list all changes made to the body of your method.
However, you can use gitattributes and the textconv property to piece together a solution that does just that. Although these features were originally intended to help you work with binary files, they work just as well here.
The key is to have Git remove from the file all lines except the ones you're interested in before doing any diff operations. Then git log, git diff, etc. will see only the area you're interested in.
Here's the outline of what I do in another language; you can tweak it for your own needs.
Write a short shell script (or other program) that takes one argument -- the name of a source file -- and outputs only the interesting part of that file (or nothing if none of it is interesting). For example, you might use sed as follows:
#!/bin/sh
sed -n -e '/^int my_func(/,/^}/ p' "$1"
Define a Git textconv filter for your new script. (See the gitattributes man page for more details.) The name of the filter and the location of the command can be anything you like.
$ git config diff.my_filter.textconv /path/to/my_script
Tell Git to use that filter before calculating diffs for the file in question.
$ echo "my_file diff=my_filter" >> .gitattributes
Now, if you use -G. (note the .) to list all the commits that produce visible changes when your filter is applied, you will have exactly those commits that you're interested in. Any other options that use Git's diff routines, such as --patch, will also get this restricted view.
$ git log -G. --patch my_file
Voilà!
One useful improvement you might want to make is to have your filter script take a method name as its first argument (and the file as its second). This lets you specify a new method of interest just by calling git config, rather than having to edit your script. For example, you might say:
$ git config diff.my_filter.textconv "/path/to/my_command other_func"
Of course, the filter script can do whatever you like, take more arguments, or whatever: there's a lot of flexibility beyond what I've shown here.
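A sketch of such a parameterized filter, reusing the sed pattern from above (the int return type and the closing brace in column 0 are assumptions about your code style):

#!/bin/sh
# $1 = method name given in the textconv setting, $2 = file path supplied by git
func="$1"
file="$2"
sed -n -e "/^int ${func}(/,/^}/ p" "$file"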
The closest thing you can do is to determine the position of your function in the file (e.g. say your function i_am_buggy is at lines 241-263 of foo/bar.c), then run something to the effect of:
git log -p -L 200,300:foo/bar.c
This will open less (or an equivalent pager). Now you can type in /i_am_buggy (or your pager equivalent) and start stepping through the changes.
This might even work, depending on your code style:
git log -p -L '/int i_am_buggy\(/,+30:foo/bar.c'
This limits the search from the first hit of that regex (ideally your function declaration) to thirty lines after it. The end argument can also be a regexp, although detecting the end of a function with a regexp is an iffier proposition.
git log has an option -G that can be used to find all differences.
-G Look for differences whose added or removed line matches the
given <regex>.
Just give it a proper regex of the function name you care about. For example,
$ git log --oneline -G'^int commit_tree'
40d52ff make commit_tree a library function
81b50f3 Move 'builtin-*' into a 'builtin/' subdirectory
7b9c0a6 git-commit-tree: make it usable from other builtins
The correct way is to use git log -L :function:path/to/file, as explained in eckes's answer.
But in addition, if your function is very long, you may want to see only the changes that each commit introduced, not the full listing of the function, unmodified lines included, repeated for every commit that perhaps touched only one of those lines. In other words, what a normal diff shows.
Normally git log can show differences with -p, but this does not work with -L.
So you have to grep the output of git log -L to keep only the lines involved, plus the commit/file headers that put them in context. The trick here is to match only terminal-colored lines, by adding the --color switch and using a regex for the color escape codes. Finally:
git log -L :function:path/to/file --color | grep --color=never -E -e "^(^[\[[0-9;]*[a-zA-Z])+" -3
Note that ^[ should be actual, literal ^[. You can type them by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [. Reference here.
Also, the last switch, -3, prints 3 lines of output context before and after each matched line. You may want to adjust it to your needs.
Show function history with git log -L :<funcname>:<file>, as shown in eckes's answer and the git docs.
If it shows nothing, refer to Defining a custom hunk-header to add something like *.java diff=java to the .gitattributes file to support your language.
Show function history between commits with git log commit1..commit2 -L :functionName:filePath
Show overloaded function history (there may be many functions with the same name, but with different parameters) with git log -L :sum\(double:filepath
git blame shows you who last changed each line of the file; you can specify the lines to examine so as to avoid getting the history of lines outside your function.
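For example, reusing the hypothetical line range from earlier:

git blame -L 241,263 foo/bar.c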
I'm trying to use Doxygen with Xcode. I followed the Apple tutorial. After several mistakes, I built the project and generated the docs. I discovered that if you save the doxygen.config from Doxygen and you use a space " " in the directory name, you will have problems, among other things.
But there is one last problem:
./search/search.png
./tab_b.gif
./tab_l.gif
./tab_r.gif
./tabs.css
/Developer/usr/bin/docsetutil index com.mycompany.DoxygenExample.docset
2010-03-31 12:30:53.847 docsetutil[46338:807] Error converting XML to CoreData: Error Domain=NSXMLParserErrorDomain Code=76 UserInfo=0x1247d0 "Line 8: Opening and ending tag mismatch: Subnodes line 0 and Node
"
Failed to create docset indexer object
make: *** [docset] Error 1
load documentation set with path "/Users/WB/Library/Developer/Shared/Documentation/DocSets/"
I don't know what the problem is. Any idea?
I'm using Core Data - sqlite.
The parser is telling you the XML is not well formed, but that error usually shows up because nothing has been generated BEFORE running docsetutil.
The first thing should be to go over the many lines of console output and look for warnings; the problem is probably there. Also find the docset you generated and right-click > Show Contents. If you don't see a lot of HTML files with the documentation, it's the same thing: you failed to generate the documentation, so docsetutil has nothing to do. And by the way, it's docsetutil that is using Core Data; it doesn't matter whether you use it in your project or not.
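A quick way to check that from Terminal (the $TEMP_DIR path is borrowed from the build-script variables used later in this thread; adjust it to wherever your script writes its output):

# the Doxygen HTML output should contain index.html, Nodes.xml, Tokens.xml, etc.
ls "$TEMP_DIR/DoxygenDocs.docset/html" | head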
I don't get why Apple doesn't provide a doxygen-like tool more tightly integrated. Or a better code formatter than Uncrustify. Just take the damn tools and improve them a little bit. Argh!
There is a known bug in the generation of Nodes.xml by Doxygen. It is referenced here: https://bugzilla.gnome.org/show_bug.cgi?id=671591 and should be corrected in the next Doxygen version (post v1.8.0):
At the end of the Nodes.xml there is an additional, mismatched closing tag (hence the "Opening and ending tag mismatch" error above).
The -silence option is a workaround to suppress the error, but this parameter does not allow the docset generation to work properly.
In the Apple build-phase script, find these two lines:

$DOXYGEN_PATH $TEMP_DIR/doxygen.config
make -C $TEMP_DIR/DoxygenDocs.docset/html install

Insert the following code between them (shown below with those two lines kept for context). Note: this script works in $TEMP_DIR, not in SOURCE_ROOT as the Apple script does.
$DOXYGEN_PATH $TEMP_DIR/doxygen.config

LINE=`xmllint --c14n $TEMP_DIR/DoxygenDocs.docset/html/Nodes.xml 2>&1 | awk 'NR == 1 {print $1}' | cut -d':' -f 2`
echo $LINE
if [ $LINE -gt 0 ]
then
    echo "XML Cleaning"
    sed -i.bak $LINE'd' $TEMP_DIR/DoxygenDocs.docset/html/Nodes.xml
fi

# make will invoke docsetutil. Take a look at the Makefile to see how this is done.
make -C $TEMP_DIR/DoxygenDocs.docset/html install
NB: awk and sed may certainly be combined in one line.
So, long story short: the script creates a Doxyfile on the fly, and it does not recursively scan all subdirectories.
Take a look at this post:
http://www.duckrowing.com/2010/03/18/documenting-objective-c-with-doxygen-part-ii/
There's a script included in the second post, based on Apple's script, that shouldn't have this issue.
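If you'd rather keep the generated configuration and just patch it, the Doxygen settings involved are INPUT and RECURSIVE; a sketch of the lines you might change in the on-the-fly doxygen.config (the source path is illustrative; Doxygen expands $(VAR) from the environment):

# scan the whole source tree, not just the top-level directory
INPUT     = "$(SOURCE_ROOT)"
RECURSIVE = YES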
I use an extended version of the above script, but based on the same principles. Although everything works fine on another project, this time my script fails.
The generation of the docset works fine but the make command produces the following error.
x ./search/search_r.png
2010-07-26 17:36:01.815 docsetutil[8441:903]
Error converting XML to CoreData:
Error Domain=NSXMLParserErrorDomain
Code=76
UserInfo=0x1006105e0
"Line 8: Opening and ending tag mismatch: Subnodes line 0 and Node"
Failed to create docset indexer object
make: *** [docset] Error 1
The make command I use is: make --silent -C "$DOCSET_OUTPUT/html" install.
I added line breaks to the error message for readability.