I am trying to fuzz a binary that takes input from the user (stdin). When I run afl-fuzz followed by my binary, something like
afl-fuzz a.out
it asks for the required parameters, namely the input and output directories:
afl-fuzz -i input_dir -o output_dir a.out
It gives the error "Looks like there are no valid test cases in the input directory!".
If I provide a sample test case and try again, it starts fuzzing but never asks for input from the user.
I am a total noob in this field, so any kind of help would be appreciated.
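For context, a minimal sketch of a working setup (the directory and seed names here are arbitrary): afl-fuzz requires at least one seed file in the input directory, and when the target command line contains no @@ placeholder it feeds each mutated test case to the binary's stdin itself, so the program is never supposed to prompt interactively.
mkdir -p input_dir output_dir
echo 'hello' > input_dir/seed.txt              # at least one non-empty seed is required
afl-fuzz -i input_dir -o output_dir ./a.out    # no @@ placeholder: test cases go to stdin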
I have a bunch of R scripts that follow one another, and I wanted to connect them using Snakemake. But I'm running into a problem.
One of my R scripts shows two images and asks for the user's input on how many clusters are present. The R function for this is readline.
The question about the number of clusters is asked, but the next line of code runs immediately afterwards, without an opportunity to input a number. The rest of the program then crashes, since trying to calculate an empty number of clusters doesn't really work. Is there a way around this, for example by getting the values via a function/rule from Snakemake, or is there another way to work around this issue?
Based on my testing with snakemake v5.8.2 on macOS, this is not a snakemake issue. The example setup below works without any problem.
File test.R
cat("What's your name? ")
x <- readLines(file("stdin"),1)
print(x)
File Snakefile
rule all:
    input:
        "a.txt",
        "b.txt"

rule test_rule:
    output:
        "{sample}.txt"
    shell:
        "Rscript test.R; touch {output}"
Executing these with the command snakemake -p behaves as expected: it asks for user input and then touches the output file.
I used the function readLines in the R script rather than readline, but this example shows that the error you are facing is likely not a snakemake issue.
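As a quick sanity check outside of snakemake, you can pipe a value into the script non-interactively; with the test.R above, the prompt is printed and the piped-in string is echoed back:
echo "Alice" | Rscript test.R     # prints the prompt, then: [1] "Alice"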
I was using the valgrind tool callgrind, together with kcachegrind, for profiling a large project, and was wondering if there is a way to make callgrind report the stats for all functions (not just the most expensive ones).
To be specific: when I visualized the call graph in kcachegrind, it included only those functions that are quite expensive, but I was wondering if there is a way to include all the functions from the project in the call graph. The command used for generating the profiling info is given below:
valgrind --dsymutil=yes --tool=callgrind $EXE
I am not sure if I have to give any options to valgrind, or maybe compile the application at a different optimization level. This might be something trivial, but I couldn't find a solution. Any pointers regarding this are highly appreciated.
Thanks!
It occurred to me yesterday: in kcachegrind's call graph there is a right-click menu in which you can set the threshold above which a node will be visualized.
There is also an option "no minimum"; however, it cannot be chosen. I think that's because, if every function, no matter how trivial, took up a node, the graph might become too large to handle.
I just found that the script gprof2dot can handle this.
The script can convert the output of callgrind to the dot format, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE eliminates nodes below this threshold [default: 0.5]. To visualize all nodes in the graph, set the parameter to -n0.
-e PERCENTAGE, --edge-thres=PERCENTAGE eliminates edges below this threshold [default: 0.1]. To visualize all edges in the graph, set the parameter to -e0.
To generate the complete call graph you would use both options (-n0 and -e0).
I've tried this; however, as the generated graph was too large, the dot software warned me that "graph is too large for cairo-renderer bitmaps. Scaling by 0.328976 to fit." You can set the output format to eps, which can handle this, or adjust the thresholds to suit your objective.
The command I am using is
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes $EXE
and as far as I have seen, it includes all the functions in the call graph.
Hope it helps.
I'm going to complete rengar's answer with information that will allow you to generate the complete call graph, as well as give an example of the full process.
You can use the gprof2dot script to show all functions in a call graph. The script can convert the output of callgrind to the dot format, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE eliminates nodes below this threshold [default: 0.5]. To visualize all nodes in the graph you should set this parameter to -n0.
-e PERCENTAGE, --edge-thres=PERCENTAGE eliminates edges below this threshold [default: 0.1]. To visualize all edges in the graph you should set this parameter to -e0.
In order to generate the complete call graph you would use both of the options: -n0 and -e0.
Example
Let's say that you have a callgrind output file called callgrind.out.1992. To generate a complete call graph you would use:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind
To generate a PNG output image of the graph, you could run the following commands:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind > out.dot
dot -Tpng out.dot -o out.png
Now you have an out.png image with the full graph.
Note the usage of the -f parameter to specify the profile format (callgrind in our case).
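If the complete graph is too large to render as a PNG (the cairo scaling warning mentioned in the previous answer), a vector output format is a practical alternative; this is just a variation of the last step of the same pipeline:
dot -Tsvg out.dot -o out.svg    # SVG is not rasterized, so the full graph can be panned and zoomed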
I have a TSV file which is separated into class, id and text, e.g.
positive 2342 This is very good.
negative 4343 I hate it.
and I'm trying to feed the text part to Mahout's nbayes classifier to label it as either positive or negative.
My first attempt was using the mahout seqdirectory command, with every line as a separate file in its class directory. This works well with a small amount of data, but eventually fails at around 30 gigabytes with an OutOfMemoryException. Increasing the heap size fails with "GC overhead limit exceeded", probably because of the large number of separate files.
My second attempt was loading the data into a Hive table and converting it to a sequence file, as described here [0]. This seems to work fine at first, but after creating the vector file and splitting up the data set, the trainnb step fails with an ArrayIndexOutOfBoundsException.
[0] http://files.meetup.com/6195792/Working%20With%20Mahout.pdf
Right now I'm out of ideas about what to look for. Any ideas on how I can convert the TSV file or Hive table to a sequence file like the one generated by the seqdirectory command on a directory?
Going to answer myself, in case someone else needs a solution to the same or a similar problem:
I found this code snippet on GitHub and modified it to my needs. Additionally, I had to trim the value string to get proper results.
This may be a simpler implementation for those searching for this answer in the future. It can be done completely from the command line (I tested it in EMR):
hadoop jar \
/home/hadoop/contrib/streaming/hadoop-streaming.jar \
-D mapred.reduce.tasks=0 \
-inputformat TextInputFormat \
-input {input_directory}/* \
-mapper '/bin/cat' \
-outputformat org.apache.hadoop.mapred.SequenceFileOutputFormat \
-output {output_directory}
/home/hadoop/contrib/streaming/hadoop-streaming.jar is the location of hadoop-streaming.jar on Amazon EMR (AMI 3.4.0). It may be in a different location depending on your configuration.
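To sanity-check the result, you can decode the generated sequence file back to plain text with hadoop fs -text, which understands sequence files. The part-00000 file name below assumes the default output naming of a map-only streaming job:
hadoop fs -text {output_directory}/part-00000 | head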
I have a model of a heating process in Ansys Multiphysics V11.
After running the simulation, I have a script to plot a temperature profile:
!---------------- POST PROCESSING -----------------------
/post1                   ! enter the general postprocessor
!---define temperature profile
path,s_temp1,2,,100      ! define a path with 2 points and 100 divisions
ppath,1,,dop/2,0,0       ! create path point 1
ppath,2,,dop/2,1.5,0     ! create path point 2
PDEF,surf_t1,TEMP, ,noav ! map the temperature onto the path, without averaging
plpath,surf_t1           ! plot the mapped quantity along the path
What I now need is to save the resulting path data to a text file. I have already looked online for a solution and found the following code to do it, which I appended after the lines above:
/OUTPUT,filename,extension
PRPATH,surf_t1
/OUTPUT
Ansys generates the file filename.extension, but it is empty. I tried placing the /OUTPUT command in a few locations in the script, but without any success.
I suspect I need to define something else, but I have no idea where to look, as the Ansys documentation online is terribly chaotic, and all the internet pages I opened before writing this question are no better.
A final note: Ansys V11 is an old version of the software, but I don't want to upgrade and refit the old model to the new software.
For the output of the simulation (which includes all calculation steps, sub-step descriptions and node-by-node results), the output file must be declared at the beginning of the code, not in the postprocessing phase.
Declaring
/OUTPUT,filename,extension
in the preamble of the main script ensures that this output is stored in the right location, with the desired extension. At the end of the script, you must then declare
/OUTPUT
to reset the output file location for ANSYS.
The output of the PATH call made in the postprocessing script is, however, not printed to that file.
For the path data, it is instead convenient to use
*DIM,Vector,ARRAY,100,2          ! two-column array; the dimensions here are illustrative
*CFOPEN,file,ext                 ! open file.ext for writing
*VWRITE,Vector(1,1),Vector(1,2)  ! write both columns row by row
(2F12.6)                         ! Fortran-style format: two floats, width 12, 6 decimals
*CFCLOSE                         ! close the file
where Vector is a two-column array created by *DIM that stores the data you want to output to the file.
As *VWRITE is a special command, run it from a file, i.e. a macro such as macro_output.mac, rather than typing it interactively.
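For completeness, a hedged sketch of one way to fill such an array from the path defined earlier: PAGET copies the current path data into an array parameter. The column layout is typically the path geometry (XG, YG, ZG, S) followed by the mapped items, but the array name and column indices below are assumptions you should verify (e.g. with *STAT) before relying on them.
PAGET,ptab,TABLE                 ! copy the path data table into array parameter ptab
*CFOPEN,surf_t1,txt              ! open surf_t1.txt for writing
*VWRITE,ptab(1,4),ptab(1,5)      ! assumed: column 4 = path length S, column 5 = surf_t1
(2F12.6)
*CFCLOSE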
I'm working with Torch7 and the Lua programming language. I need a command that redirects the output of my program to a file, instead of printing it to my shell.
For example, in Linux, when you type:
$ ls > dir.txt
The system will print the output of the command "ls" to the file dir.txt, instead of printing it to the default output console.
I need a similar command for Lua. Does anyone know it?
[EDIT] A user suggested to me that this operation is called piping. So the question should be: "How do I do piping in Lua?"
[EDIT2] I would like to use a # command like this to do it:
$ torch 'my_program' # printed_output.txt
Have a look here -> http://www.lua.org/pil/21.1.html
io.write seems to be what you are looking for.
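That chapter describes Lua's simple I/O model. A minimal sketch of the idea (the file name here is arbitrary): io.output switches the default output file, so subsequent io.write calls land in the file instead of the console.
io.output("printed_output.txt")   -- redirect default output to a file
io.write("this line ends up in the file\n")
io.output():close()               -- flush and close the file
io.output(io.stdout)              -- restore console output
Note that print is hard-wired to stdout, so use io.write for anything you want redirected this way.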
Lua has no default function to create a file from the console output.
If your application logs its output, which is what you're probably trying to do, this will only be possible by modifying the Lua C source code.
If your internal system has access to the output of the console, you could do something similar to this (and set it on a timer, so it runs every 25 ms or so):
dumpoutput = function()
    local file = io.open([path to file dump here], "w+")   -- io.open (not io.write) opens the file
    for i, line in ipairs([console output function]) do
        file:write("\n" .. line)
    end
    file:close()                                           -- flush the dump to disk
end
Note that the console output function has to store the output of the console in a table.
To clear the console at the end, just do os.execute("cls") (on Windows; use os.execute("clear") on Linux or macOS).