I am a newbie to software testing. I want to know: is there any open-source tool for automated test-case generation in black-box testing?
I found the tool KLEE ("Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs"), but to use it I need to do some code instrumentation. Is there any way I can generate test cases automatically without code instrumentation, since I don't have access to the source code?
KLEE does work on programs without modification. You can have it generate symbolic command-line inputs as well as symbolic input files. Here are some of the options that can be used for this purpose:
-sym-arg <N> - Replace by a symbolic argument with length N.
-sym-args <MIN> <MAX> <N> - Replace by at least MIN and at most MAX symbolic arguments, each with maximum length N.
-sym-files <NUM> <N> - Make stdin and up to NUM symbolic files, each with maximum size N.
-sym-stdout - Make stdout symbolic.
Examples can be found in the tutorials on KLEE's website.
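As a concrete starting point, here is a sketch of an invocation (assuming the program has already been compiled to LLVM bitcode as program.bc, a hypothetical name; the option values are illustrative):

    klee --libc=uclibc --posix-runtime program.bc --sym-args 0 2 4 --sym-files 1 8 --sym-stdout

This asks KLEE to explore the program with up to two symbolic arguments of at most four characters each, one symbolic file of at most eight bytes plus symbolic stdin, and symbolic stdout.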
I'm using the Karate testing framework to validate some APIs, and I would like to know if there is any way to generate a test coverage report by taking a predefined list of expected scenarios and validating it against the scenarios that actually exist within the Karate feature files.
Imagine that you agree with your client to run 50 scenarios, but in reality you have only developed 20 scenarios within your feature files (more than one file, stored in different folders).
I wonder if there is any (easy) way to:
list ALL the scenarios developed in ALL the available feature files
match them against an external (CSV, Excel, JSON...) list of scenarios (the ones agreed with the client), so that a coverage percentage can be calculated
Here's a bare-bones implementation of a coverage report based on comparing karate.log to an OpenAPI/Swagger JSON spec.
https://github.com/ericdriggs/karate-test-utils#karate-coverage-report
Endpoint coverage is a useful metric that can be auto-generated from an auto-generated spec. It also lets you exclude paths that aren't in scope for coverage, e.g. actuator and ping endpoints.
I will publish a jar soonish.
Open an issue if you'd like any enhancements.
It's MIT licensed, so feel free to repurpose it.
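For the first half of the original question (listing ALL the scenarios across ALL the feature files), a quick-and-dirty sketch, assuming the features live under src/test/resources and that scenario lines start with "Scenario:" or "Scenario Outline:":

    grep -rn --include="*.feature" -E "Scenario( Outline)?:" src/test/resources

Dumping that output to a file gives you one side of the comparison; diffing it against the client's CSV/Excel export would then yield the coverage percentage.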
I'm a noob in the fuzzing area and have looked at the AFL implementation.
AFL seems to replace the stdin file descriptor with a descriptor for the input file, so whenever the target program reads from standard input, it actually takes input from the input file rather than stdin.
My question stems from this.
Let's say we made a library and we'd like to unit-test it with a fuzzer to find implementation bugs. In this case, we don't take any standard input; we take only function parameters from the developers who use our library. Therefore, AFL doesn't seem to work in this case.
libFuzzer seems like the proper solution here, since the generated input can be fed into the specific function we're interested in.
Is this understanding right, or can AFL also work like libFuzzer for unit tests?
Thank you
AFL supports feeding inputs through files, not only stdin. To test a library that receives input through arguments, you can write a simple executable that opens an input file, reads its contents, calls the needed library functions with argument values taken from the file, and closes the file.
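A minimal sketch of such a harness in C, assuming a hypothetical library entry point my_lib_parse(const uint8_t *data, size_t len):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical library entry point under test. */
    extern int my_lib_parse(const uint8_t *data, size_t len);

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        /* afl-fuzz substitutes the mutated input file path for @@,
           so it arrives here as argv[1]. */
        FILE *f = fopen(argv[1], "rb");
        if (!f)
            return 1;

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        if (size > 0) {
            uint8_t *buf = malloc((size_t)size);
            if (buf && fread(buf, 1, (size_t)size, f) == (size_t)size)
                my_lib_parse(buf, (size_t)size); /* a crash here is what AFL detects */
            free(buf);
        }

        fclose(f);
        return 0;
    }

Build the harness with afl-gcc (or afl-clang-fast) and put @@ in the afl-fuzz command line, e.g. afl-fuzz -i testcases -o findings ./harness @@, so each mutated file path is passed as the argument.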
Context:
I'm trying to automate some of the more mundane tasks in embedded development with Keil. The end result I'm aiming for is that clicking build in a Keil project will run a pre-build step that runs all the code through Uncrustify (a source code beautifier) to ensure it conforms to the company style guide, and a post-build step that runs the code through PC-lint (a static code analyser) to highlight any potentially unsafe code it might find.
I've written a PC utility that searches the .uvproj file for the #define macros, the include paths and the file paths, all of which are needed for both tools, and then modifies the pre- and post-build user commands to call my batch files, which manage both steps. The Uncrustify part is working fine and the lint part is producing some sensible messages, but the signal-to-noise ratio isn't that great.
My problem:
Lint keeps producing messages that seem to relate to macros that the Keil compiler is aware of but that lint isn't, and I'm trying to find a way to plug that gap. I found a table of predefined macros documented on the Keil website, which seems like a good start, but rather than manually copying them into a static .lnt file, I'd like to find a way of grabbing the up-to-date values at the time the project gets built. This way, the __ARMCC_VERSION macro, for instance, would be updated whenever the developer updates his/her Keil compiler, rather than being stuck at whatever value it had when I copied it.
I'd love it if someone could answer my question directly, but I'd be equally pleased if someone has a viable suggestion for a more straightforward alternative approach I could try instead. Many thanks!
I am assuming you're using the Keil ARM Compiler.
From the Compiler User Guide:
To list macros that are defined on the command line, predefined by the compiler, and found in header and source files, use --list_macros with a non-empty source file.
To list only macros predefined by the compiler and specified on the command line, use --list_macros with an empty source file.
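For example (a sketch; empty.c here is just an empty source file created for this purpose, and the output file name is arbitrary):

    armcc --list_macros empty.c > keil_predefined_macros.txt

Your PC utility could run this at build time and translate the output into lint definitions.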
EDIT:
It looks like your SDK also adds a few macros.
From the µVision User's Guide:
The following control strings are added, depending on the use of MDK:
__UVISION_VERSION:
Major and minor version of µVision. For example: -D__UVISION_VERSION="520".
_RTE_:
Set when RTE is in use. For example: -D_RTE_.
__RTX:
Set when the RTX kernel has been selected in Options for Target - Target - Operating System. Not set when using RTE. For example: -D__RTX.
__MICROLIB:
Set when Use MicroLIB has been enabled in Options for Target - Target. For example: -D__MICROLIB.
__EVAL:
µVision runs in evaluation mode. License MDK-Lite. For example: -D__EVAL.
device header name:
Device header name.
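To close the loop with lint, one approach (a sketch, relying on PC-lint's -d option for defining macros; the file name and values here are made up for illustration) is to have your utility regenerate an option file from the --list_macros output on every build:

    // keil_macros.lnt -- regenerated by the pre-build step
    -d__ARMCC_VERSION=5060061
    -d__UVISION_VERSION="520"
    -d_RTE_

and then pass that .lnt file on the lint command line alongside your static options.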
I'm trying to integrate custom dynamic analysis tools into CDash, such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml file to submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in that XML file. I have found no answer on Google, and even the XML schema for the file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different error types from CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not so certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling around with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
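A minimal sketch of the relevant part of a CTest script (the paths and build type are illustrative):

    set(CTEST_MEMORYCHECK_COMMAND "/usr/bin/valgrind")
    set(CTEST_MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full")
    ctest_start(Experimental)
    ctest_configure()
    ctest_build()
    ctest_memcheck()
    ctest_submit()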
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named cmCTestMemCheckResultLongStrings. Search through that file for references to that variable to see what the possible values are and how they are used to generate <Defect/> XML elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far: for a tool that runs on the tests defined in the CMake script, dynamic analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter; see the example below.
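For example (the exact path of the generated file will vary with your setup):

    ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/DynamicAnalysis.xml")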
I also found out that you can add custom "build names" alongside Continuous, Nightly and the others, and that you can set the builds from certain machines to be automatically filed under these.
The custom labels under Dynamic Analysis did come from somewhere in CDash; I can't remember where anymore.
I'm confused by the concept of scripts.
Can I say that a makefile is a kind of script?
Are there scripts written in C or Java?
I'd refer to Wikipedia for a detailed explanation.
"Scripts" usually refer to a piece of code or set of instructions that run in the context of another program. They usually aren't a standalone executable piece of software.
Makefiles are scripts that are run by make, MSBuild, etc.
C needs to be compiled into an executable or a library, so programs written in (standard) C would typically not be considered scripts. (There are exceptions, but this isn't the normal way of working with C.)
Java (and especially .NET) is a bit different. A typical Java program is compiled and run as an executable, but this is a grey area: it is possible to compile a "script" written in Java at runtime and execute it.
In a very general sense, the term "script" refers to code that is deployed and expected to run from its lexical representation. As soon as you compile the code and distribute the resulting output instead of the code, it ceases to be a script.
Minification and obfuscation of a script are not considered compilation, and the result is still considered a script.
It depends on your definition of script. For me, a script could be any small program you write for a small purpose. They are usually written in interpreted languages. However, there's nothing stopping you from writing a small program in a compiled language.
For me, a script has to consist of a single file, and that file must be able to perform the task for which it was written with no intermediate steps.
So these would be OK:
bash backup_my_home_dir.sh
perl munge_some_text.pl
python download_url.py
But this wouldn't qualify, even if the file is small:
javac HandyUtility.java
java HandyUtility
Yes, it's possible to do scripting in Java. I've seen it many times :)
(That was sarcasm about bad spaghetti code.)
The term 'scripting' can cover a fairly broad spectrum of activities: examples include programming in imperative interpreted languages such as VBScript or Python, writing shell scripts in csh or bash, or expressing a task in declarative languages such as XSL, SQL or Erlang.
Some scripting languages fall into a category referred to as domain-specific languages (DSLs). Good examples of DSLs are makefiles, many other types of configuration files, SQL, XSL and so on.
What you're asking is fairly subjective; one man's script is another man's application. If your interpretation of scripting means that using a scripting language should not force a user to follow the traditional compile -> link -> run cycle, then you could form the opinion that you can't write scripts in C or Java.
A script is basically a non-compilable text file in almost any language, or shell, with an interpreter that is used to automate some process, or list of commands, that you perform repeatedly. Scripts are often used for backing up files, compiling routines, svn commits, shell initialization, etc., ad infinitum. There are a million and one things you can do with a script that an executable (complete with installation, etc.) would simply be overkill for.
I write scripts in F#. A recent one is a small data loader to take in some set of data, do a bit of processing to it, and dump it in a DB. ~40 lines. No separate compilation step needed; I can just make F# Interactive run it directly.
The benefit is that I get a fully powered language with a great IDE and all the safety that static checking provides, while type inference keeps it from getting verbose like, say, Java or C#.
So that's one language that offers a reasonably decent type system, compilation and checking, isn't interpreted, and still works fine for scripting.