Resource file cannot contain any tests or tasks - Robot Framework - testing

I have a main robot script that references keywords from a resource file. The resource file contains the tests to be run, and I simply call that functionality from my main robot script. I am receiving the following error:
Resource file cannot contain any tests or tasks
What might be the cause for this, and how do I go about resolving this issue?

From the Robot Framework Guide on Resource Files:
The higher-level structure of resource files is the same as that of test case files otherwise, but, of course, they cannot contain Test Case tables.
In summary: test case files need to contain test cases and may contain keywords. Resource files can also contain keywords (they do not have to), but they cannot contain test cases.

I had the same problem. The issue is that we cannot use all the OO concepts, since we are not writing Python directly; a resource file can only hold variables and keywords.
As a solution, write the test case you want to import as a keyword in the resource file, and then call that keyword from a test case in the file you execute, as in the sketch below.
If you leave the Test Cases section empty in the file that you are going to execute, Robot Framework will think you have no test cases to run.
So, think of keywords as public methods and the issue goes away :)
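A minimal sketch of that layout (the file and keyword names are placeholders I chose, not anything from the original question):

# common.robot -- the resource file: keywords only, no *** Test Cases *** section
*** Keywords ***
Login Should Succeed
    Log    pretend this opens the application and logs in

# login_tests.robot -- imports the resource and holds the actual test
*** Settings ***
Resource    common.robot

*** Test Cases ***
Valid Login
    Login Should Succeed

Running robot login_tests.robot executes Valid Login; common.robot is never executed on its own, so the "cannot contain any tests or tasks" error goes away.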

Related

Testing with Cucumber and Selenium: How can I break the sequence of feature files and run them in a custom order?

I have tried many options, but none of them works the way I need. I created the feature files one by one as I needed them, so they are not in the order I want to run them.
I want to run the feature files in an order of my choosing, like:
file1.feature
file2.feature
file3.feature
file4.feature
So I want to run them in this sequence: file3.feature -> file4.feature -> file1.feature.
I have tried tags and the features option in the JUnit test runner, but it keeps its own sequence: it runs only 3 and 4 and never gets to 1.
So, can you tell me how to run the feature files in an arbitrary order?
Cucumber picks up the feature files in alphabetical order from the folder given in the features parameter of CucumberOptions. So one option would be to rename your feature files alphabetically in the order you want.
After the feature files in the initial folder are read, then the sub folders in the location are picked up alphabetically and the feature files inside them are read. So you can place the file you want to be used later into a sub-folder.
Having said all this, it is not a good idea to have dependencies between tests that require a particular sequence to be maintained.
I ran into the same roadblock. Have you solved this problem?
I did some testing; using --tags @XXX to control the sequence is useless, since tags only select scenarios within the feature files.
So far I have to do it like this: cucumber features/c.feature features/a.feature features/b.feature. I think this is a little better than renaming the feature files.
But you may say: "What if there are a lot of feature files..." In my project (Ruby on Rails), I moved that statement into ./config/cucumber.yml.
In cucumber.yml I define a profile, test_dev, that lists the feature files in the order I want. After that, I just run cucumber -p test_dev and the feature files execute in that order.
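A sketch of such a profile (the file names are only examples):

# config/cucumber.yml
# each profile is simply a string of command-line arguments
test_dev: features/c.feature features/a.feature features/b.feature

Running cucumber -p test_dev then passes those paths to Cucumber in exactly the listed order.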
If you have a better way, please share it with us.

How to set up tests for mocha-phantomjs

Every tutorial I have seen for mocha-phantomjs shows having a test harness html file, and a separate javascript file that gets included.
Is this really the correct way to do it for each test? I want to create a separate test for each page on my website, but it seems like overkill to duplicate an HTML file for every test case.
Granted, this is my first time trying to use mocha-phantomjs, but it still seems really odd to create an HTML file and a JS file for every test case.
What is the standard for doing this sort of thing? I have been googling for about an hour now and can't find any good examples.
I know it seems weird, but... yes.
You need fixture (or harness) files in the "/test" directory. By default, Mocha looks in this directory for filenames with a .html extension, starting with test.html.
Make sure to include the script (and CSS) tags for 1) Mocha, 2) Chai (or whatever other assertion library you want), and 3) your specific test suites.
Personally, I've found it helps to use it with a modular bootloader like RequireJS. That way all your fixture files can point to a single configuration file: less maintenance.
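For example, a single harness for one page's suite might look roughly like this (the paths, the use of Chai, and the file names are my assumptions, not something from the original posts):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="../node_modules/mocha/mocha.css">
  </head>
  <body>
    <div id="mocha"></div>
    <script src="../node_modules/mocha/mocha.js"></script>
    <script src="../node_modules/chai/chai.js"></script>
    <script>mocha.setup('bdd'); window.expect = chai.expect;</script>
    <!-- the page-specific suite -->
    <script src="home-page.test.js"></script>
    <script>
      // mocha-phantomjs hooks into this call so it can collect the results
      if (window.mochaPhantomJS) { mochaPhantomJS.run(); } else { mocha.run(); }
    </script>
  </body>
</html>

With a module loader like RequireJS, as suggested above, the per-suite script tags can collapse into a single entry point, which is what keeps the maintenance down.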

Rails Testing - Breaking up test code

I am writing test cases for a Ruby on Rails app. The test code for one model/controller has grown too large. Is there any way to break it up into several files that all test the same model/controller?
Yes, you can split tests into different files. I believe Test::Unit requires the files to be named *_test.rb.
So say you had a test file for a User model. You could have your tests broken up like:
user_validations_test.rb
user_login_test.rb
...
I know you're using Test::Unit, but the same goes for RSpec: you can break up your tests into *_spec.rb files.
If you are using RSpec for testing, you can simply add more than one file describing the same class. It might be reasonable to create a matching subdirectory inside spec/ and place all of the spec files there.
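A sketch of the Test::Unit split (the file names, the email validation, and the finder are purely illustrative assumptions about the model):

# test/unit/user_validations_test.rb
require 'test_helper'

class UserValidationsTest < ActiveSupport::TestCase
  test "is invalid without an email" do
    assert !User.new(email: nil).valid?
  end
end

# test/unit/user_login_test.rb
require 'test_helper'

class UserLoginTest < ActiveSupport::TestCase
  test "finds a user by email" do
    user = User.create!(email: "person@example.com")
    assert_equal user, User.find_by_email("person@example.com")
  end
end

The only requirement is that each file defines its own uniquely named test class; the Rake test task picks up every file matching *_test.rb, however many there are per model.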

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools to CDash. Such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is, how can I extract errors from the output of the custom tool and include that information into the DynamicAnalysis.xml to submit?
The extreme solution I see is that I'd need to write a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer on Google, and even the XML schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different defect types reported by CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not so certain that I should use dynamic analysis at all. Maybe a custom build type (alongside Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
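A minimal ctest -S script along those lines (the directories, generator, and valgrind path are placeholders I chose, not something from the thread):

# memcheck.cmake -- run with:  ctest -S memcheck.cmake
set(CTEST_SITE              "my-machine")
set(CTEST_BUILD_NAME        "valgrind-run")
set(CTEST_SOURCE_DIRECTORY  "/path/to/source")
set(CTEST_BINARY_DIRECTORY  "/path/to/build")
set(CTEST_CMAKE_GENERATOR   "Unix Makefiles")
set(CTEST_MEMORYCHECK_COMMAND          "/usr/bin/valgrind")
set(CTEST_MEMORYCHECK_COMMAND_OPTIONS  "--leak-check=full")

ctest_start(Experimental)
ctest_configure()
ctest_build()
ctest_memcheck()    # produces Testing/<tag>/DynamicAnalysis.xml from the valgrind output
ctest_submit()      # uploads the results, including dynamic analysis, to CDash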
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far: for a tool that runs on the tests defined in the CTest script, Dynamic Analysis is the way to go.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files from the ctest_submit command by using the FILES parameter.
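Something along these lines (the path is just an example):

ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/DynamicAnalysis.xml")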
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others, and that builds from certain machines can be set to land under these automatically.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.

Haskell IO Testing

I've been trying to figure out if there is already an accepted method for testing file io operations in Haskell, but I have yet to find any information that is useful for what I am trying to do.
I'm writing a small library that performs various file system operations (recursively traverse a directory and return a list of all files; sync multiple directories so that each one contains the same files, using inodes as the equality test and hard links...), and I want to make sure they actually work. The only way I can think of to test them is to create a temporary directory with a known structure and compare the results of running the functions on it against known results. The thing is, I would like as much test coverage as possible while keeping things mostly automated: I don't want to have to create the directory structure by hand.
I have searched Google and Hackage, but the packages I have seen on Hackage do not appear to use any testing -- maybe I just picked the wrong ones -- and anything I find on Google does not deal with IO testing.
Any help would be appreciated
Thanks, James
Maybe you can find a way to make this work for you.
EDIT:
the packages that I have seen on hackage do not use any testing
I have found a unit testing framework for Haskell on Hackage. Using this framework, maybe you could write assertions to verify that the files you require are present in the directories where you want them and that they correspond to their intended purpose.
HUnit is the usual library for IO-based tests. I don't know of a set of properties/combinators for file actions -- that would be useful.
There is no reason why your test code cannot create a temporary directory, and check its contents after running your impure code.
If you want mainly automated testing of monadic code, you might want to look into Monadic QuickCheck. You can write down properties that you think should be true, such as
If you create a file with read permission, it will be possible to open the file for reading.
If you remove a file, it won't open.
Whatever else you think of...
QuickCheck will then generate random tests.
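A small sketch of what one of those properties might look like, using Test.QuickCheck.Monadic together with the temporary package for a throwaway directory (the package choice and all names here are my assumptions, not something prescribed in the thread):

import System.Directory (doesFileExist, removeFile)
import System.FilePath ((</>))
import System.IO.Temp (withSystemTempDirectory)
import Test.QuickCheck (Property, quickCheck)
import Test.QuickCheck.Monadic (assert, monadicIO, run)

-- After writing a file it should exist; after removing it, it should not.
prop_createThenRemove :: [Int] -> Property
prop_createThenRemove xs = monadicIO $ do
  (existed, gone) <- run $
    withSystemTempDirectory "io-test" $ \dir -> do
      let path = dir </> "sample.txt"
      writeFile path (show xs)   -- contents generated by QuickCheck
      before <- doesFileExist path
      removeFile path
      after  <- doesFileExist path
      return (before, not after)
  assert (existed && gone)

main :: IO ()
main = quickCheck prop_createThenRemove

The same shape works with HUnit: replace the property with a test case that builds the temporary tree, runs your traversal or sync function, and compares the result against the expected list.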