I am using NUnit results for a managerial view of all the tests. The NUnit documentation says the results XML file is automatically updated after running the tests. In my case, however, the index file keeps showing the old results, while the actual results file is updated. Any idea how I can update the index file to reflect the latest results?
According to the NUnit-Console 2.4.8 command line docs, the XML output is written by default to TestResult.xml in the working directory. You can use the /xml command line option to specify a different file name. For example:
nunit-console /xml:console-test.xml nunit.tests.dll
My guess is that either you are specifying a filename with the /xml flag but looking in vain for updated results in TestResult.xml, or you are not using the /xml flag and looking in vain for updated results in a file with some other name. Probably the former.
I am new to both Liquibase and Flyway and was trying to do some Hello Worlds. I successfully ran basic SQL (create, insert, etc.) using both Liquibase and Flyway, and I was interested in running them from the command line.
Flyway:
It was kind of easy to start with.
I just had to put the SQL file, named with the correct convention ('V1__xxxx.sql'), in the correct folder ('flyway/sql') and run 'flyway migrate'.
The best part was that it automatically picked up any new SQL file, given a correctly named file.
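For reference, the layout that worked for me looked roughly like this (the file names are only examples):

flyway/sql/V1__create_tables.sql
flyway/sql/V2__insert_data.sql

flyway migrate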
Liquibase:
I had to spend some time to understand and use it.
I need to give the correct changelog file name each time:
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog2.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
Is there a way in Liquibase to automatically pick up the new XML files? Like Flyway, where I could just give a folder name and Liquibase could use its DATABASECHANGELOG table to find the deltas and execute them.
Second question, for Liquibase only:
On Windows, in order to run the command successfully, I had to change the changeLogFile parameter from
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
to
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=./db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
i.e. I changed my present working directory to com/example, modified the changeLogFile parameter to point to a file in the current folder, and executed the command.
Is there a way I can point changeLogFile to a file in another folder (apart from the current folder)?
One thing you can do to make Liquibase a bit easier to use from the command line is to create a file named liquibase.properties and save it in the directory where you are running the command. If I remember correctly, the command-line client will look for a file with that name and use the properties in it rather than requiring all the options on the command line. See http://www.liquibase.org/documentation/liquibase.properties.html and http://www.liquibase.org/documentation/command_line.html#using_a_liquibase.properties_file for more details. The docs there have this:
If you do not want to always specify options on the command line, you
can create a properties file that contains default values. By default,
Liquibase will look for a file called “liquibase.properties” in the
current working directory, but you can specify an alternate location
with the --defaultsFile flag. If you have specified an option in a
properties file and specify the same option on the command line, the
value on the command line will override the properties file value.
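For example, a liquibase.properties matching the command line above might look roughly like this (the changelog path and credentials are just placeholders):

# defaults picked up automatically from the current working directory
driver: com.mysql.jdbc.Driver
classpath: /path/to/classes
changeLogFile: com/example/db.changelog1.xml
url: jdbc:mysql://localhost/example
username: dev

With that file in place, the command line reduces to just liquibase migrate.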
Yes, you can have Liquibase automatically load files from a directory. You have to have a simple changelog.xml that is referenced from the command line or your properties file, but that changelog can then just reference another directory that contains more changelog files. The <includeAll> tag is used for this purpose (see http://www.liquibase.org/documentation/includeall.html for more details).
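A minimal master changelog using <includeAll> might look like this (the directory name is just an example). Liquibase still records every executed changeset in its DATABASECHANGELOG table, so on later runs only the changesets from new files are applied:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <!-- Runs every changelog file found in this directory, in alphabetical order -->
    <includeAll path="com/example/changelogs/"/>

</databaseChangeLog>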
Also, yes, you can put the changelog file wherever you like.
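For example, something like this should work regardless of the current working directory (the absolute path is only illustrative):

liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=C:/projects/example/db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate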
I'm using CMake for the first time and am just not having any luck finding examples that help me figure out what I'm doing wrong. The functionality seems very basic, but nothing I've tried so far has given me any meaningful output or error.
I have a PRELOAD command for a document, and this works fine as long as the document has already been created.
set(variable_name
PRELOAD ${_source_directory}/Documents/output.txt AS output.txt
)
But I want the document generation (which is accomplished via a Python script) to be part of the CMake build process as well. The command I want to run is
python_script.py ${_source_directory}/Documents/input.txt
${_source_directory}/Documents/output.txt
and I want that to run before the PRELOAD statement is executed.
Here's an example of what I've tried
add_custom_command(
OUTPUT ${_source_directory}/Documents/output.txt
COMMAND python_script.py ${_source_directory}/Documents/input.txt
${_source_directory}/Documents/output.txt
)
set(variable_name
PRELOAD ${_source_directory}/Documents/output.txt AS output.txt
)
But that gives me the same error as if the add_custom_command wasn't even there ("No rule to make target ${_source_directory}/Documents/output.txt").
You are doing/understanding it wrong. As was mentioned in the comments, set() has nothing like PRELOAD.
The correct way is to use add_custom_target(), which produces output.txt in the desired directory, and then add_dependencies() on the target you want to build that uses output.txt.
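A minimal sketch of that pattern might look like the following (the python invocation and the name my_main_target are placeholders for whatever your project actually runs and builds):

# Generate output.txt from input.txt with the Python script.
add_custom_command(
    OUTPUT ${_source_directory}/Documents/output.txt
    COMMAND python ${_source_directory}/python_script.py
            ${_source_directory}/Documents/input.txt
            ${_source_directory}/Documents/output.txt
    DEPENDS ${_source_directory}/Documents/input.txt
    COMMENT "Generating output.txt"
)

# A named target that drives the custom command above.
add_custom_target(generate_output
    DEPENDS ${_source_directory}/Documents/output.txt
)

# Ensure the document is generated before the (placeholder) target that needs it.
add_dependencies(my_main_target generate_output)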
First of all I should point out I'm new to Atlassian's Bamboo and continuous integration in general. This is the first project where I've used either.
I've created a raft of unit tests using the tSQLt framework. I've also configured Bamboo to:
Get a fresh copy of the repository from BitBucket
Drop & re-create the build DB
Use Red-Gate SQL Compare to deploy the DB objects from source to the build DB
Run the tSQLt tests
Output the results of the tests in XML format to a file called TestResults.xml
I've checked and can confirm that the TestResults.xml file is created.
In Bamboo I then added a JUnit Parser task to consume the contents of this TestResults.xml file. However, when that task runs it returns this error:
Failed to parse test result file
At first I thought it might mean that Bamboo could not find the file, so I changed the task that creates the results file to output a file called TestResults2.xml instead. When I did that, the JUnit Parser returned this error:
Failing task since test cases were expected but none were found.
So I'm assuming that the first error message means Bamboo is finding the file; it just can't parse it.
I have no idea where to start working out what exactly the problem is. Has anyone got any ideas?
I had a similar problem, but it turned out to be weird behaviour from Bamboo: the file timestamps need to be updated for it to see the JUnit file.
In a Windows environment you just need to add a "Script" task before the "JUnit Parser" task:
powershell (ls *.xml).LastWriteTime = Get-Date
Reference
https://jira.atlassian.com/browse/BAM-12768
I have had several cases of this and was able to fix it by removing single quotes and greater than / less than characters from test names inside the *.rb file.
Example
test "make sure 'go_to_world' is removed from header and length < 23"
change to remove single quotes and < symbol
test "make sure go_to_world is removed from header and length less than 23"
Contractions are very common ("won't", "don't", "shouldn't"), as are possessives ("the vessel's data").
And also < or > characters.
I think there is a bug in the parser that just doesn't escape those characters in a test title appropriately.
The Visual Studio (2010) GUI provides an option for specifying a second command variable file for the target. However, I can't find this option in the command-line implementation, vsdbcmd.exe.
Running a vsdbcmd deploy from dbschema to dbschema with only the source model's command variables given results in the objects that use those variables being treated as having changes, which produces an incorrect (improper) update script.
The command I currently use:
vsdbcmd.exe /a:deploy /dd:- /dsp:sql /model:Source.dbschema /targetmodelfile:Target.dbschema /p:SqlCommandVariablesFile=Database.sqlcmdvars /manifest:Database.deploymanifest /DeploymentScriptFile:UpdateScript.sql /p:TargetDatabase="DatabaseName"
What I'm looking for is something like /p:TargetSqlCommandVariablesFile, if such a thing exists ...
The resulting script is the same as when running a GUI compare without specifying the sqlcmd vars for the target.
I found what looks like full documentation for VSDBCMD.EXE at this link.
I think you may be looking for something like:
/p:SqlCommandVariablesFile=Filepath
In the end I found no information on how to do what I required. I checked the vsdbcmd libraries with ILSpy for hidden parameters and didn't find any.
I reached my goal by parsing the dbschema files for both the target and the current model and substituting the command variable values directly into them, then doing the compare on the modified dbschemas. This approach no longer allows changing the SQLCMD vars in the resulting script (as the values are already baked into the code), but this was deemed an acceptable loss.
Not the most beautiful solution, but so far I have had no issues with it.
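A rough sketch of the idea (not the exact script I used; it assumes the variables appear as $(Name) tokens in the .dbschema file, and the variable names, values, and file names below are placeholders) could look like this in PowerShell:

# Bake SQLCMD variable values directly into a .dbschema copy before comparing.
# Variable names/values and file names are placeholders.
$vars = @{ 'DataPath' = 'C:\Data'; 'TargetDatabase' = 'DatabaseName' }
$text = [System.IO.File]::ReadAllText('Target.dbschema')
foreach ($name in $vars.Keys) {
    $text = $text.Replace('$(' + $name + ')', $vars[$name])
}
[System.IO.File]::WriteAllText('Target.modified.dbschema', $text)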
When developing Pig scripts that use the STORE command, I have to delete the output directory before every run or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately, the Pig script breaks whenever anything produces an error, so I can't use
fs -rmr foo/bar
(i.e. remove recursively), since it breaks if the directory doesn't exist. For a moment I thought I might use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existent directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. However, I'm using Pig 0.8.1 out of necessity, and there is no such parameter there.
At last I found a solution on grokbase. Since finding the solution took too long, I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call this at the start of the script:
rmf foo/bar
No ";" or quotation marks required, since it is a shell command.
I cannot reproduce it now, but at some point in time I got an error message (something about missing files), where I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration; after SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out