First of all, I should point out that I'm new to Atlassian's Bamboo and to continuous integration in general. This is the first project where I've used either.
I've created a raft of unit tests using the tSQLt framework. I've also configured Bamboo to:
Get a fresh copy of the repository from BitBucket
Drop & re-create the build DB
Use Red-Gate SQL Compare to deploy the DB objects from source to the build DB
Run the tSQLt tests
Output the results of the tests in XML format to a file called TestResults.xml
I've checked and can confirm that the TestResults.xml file is created.
In Bamboo I then added a JUnit Parser task to consume the contents of this TestResults.xml file. However, when that task runs, it returns this error:
Failed to parse test result file
At first I thought that might mean Bamboo could not find the file, so I changed the task that creates the results file to output a file called TestResults2.xml instead. When I did that, the JUnit Parser returned this error:
Failing task since test cases were expected but none were found.
So I'm assuming the first error message means Bamboo is finding the file; it just can't parse it.
I have no idea where to start working out what exactly the problem is. Has anyone got any ideas?
I had a similar problem, but it turned out to be odd behaviour in Bamboo: the result file's timestamp has to be fresh for Bamboo to have visibility of the JUnit file.
In a Windows environment you just need to add a Script task before the JUnit task that runs:
powershell (ls *.xml).LastWriteTime = Get-Date
Reference
https://jira.atlassian.com/browse/BAM-12768
I have had several cases of this and was able to fix it by removing single quotes and greater-than/less-than characters from the test names inside the *.rb file.
Example
test "make sure 'go_to_world' is removed from header and length < 23"
Change it to remove the single quotes and the < symbol:
test "make sure go_to_world is removed from header and length less than 23"
Contractions are very common ("won't", "don't", "shouldn't"), as are possessives ("the vessel's data") and < or > characters.
I think there is a bug in the parser that just doesn't escape those characters in a test title appropriately.
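To illustrate (my own reconstruction, not taken from Bamboo's output), a raw < inside a name attribute makes the resulting JUnit XML ill-formed, which is exactly the kind of thing a parser chokes on:
<!-- ill-formed: a raw < is not allowed inside an attribute value -->
<testcase name="make sure go_to_world is removed from header and length < 23"/>
<!-- well-formed: the < must be written as &lt; -->
<testcase name="make sure go_to_world is removed from header and length &lt; 23"/>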
I have a .jmx script which saves its results to a CSV file.
I need to see the 'failureMessage' field in the CSV, especially when the 'success' column says 'false', as in the example below. But in my CSV the failureMessage column always comes out blank.
Example -
timeStamp|time|label|responseCode|threadName|dataType|success|failureMessage
02/06/03 08:21:42|1187|Home|200|Thread Group-1|text|true|
02/06/03 08:21:42|47|Login|200|Thread Group-1|text|false|Test Failed: expected to contain: password etc.
I checked the setting below in the jmeter.properties file; it is already set to true, but JMeter still doesn't save the message to failureMessage in the CSV.
# assertion_results_failure_message only affects CSV output
jmeter.save.saveservice.assertion_results_failure_message=true
I cannot reproduce your issue using:
Latest JMeter 5.2.1
With the default Results File Configuration
Running JMeter in command-line non-GUI mode
If you cannot see custom assertion failure messages, your setup violates at least one of the above 3 points.
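For reference, a typical non-GUI run looks like this (file names are examples):
jmeter -n -t TestPlan.jmx -l results.csv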
Try adding assertions to your requests, and you will find the failure message in your results whenever an assertion fails.
I'm asking for help on how to run a feature file scenario just by name. I've been trying for a while and haven't managed it. I know it can be done by tags or by line number, but I wonder whether we can run a Cucumber test by name, more or less with this nomenclature.
Given a file named "features/test.feature" with:
Feature:
  Scenario: My first scenario
    Given this step is blah blah blah

  Scenario: My second scenario
    Given this step too blah blah
I want to run a scenario by name from the console or with Gradle, maybe something like this:
cucumber features/test.feature::My second scenario
Or maybe with Gradle:
./gradlew cucumber::My second scenario
You didn't describe how you start Cucumber, so I can't help you with that specifically.
When used from the CLI, Cucumber accepts --name REGEXP. This will only run scenarios whose names match the REGEXP.
The @CucumberOptions annotation accepts name="REGEXP".
Cucumber before v6.0.0 looks at the environment: for Maven you can add -Dcucumber.options="--name REGEXP". I don't know the equivalent for Gradle. Take note that the escape characters may be shell/build-system dependent.
Cucumber v6.0.0 and above also looks at the environment: for Maven you can add -Dcucumber.filter.name="REGEXP".
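For example, with Cucumber 6+ under Maven, running only your second scenario would be something like:
mvn test -Dcucumber.filter.name="^My second scenario$"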
See:
https://cucumber.io/docs/reference/jvm#running
https://github.com/cucumber/cucumber-jvm/tree/main/core
From Cucumber 6.x, you can run a scenario with the CLI commands below (these examples use cucumber-js):
# Specify a scenario by its line number
$ cucumber-js features/my_feature.feature:3

# Specify a scenario by its name matching a regular expression
$ cucumber-js --name "topic 1"
But these are time-consuming and repetitive. You can save a lot of time by using a dedicated VS Code extension called Cucumber Quick. This extension allows you to run a scenario or feature just by right-clicking on it and can save you all the hassle.
You would call the scenario by its line number.
So, assuming that the second scenario starts on line 5 in your feature file, you could run:
cucumber features/test.feature:5
In our SSDT project we have a huge script containing a lot of INSERT statements for importing data from an old system. Using SQLCMD variables, I'd like to be able to conditionally include the file in the post-deployment script.
We're currently using the :r syntax which includes the script inline:
IF '$(ImportData)' = 'true'
BEGIN
:r .\Import\OldSystem.sql
END
This is a problem because the script is included inline regardless of whether $(ImportData) is true or false, and the file is so big that it slows the build down by about 15 minutes.
Is there another way to conditionally include this script file so it doesn't slow down the build?
Rather than muddy up my prior answer with another one: there is a special case with a VERY simple option.
Create separate SQLCMD input files for each execution possibility.
The key here is to name the execution input files using the value of your control variable.
So, for example, your publish script defines variable 'Config' which may have one of these values: 'Dev','QA', or 'Prod'.
Create 3 post deployment scripts named 'DevPostDeploy.sql', 'QAPostDeploy.sql' and 'ProdPostDeploy.sql'.
Code your actual post-deploy file like this:
:r .\$(Config)PostDeploy.sql
This is very much like the build-event mechanism where you overwrite scripts with appropriate ones, except you don't need a build event. But you are dependent upon naming your scripts very specifically.
The scripts referenced using :r are always included. You have a couple of options, but I would first verify that taking the script out actually improves the performance as much as you need.
The simplest approach is to keep the script out of the build process entirely and make your deployment a two-step process (deploy the dacpac, then run the script). The positive is that you can do things outside of the SSDT process; the negative is that you lose things like the automatic disabling of constraints on tables that change in the deployment.
The second way is to leave the script out of the deployment when you build, but create an AfterBuild MSBuild task that adds it to the dacpac as a post-deploy script. The dacpac is a zip file, so you can use the .NET packaging API to add a part called postdeploy.sql, which will then be included in the deployment process; a rough sketch follows.
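Here is a rough, untested PowerShell sketch of that packaging step; the part name postdeploy.sql, the text/plain content type, and all paths are assumptions to verify against your own dacpac:
# Inject a post-deploy script into the dacpac (an OPC/zip package).
Add-Type -AssemblyName WindowsBase
$dacpac  = Resolve-Path ".\bin\Debug\Database.dacpac"   # assumed build output path
$script  = Resolve-Path ".\Import\OldSystem.sql"        # the big import script
$package = [System.IO.Packaging.Package]::Open($dacpac, "Open", "ReadWrite")
try {
    $uri = [System.IO.Packaging.PackUriHelper]::CreatePartUri([Uri]::new("/postdeploy.sql", "Relative"))
    if ($package.PartExists($uri)) { $package.DeletePart($uri) }  # replace any existing part
    $part   = $package.CreatePart($uri, "text/plain")             # content type is an assumption
    $stream = $part.GetStream()
    $bytes  = [System.IO.File]::ReadAllBytes($script)
    $stream.Write($bytes, 0, $bytes.Length)
    $stream.Close()
}
finally {
    $package.Close()
}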
Both of these ways mean you lose verification, so you might want to keep the script in a separate SSDT project that has a "same database" reference to your main project; that will slow the build down when the script changes but should be quick the rest of the time.
Here is the way I had to do it.
1) Create a dummy post-deploy script.
2) Create build configurations in your project for each deploy scenario.
3) Use a pre-build event to determine which post-deploy configuration to use.
You can either create separate scripts for each configuration or dynamically build the post-deploy script in your pre-build event. Either way, you base what you do on the value of $(Configuration), which always exists in a build event.
If you use separate static scripts, your build event only needs to copy the appropriate static file, overwriting the dummy post-deploy script with whichever one is right for that deploy scenario; see the example command below.
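For the static-script variant, the pre-build event can be a single copy along these lines (script names are illustrative):
copy /Y "$(ProjectDir)PostDeploy.$(ConfigurationName).sql" "$(ProjectDir)Script.PostDeployment.sql"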
In my case I had to use dynamic generation, because the decision about which scripts to include required knowing the current state of the database being deployed to. So I used the configuration variable to tell me which environment was being deployed to, and then used a SQLCMD script with :OUT set to my post-deploy script location. My pre-build script would then write the post-deploy script dynamically.
Either way, once the build completed and the normal deploy process started, the post-deploy script contained exactly the :r commands I wanted.
Here's an example of the SQLCMD script I invoke in pre-build.
:OUT .\Script.DynamicPostDeployment.sql
PRINT ' /*';
PRINT ' DO NOT MANUALLY MODIFY THIS SCRIPT. ';
PRINT ' ';
PRINT ' It is overwritten during build. ';
PRINT ' Content IS based on the Configuration variable (Debug, Dev, Sit, UAT, Release...) ';
PRINT ' ';
PRINT ' Modify Script.PostDeployment.sql to effect changes in executable content. ';
PRINT ' */';
PRINT 'PRINT ''PostDeployment script starting at''+CAST(GETDATE() AS nvarchar)+'' with Configuration = $(Configuration)'';';
PRINT 'GO';
IF '$(Configuration)' IN ('Debug','Dev','Sit')
BEGIN
IF (SELECT IsNeeded FROM rESxStage.StageRebuildNeeded)=1
BEGIN
-- These get a GO statement after every file because most are really HUGE
PRINT 'PRINT ''ETL data was needed and started at''+CAST(GETDATE() AS nvarchar);';
PRINT ' ';
PRINT 'EXEC iESxETL.DeleteAllSchemaData ''pExternalETL'';';
PRINT 'GO';
PRINT ':r .\PopulateExternalData.sql ';
....
I ended up using a mixture of our build tool (Jenkins) and SSDT to accomplish this. This is what I did:
Added a build step to each environment-specific Jenkins job that writes to a text file. Depending on the build parameters the user chooses, I either write a SQLCMD command that includes the import file or leave the file blank (see the example after these steps).
Include the new text file in the Post Deployment script via :r.
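For example, when the user asks for the import, the build step writes a single include line like this into the text file, and otherwise leaves it empty:
:r .\Import\OldSystem.sql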
That's it! I also use this same approach to choose which pre- and post-deploy scripts to include in the project based on the application version, except that I grab the version number from the code and write it to the file using a pre-build event in VS instead of in the build tool. (I also added the text file name to .gitignore so it doesn't get committed.)
I've got a Visual Studio 'web performance test' to run from the command line. The plan is to create a scheduled task to run it. How do I trigger an email on failure? Either I wire that logic up in the test itself, or it's external and depends on the return code; but I don't think there is a return value, i.e. failure is shown in the output text or by checking the saved results file.
You can use the /resultsfile:[file name] option with mstest.exe to create a ".trx" file. Its contents are XML, and it contains a section similar to:
<ResultSummary outcome="Completed">
<Counters total="1" executed="1" passed="1" error="0" failed="0"
timeout="0" aborted="0" inconclusive="0" passedButRunAborted="0"
notRunnable="0" notExecuted="0" disconnected="0" warning="0"
completed="0" inProgress="0" pending="0" />
</ResultSummary>
(Extra white space added for clarity).
It should be a simple matter to examine the TRX file after the run and send an email if anything failed.
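For example, a minimal PowerShell sketch (the addresses and SMTP server are placeholders):
# Read the counters from the .trx file and send a mail if anything failed.
[xml]$trx = Get-Content .\TestResults.trx
$counters = $trx.TestRun.ResultSummary.Counters
if ([int]$counters.failed -gt 0 -or [int]$counters.error -gt 0) {
    Send-MailMessage -From "ci@example.com" -To "team@example.com" `
        -Subject "Web performance test failed" `
        -Body ("Failed: {0}  Errors: {1}" -f $counters.failed, $counters.error) `
        -SmtpServer "smtp.example.com"
}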
The Visual Studio (2010) GUI provides an option for specifying a second command variable file for the target. However, I can't find this option in the command-line implementation, vsdbcmd.exe.
Running a vsdbcmd deploy from dbschema to dbschema with command variables supplied only for the source model means that objects which use the variables are treated as changed, resulting in an incorrect update script.
The command I currently use:
vsdbcmd.exe /a:deploy /dd:- /dsp:sql /model:Source.dbschema /targetmodelfile:Target.dbschema /p:SqlCommandVariablesFile=Database.sqlcmdvars /manifest:Database.deploymanifest /DeploymentScriptFile:UpdateScript.sql /p:TargetDatabase="DatabaseName"
What I'm looking for is something like /p:TargetSqlCommandVariablesFile, if such a thing exists ...
The resulting script is the same as running the GUI compare without specifying the SQLCMD variables for the target.
I found what looks like full documentation for VSDBCMD.EXE at this link.
I think you may be looking for something like:
/p:SqlCommandVariablesFile=Filepath
In the end I found no information on how to do what I required; I checked the vsdbcmd libraries with ILSpy for hidden parameters and didn't find any.
I reached my goal by parsing the dbschema files for both the target and the current model and substituting the command variable values directly into them, then doing the compare on the modified dbschemas. This approach no longer allows changing SQLCMD variables in the resulting script (the values are already baked in), but this was deemed an acceptable loss.
Not the most beautiful solution, but so far I have had no issues with it.
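For what it's worth, the substitution step can be as simple as this PowerShell sketch (my reconstruction; it assumes the .sqlcmdvars file stores Property elements with PropertyName/PropertyValue children, which you should verify against your own file):
# Bake the sqlcmdvars values into a copy of the dbschema before comparing.
[xml]$vars = Get-Content .\Database.sqlcmdvars
$schema    = Get-Content .\Target.dbschema -Raw
foreach ($p in $vars.SqlCommandVariables.Properties.Property) {
    # Replace each $(Name) token with its literal value.
    $schema = $schema.Replace('$(' + $p.PropertyName + ')', $p.PropertyValue)
}
Set-Content .\Target.Baked.dbschema $schema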