Modify Bamboo variable via batch file - variables

I'm trying to set a bamboo global variable in a script contained in a batch file. Here is the batch file:
@echo off
echo Initial Date: %bamboo_releaseDate%
for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
set dow=%%i
set month=%%j
set day=%%k
set year=%%l
)
set mydate=%year%%month%%day%
echo %mydate:~2,6%
set bamboo_releaseDate=%mydate:~2,6%
echo Set up date: %bamboo_releaseDate%
And here is my output:
Initial Date: 140617
140619
Set up date: 140619
However, as soon as the script is run, Bamboo puts back the old value. Is there a way to avoid that? How would you suggest doing it?
My goal is to have one folder per nightly build with the date in the name of the folder. I therefore use the standard 'Artifact download' task from Bamboo and pass some parameters for the name of the containing folder.

Depending on your version of Bamboo, you could use ${system.} to store values for later use. More details here: https://confluence.atlassian.com/display/BAMBOO/Bamboo+variables
Is there a reason you want to overwrite the bamboo.releasedate value? If you have builds that start in the evening and finish in the morning, you could pass an artifact to each successive build to carry the release date forward. However, if you can run all your builds on the same day, adjust the start time and have your batch file build its own date and use that going forward (each time the build runs, it would regenerate the build date).
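Building on that last suggestion, here is a minimal sketch of a batch step that regenerates the YYMMDD stamp on every run instead of trying to persist it in a Bamboo global variable (the date parsing assumes the same "Dow MM/DD/YYYY" %date% format as the question, and the folder path is only a placeholder):
@echo off
rem Rebuild the date stamp at the start of every nightly build
for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
set month=%%j
set day=%%k
set year=%%l
)
set releaseDate=%year:~2,2%%month%%day%
rem Use the freshly built stamp wherever the folder name is needed, e.g.:
echo Artifacts folder: nightly\%releaseDate%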

Related

Passing a variable between TFS 2018 Release Task and then use that variable in a gate step

I'm trying to use a variable that I have defined in a release Task and then pull that value into a Gate step. It's just a simple pipeline to demonstrate some integration with ServiceNow.
After a Build, a release is kicked off. The code is automatically copied to the "test" environment.
The next step is a PowerShell script that creates a Change Request in ServiceNow. I am able to create the CR and then parse the JSON that is returned to capture the Change Request number. I use the following command in the last line of the PowerShell script to save the CR number:
Write-Host "##vso[task.setvariable variable=crnumber]$crnumber"
In the next step I use an extension to save the variables out to a JSON file.
Then, for debugging, I have a PowerShell step that reads the CR variable.
It produces this output in the logs
2020-01-16T19:08:05.4137159Z Read variable after
2020-01-16T19:08:05.4138360Z --------------------1--------------------------
2020-01-16T19:08:05.4140620Z CHG0417736
2020-01-16T19:08:05.4141674Z --------------------2-------------------------
2020-01-16T19:08:05.4143096Z CHG0417736
2020-01-16T19:08:05.4144492Z --------------------3-------------------------
2020-01-16T19:08:05.5656992Z ##[section]Finishing: Read CR fron ENV
After that I set up post-deployment conditions: one approval (me), and then another extension that queries ServiceNow for the "Ready to Implement" state.
Note that if I enter the CR number (CHG0417736) directly, it does work as expected. But I would like to use the $crnumber variable instead.
What am I missing here? Unfortunately I can't just add a PowerShell task at that point to try and debug the variable.
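For context, the logging command this question relies on follows a general pattern: one step writes ##vso[task.setvariable ...] to standard output, and later steps in the same agent job then see the value as an environment variable. A minimal sketch using plain command-line steps, with the CR number hard-coded purely for illustration:
rem Step A: set the pipeline variable by writing the logging command to stdout
echo ##vso[task.setvariable variable=crnumber]CHG0417736
rem Step B (a later task in the same job): read the value back
echo CR number seen by a later task: %CRNUMBER%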

Automating a command line with increasing file number

I am very new to creating batch files.
I have to run a command with an increasing file number, e.g.:
c:>program.bat -propertyfile "1.property"
Right now, I have to type the command manually, wait one minute, then type the command again with the next property file number, i.e. "2.property", "3.property", "4.property", etc.
I want to automate this, and still would like to see the results in the command prompt as it runs.
How can this be accomplished?
See https://ss64.com/nt/for.html and specifically https://ss64.com/nt/for_l.html
FOR /L %%G IN (1,1,4) DO CALL program.bat -propertyfile "%%G.property"
That should run your command for files 1.property through 4.property (CALL is needed so control returns to the loop after each run of the batch file). But if you're actually running it against files in a directory rather than a list of integers, one of the other FOR constructs might be more appropriate; perhaps https://ss64.com/nt/for_r.html
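The question also mentions waiting a minute between runs; a minimal sketch of a batch file that adds that pause (the count of 4 files and the 60-second delay are assumptions to adjust):
@echo off
rem Run the command for 1.property through 4.property, pausing 60 seconds between runs;
rem output from program.bat still appears in the console as each run executes
for /l %%G in (1,1,4) do (
call program.bat -propertyfile "%%G.property"
timeout /t 60 /nobreak
)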

How can I conditionally include large scripts in my ssdt post deployment script?

In our SSDT project we have a script that is huge and contains a lot of INSERT statements for importing data from an old system. Using sqlcmd variables, I'd like to be able to conditionally include the file into the post deployment script.
We're currently using the :r syntax which includes the script inline:
IF '$(ImportData)' = 'true'
BEGIN
:r .\Import\OldSystem.sql
END
This is a problem because the script is being included inline regardless of whether $(ImportData) is true or false and the file is so big that it's slowing the build down by about 15 minutes.
Is there another way to conditionally include this script file so it doesn't slow down the build?
Rather than muddy up my prior answer with another: there is a special case with a very simple option.
Create separate SQLCMD input files for each execution possibility.
The key here is to name the execution input files using the value of your control variable.
So, for example, your publish script defines variable 'Config' which may have one of these values: 'Dev','QA', or 'Prod'.
Create 3 post deployment scripts named 'DevPostDeploy.sql', 'QAPostDeploy.sql' and 'ProdPostDeploy.sql'.
Code your actual post deploy file like this:
:r .\$(Config)PostDeploy.sql
This is very much like the build event mechanism where you overwrite scripts with appropriate ones except you don't need a build event. But you are dependent upon naming your scripts very specifically.
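For completeness, the value of the Config variable is normally supplied at publish time; a hedged example of a SqlPackage command line (the server, database, and file names are placeholders):
rem Publish with Config=QA so the post-deploy wrapper pulls in QAPostDeploy.sql
SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /v:Config=QA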
The scripts referenced using :r are always included. You have a couple of options, but I would first verify that taking the script out actually improves the build time to where you want it to be.
The simplest approach is to keep the script outside of the whole build process and change your deploy process so it becomes a two-step thing (deploy the dacpac, then deploy the script). The positive is that you can do things outside of the SSDT process; the negative is that you don't get things like automatic disabling of constraints on tables that change in the deployment.
The second way is to not include the script in the deploy when you build, but create an AfterBuild msbuild task that adds the script as a post-deploy script in the dacpac. The dacpac is a zip file, so you can use the .NET packaging API to add a part called postdeploy.sql, which will then be included in the deployment process.
Both of these ways mean you lose verification, so you might want to keep the script in a separate SSDT project that has a "same database" reference to your main project; it will slow down the build when it changes but should be quick the rest of the time.
Here is the way I had to do it.
1) Create a dummy post-deploy script.
2) Create build configurations in your project for each deploy scenario.
3) Use a pre-build event to determine which post deploy configuration to use.
You can either create separate scripts for each configuration or dynamically build the post-deploy script in your pre-build event. Either way, you base what you do on the value of $(Configuration), which always exists in a build event.
If you use separate static scripts, your build event only needs to copy the appropriate static file, overwriting the dummy post-deploy with whichever script is useful in that deploy scenario.
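As an illustration of the static-script variant, the pre-build event can be a single copy command along these lines (the folder layout and file names are assumptions):
rem Overwrite the dummy post-deploy script with the one matching the current configuration
copy /Y "$(ProjectDir)PostDeployScripts\$(Configuration)PostDeploy.sql" "$(ProjectDir)Script.PostDeployment.sql"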
In my case I had to use dynamic generation because the decision about which scripts to include required knowing the current state of the database being deployed to. So I used the configuration variable to tell me which environment was being deployed to and then used an SQLCMD script with :OUT set to my Post-Deploy script location. Thus my pre-build script would then write the post-deploy script dynamically.
Either way, once build completed and the normal deploy process started the Post-Deploy script contained exactly the :r commands that I wanted.
Here's an example of the SQLCMD script I invoke in pre-build.
:OUT .\Script.DynamicPostDeployment.sql
PRINT ' /*';
PRINT ' DO NOT MANUALLY MODIFY THIS SCRIPT. ';
PRINT ' ';
PRINT ' It is overwritten during build. ';
PRINT ' Content IS based on the Configuration variable (Debug, Dev, Sit, UAT, Release...) ';
PRINT ' ';
PRINT ' Modify Script.PostDeployment.sql to effect changes in executable content. ';
PRINT ' */';
PRINT 'PRINT ''PostDeployment script starting at''+CAST(GETDATE() AS nvarchar)+'' with Configuration = $(Configuration)'';';
PRINT 'GO';
IF '$(Configuration)' IN ('Debug','Dev','Sit')
BEGIN
IF (SELECT IsNeeded FROM rESxStage.StageRebuildNeeded)=1
BEGIN
-- These get a GO statement after every file because most are really HUGE
PRINT 'PRINT ''ETL data was needed and started at''+CAST(GETDATE() AS nvarchar);';
PRINT ' ';
PRINT 'EXEC iESxETL.DeleteAllSchemaData ''pExternalETL'';';
PRINT 'GO';
PRINT ':r .\PopulateExternalData.sql ';
....
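Presumably the pre-build event runs that generator script through sqlcmd, roughly like this (the server, database, and file names are assumptions; -v passes the configuration through as a sqlcmd variable):
rem Hypothetical pre-build event command line
sqlcmd -S MyServer -d MyTargetDatabase -E -v Configuration=$(Configuration) -i Script.GeneratePostDeployment.sql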
I ended up using a mixture of our build tool (Jenkins) and SSDT to accomplish this. This is what I did:
Added a build step to each environment-specific Jenkins job that writes to a text file. I either write a SQLCMD command that includes the import file or else I leave it blank depending on the build parameters the user chooses.
Include the new text file in the Post Deployment script via :r.
That's it! I also use this same approach to choose which pre and post deploy scripts to include in the project based on the application version, except that I grab the version number from the code and write it to the file using a pre-build event in VS instead of in the build tool. (I also added the text file name to .gitignore so it doesn't get committed)
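A rough sketch of what that Jenkins batch step might look like (the build parameter name and the text file name are assumptions; the included path is the one from the question):
rem Write an :r include when the import is requested; otherwise leave the file empty
if /i "%ImportData%"=="true" (
echo :r .\Import\OldSystem.sql> ConditionalInclude.sql
) else (
type nul > ConditionalInclude.sql
)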

Creating a bat file which executes SQL scripts

I have a folder into which a number of MSQL scripts get dropped into after each weekly sprint. For example, 10 scripts were placed into the folder today. I had to then open each script individually and run it against the applicable database. The database that it needs to be run against is in the name of the file.
e.g. [2] [CRMdata]UpdateProc.sql
The [2] represents the sequence in which it is run, so script [1] needs to be run before it.
[CRMdata] is the database I have to run it against.
This process is very tiresome, especially if there are 50 scripts to run sequentially.
I was wondering if there was an easier way to do this?
Perhaps a .bat file, which reads the filename and executes the scripts sequentially based on the script number, as well as executing each one against the database specified in the file name.
Any help is much appreciated.
Thanks.
First, when you need to run things on a schedule, consider using SQL Server Agent jobs. This is a good way to schedule simple things.
For a task like this, I would recommend PowerShell in combination with "sqlcmd". This command is actually the answer to your question, since it will run scripts from the command line.
However, go a step further. Schedule a job that runs once per week (or whenever you want it run). Have it consist of one step, a PowerShell script. This can then loop through all the scripts in the directory, extract the database name from each file name, and run the script using sqlcmd. Along the way, also log what you are doing in a table so you can spot errors.
I don't know anything about executing SQL with MSQL. You will have to work out how to run each script against the proper database using whatever command-line utility is provided for MSQL.
I can help you with a batch file that will sort the SQL files in the correct sequence order, and parse out the name of the database.
The job is much easier in batch if the sequence numbers are zero prefixed to be a constant width. I'm assuming it is OK to rename the files, so that is what this solution does.
I also assumed you will never have more than 999 files to process. The code can easily be modified to handle more.
Some changes will have to be made if any file names contain the ! character because delayed expansion will corrupt the expansion of the FOR variables. But that is an unlikely problem.
#echo off
setlocal enableDelayedExpansion
:: Change the definition to point to the folder that contains the scripts
set "folder=sqlCodeFolder"
:: The mask will only match the pattern that you indicated in your question
set "mask=[*] [*]*.sql"
:: Rename the .sql files so that the sequence numbers are zero prefixed
:: to width of 3. This enables the default alpha sort of the directory to be
:: in the proper sequence
for /f "tokens=1* delims=[]" %%A in ('dir /b "%folder%\%mask%"') do (
set seq=00%%A
ren "%folder%\[%%A]%%B" "[!seq:~-3!]%%B"
)
::Process the renamed files in order
for %%F in ("%folder%\%mask%") do (
for /f "tokens=2 delims=[] " %%D in ("%%~nF") do (
rem %%F contains the full path to the sql file
rem %%D contains the name of the database, without enclosing []
rem Replace the echo line below with the proper command to run your script
echo run %%F against database [%%D]
)
)
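If the scripts are for Microsoft SQL Server and the sqlcmd utility is available, the echo line inside the inner loop could be replaced with something along these lines (the server name and the use of Windows authentication are assumptions):
sqlcmd -S MyServer -E -b -d "%%D" -i "%%F"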

batch scripting: how to get parent dir name without full path?

I'm working on a script that processes a folder and there is always one file in it I need to rename. The new name should be the parent directory name. How do I get this in a batch file? The full path to the dir is known.
It is not very clear how the script is supposed to become acquainted with the path in question, but the following example should at least give you an idea of how to proceed:
FOR %%D IN ("%CD%") DO SET "DirName=%%~nxD"
ECHO %DirName%
This script takes the path from the CD variable (the current directory) and extracts just the final name component into DirName.
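If the path is already known rather than taken from the current directory, the same ~nx modifier applies; a minimal sketch with hypothetical names:
rem "TargetDir" and "somefile.txt" are placeholders for the known folder and the file to rename;
rem the path must not end in a backslash or ~nx will not pick up the folder name
set "TargetDir=C:\Data\NightlyDrop"
for %%D in ("%TargetDir%") do set "DirName=%%~nxD"
ren "%TargetDir%\somefile.txt" "%DirName%.txt"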
In bash, you can use the basename command:
FULLPATH=/the/full/path/is/known
JUSTTHENAME=$(basename "$FULLPATH")
You can use built-in bash tricks:
FULLPATH=/the/full/path/is/known
JUSTTHENAME=${FULLPATH##*/}
Explanations:
the first # means 'remove the pattern from the beginning'
the second # means 'remove the longest possible pattern'
*/ is the pattern
Using bash built-ins avoids calling an external command (i.e. basename), so it optimises your script. However, the script is less portable.