Deploy a database project build output (dbschema & co) to various databases

I am trying to build a database project and then use the output, along with VSDBCMD, to deploy it to different databases. I have added separate sqldeployment and sqlcmdvars files for each environment I am targeting, but I am not able to change the following variables, which appear to be read-only: DatabaseName, DefaultDataPath and DefaultLogPath. For example, I would like to have the QA and UA databases on the same server instance, so I need my deployment script to work with different database names and file paths. Moreover, when I build the database project in TFS I cannot find the environment-specific sqlcmdvars and sqldeployment files (e.g. MyDatabase-QA.sqlcmdvars, MyDatabase-UA.sqlcmdvars) in my drop folder.
Am I doing something wrong? What other options do I have?
Thanks in advance!

A little late for you maybe, but after a bit of digging, here is what I did to set a custom file path for the MDF/LDF.
Create a new variable in the sqlcmdvars file.
Use that variable in the file-definition script under DBProject\Schema Objects\Database Level Objects\Storage\Files, as follows:
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [MYDBNAME], FILENAME = '$(DataLocation)\MyDatabaseName.mdf', SIZE = 2304 KB, FILEGROWTH = 1024 KB) TO FILEGROUP [PRIMARY];
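You can do the same thing for the LDF with an ADD LOG FILE entry. A minimal sketch along the same lines (the logical name, file name and sizes here are only illustrative):
ALTER DATABASE [$(DatabaseName)]
ADD LOG FILE (NAME = [MYDBNAME_log], FILENAME = '$(DataLocation)\MyDatabaseName_log.ldf', SIZE = 1024 KB, FILEGROWTH = 1024 KB);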

Related

How can I conditionally include large scripts in my ssdt post deployment script?

In our SSDT project we have a script that is huge and contains a lot of INSERT statements for importing data from an old system. Using sqlcmd variables, I'd like to be able to conditionally include the file into the post deployment script.
We're currently using the :r syntax which includes the script inline:
IF '$(ImportData)' = 'true'
BEGIN
:r .\Import\OldSystem.sql
END
This is a problem because the script is being included inline regardless of whether $(ImportData) is true or false and the file is so big that it's slowing the build down by about 15 minutes.
Is there another way to conditionally include this script file so it doesn't slow down the build?
Rather than muddy up my prior answer with another, here is a special case with a VERY simple option.
Create separate SQLCMD input files for each execution possibility.
The key here is to name the execution input files using the value of your control variable.
So, for example, your publish script defines variable 'Config' which may have one of these values: 'Dev','QA', or 'Prod'.
Create 3 post deployment scripts named 'DevPostDeploy.sql', 'QAPostDeploy.sql' and 'ProdPostDeploy.sql'.
Code your actual post deploy file like this:
:r ."\"$(Config)PostDeploy.sql
This is very much like the build event mechanism where you overwrite scripts with appropriate ones except you don't need a build event. But you are dependent upon naming your scripts very specifically.
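For illustration, a minimal sketch of how the pieces could fit together (the variable name 'Config' and the file names simply follow the convention above; adjust them to your project):
-- Script.PostDeployment.sql (the project's designated post-deployment script)
:r .\$(Config)PostDeploy.sql

-- DevPostDeploy.sql: only the Dev deployment pulls in the big import
:r .\Import\OldSystem.sql

-- ProdPostDeploy.sql: nothing environment-specific for production
PRINT 'No environment-specific post-deploy work for Prod';
At publish time you then set the variable, for example with /v:Config=Dev on the SqlPackage command line or in the publish profile.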
Scripts referenced using :r are always included. You have a couple of options, but I would first verify that taking the script out actually improves the build time as much as you need.
The simplest approach is to keep it out of the build process entirely and turn your deployment into a two-step process (deploy the dacpac, then run the script). The upside is that you can do things outside of the SSDT process; the downside is that you lose things like the automatic disabling of constraints on tables that change during the deployment.
The second way is to exclude the script from the build but add an AfterBuild MSBuild task that inserts it into the dacpac as a post-deploy script. The dacpac is a zip file, so you can use the .NET packaging API to add a part called postdeploy.sql, which will then be included in the deployment process.
Both of these approaches mean you lose build-time verification of the script, so you might want to keep it in a separate SSDT project that has a "same database" reference to your main project; that project will slow down the build when the script changes but should be quick the rest of the time.
Here is the way I had to do it.
1) Create a dummy post-deploy script.
2) Create build configurations in your project for each deploy scenario.
3) Use a pre-build event to determine which post deploy configuration to use.
You can either create separate scripts for each configuration or dynamically build the post-deploy script in your pre-build event. Either way, you base what you do on the value of $(Configuration), which is always available in a build event.
If you use separate static scripts, your build event only needs to copy the appropriate static file, overwriting the dummy post-deploy script with whichever script applies to that deploy scenario.
In my case I had to use dynamic generation, because the decision about which scripts to include required knowing the current state of the database being deployed to. So I used the configuration variable to tell me which environment was being deployed to, and then ran a SQLCMD script with :OUT set to my post-deploy script location, so that the pre-build step wrote the post-deploy script dynamically.
Either way, once the build completed and the normal deploy process started, the post-deploy script contained exactly the :r commands that I wanted.
Here's an example of the SQLCMD script I invoke in pre-build.
:OUT .\Script.DynamicPostDeployment.sql
PRINT ' /*';
PRINT ' DO NOT MANUALLY MODIFY THIS SCRIPT. ';
PRINT ' ';
PRINT ' It is overwritten during build. ';
PRINT ' Content IS based on the Configuration variable (Debug, Dev, Sit, UAT, Release...) ';
PRINT ' ';
PRINT ' Modify Script.PostDeployment.sql to effect changes in executable content. ';
PRINT ' */';
PRINT 'PRINT ''PostDeployment script starting at''+CAST(GETDATE() AS nvarchar)+'' with Configuration = $(Configuration)'';';
PRINT 'GO';
IF '$(Configuration)' IN ('Debug','Dev','Sit')
BEGIN
IF (SELECT IsNeeded FROM rESxStage.StageRebuildNeeded)=1
BEGIN
-- These get a GO statement after every file because most are really HUGE
PRINT 'PRINT ''ETL data was needed and started at''+CAST(GETDATE() AS nvarchar);';
PRINT ' ';
PRINT 'EXEC iESxETL.DeleteAllSchemaData ''pExternalETL'';';
PRINT 'GO';
PRINT ':r .\PopulateExternalData.sql ';
....
I ended up using a mixture of our build tool (Jenkins) and SSDT to accomplish this. This is what I did:
Added a build step to each environment-specific Jenkins job that writes to a text file. Depending on the build parameters the user chooses, I either write a SQLCMD :r command that includes the import file, or I leave the file blank.
Include the new text file in the post-deployment script via :r.
That's it! I also use this same approach to choose which pre and post deploy scripts to include in the project based on the application version, except that I grab the version number from the code and write it to the file using a pre-build event in VS instead of in the build tool. (I also added the text file name to .gitignore so it doesn't get committed)
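A sketch of how the generated file can be wired in (the file names here are just examples): the post-deployment script always includes the generated file, and the Jenkins step decides what goes inside it.
-- Script.PostDeployment.sql
:r .\Generated\ConditionalImport.sql

-- ConditionalImport.sql, as written by the Jenkins build step, is either empty
-- or contains the single line below, depending on the chosen build parameter:
-- :r .\Import\OldSystem.sql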

Execute scripts by relative path in Oracle SQL Developer

First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and by the way, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/
  A/
    runAll.sql
    A1.sql
    A2.sql
  B/
    runAll.sql
    B1.sql
    B2.sql
I would like to have a file scripts/runEverything.sql something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means a relative path in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
#'&1/A/runAll.sql' '&1/A'
#'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default script directory.
- Open a driver script, e.g. runAll.sql (which then makes its own folder the working directory), and use relative paths within runAll.sql to call its sibling scripts.
Set up your default scripts folder. On the SQL Developer toolbar, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a script folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer use File > Open to open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script".
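If you also want the nested, two-level structure from the question, the @@ form (run a script relative to the directory of the script that calls it) should save you from passing the path down at all. A sketch, assuming your SQL Developer version handles @@ the same way SQL*Plus does:
-- scripts/runEverything.sql
@@A/runAll.sql
@@B/runAll.sql

-- scripts/A/runAll.sql
@@A1.sql
@@A2.sql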
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional parameters (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
#'&path/A/runAll.sql' '&path/A'
#'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything that runs, does not also (re-define) path this should work, and you just need to choose a unique name if there is the risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past...
You need to provide the path of the file as a string; give the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of a SQL file:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql (parameter)
@A2.sql (parameter)
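Inside the called script the parameters arrive as the positional substitution variables &1, &2 and so on. For example (the owner name here is only illustrative):
-- caller
@A1.sql SCOTT

-- A1.sql
SELECT COUNT(*) FROM all_tables WHERE owner = UPPER('&1');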
This is not an absolute vs. relative path issue. It's a SQL interpreter issue: by default it looks for files with a .sql extension.
Please make sure the file name is of the form file_name.sql.
E.g. if the workspace has a file named "A", rename it to "A.sql".

How to split sql in MAC OSX?

Is there any app for the Mac to split SQL files, or even a script to do it?
I have a large file which I have to upload to a host that doesn't support files over 8 MB.
*I don't have SSH access
You can use this: http://www.ozerov.de/bigdump/
Or
Use this command to split the SQL file:
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the last part is the prefix for the output files. Files will be saved with that prefix (e.g. "dbpart-") plus an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
Which makes up to 99 files named 'db-part[n]' from db.sql
You can use "CREATE TABLE" or "INSERT INTO" instead of "# Dump of ..."
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!

Rename a file at runtime with vb.net

I am trying to rename a file using VB.NET in this way:
My.Computer.FileSystem.RenameFile(oldName, newName)
But if I use file-recovery software, I find a file named "_ldname", and if I recover that "_ldname" file I then have two identical files.
Can I do this without ending up with a duplicate of my file?
Best regards
Sebastiano
You cannot; this is a Windows filesystem limitation and has nothing to do with programming. Two files cannot have the same name in the same location.
The recovery software should be forcing a rename to Myfile(1).txt or something like that to distinguish between the two files.
You could always use:
If Not System.IO.File.Exists(newName) Then My.Computer.FileSystem.RenameFile(oldName, newName)
to make sure the target file doesn't already exist. If it does exist, you could append a "(1)" to the file name.

Edit the control file in Oracle 10g Release 2

I tried to clone an Oracle database from one server to another.
After I completed the cloning, when I tried connecting to the database by starting SQL Plus
I got the following errors:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/home/oracle/oradata/ccisv2/system01.dbf'
I found that during the cloning the control file of the original database also got copied over, still recording the original file locations.
On the new server the data files are in a different location, but that is not reflected in the control file, which is the reason for the error.
In short I need to change the above path
/home/oracle/oradata/ccisv2/
to a new path
/home2/oracle/oradata/ccisv2/
I am not sure how can I change the control file and edit the path of the data file location.
Moving the data files back is not possible, as I have too little space in
/home/oracle/oradata/..
Can someone help me with this one?
You'll need to mount the database (not open it) and re-create the controlfile, renaming the data files in the process (see the CREATE CONTROLFILE command):
STARTUP MOUNT;
CREATE CONTROLFILE REUSE SET DATABASE "ORCL" RESETLOGS
MAXLOGFILES NN
MAXLOGMEMBERS N
MAXDATAFILES 254
MAXINSTANCES 1
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 '/home2/oracle/oradata/ccisv2/REDO01.LOG' SIZE 56M,
GROUP 2 '/home2/oracle/oradata/ccisv2/REDO02.LOG' SIZE 56M,
GROUP 3 '/home2/oracle/oradata/ccisv2/REDO03.LOG' SIZE 56M
DATAFILE
'/home2/oracle/oradata/ccisv2/SYSTEM.DBF',
'/home2/oracle/oradata/ccisv2/USERS.DBF',
'/home2/oracle/oradata/ccisv2/sysaux.DBF',
'/home2/oracle/oradata/ccisv2/TOOLS.DBF',
etc...
CHARACTER SET WE8ISO8859P1;
ALTER DATABASE OPEN RESETLOGS;
QUIT;
All of your database files need to be re-identified in the controlfile with their new location.
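If you don't want to type out the file list by hand, a convenient starting point is to dump the current controlfile to a trace script while the database is mounted, and then edit the paths in the generated CREATE CONTROLFILE statement. A sketch (the output file name is just an example):
-- writes a script containing a CREATE CONTROLFILE template for this database
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_controlfile.sql';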
Easiest is to just rename the datafiles to the new locations:
startup mount;
alter database rename file '/home/oracle/oradata/ccisv2/system01.dbf' to '/home2/oracle/oradata/ccisv2/system01.dbf';
and do this for all your files.
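With many datafiles you can let the database generate the statements for you. A sketch, assuming the only change is the /home to /home2 prefix (run it while mounted, spool the output and then execute the generated statements):
SELECT 'ALTER DATABASE RENAME FILE ''' || name || ''' TO '''
       || REPLACE(name, '/home/oracle/', '/home2/oracle/') || ''';'
FROM v$datafile;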
Normally we would use RMAN DUPLICATE with DB_FILE_NAME_CONVERT to do this for us.
Re-creating the controlfile is also an option, but renaming the files is easier.