How to include an update query in an Azure DevOps release pipeline - SQL

We have added a new column to a particular table, and when we run the release pipeline in Azure DevOps we need to execute an update query to set a default value for the newly added column.
How can we do that?
How is this possible using a SQL query file?
Can anybody help?

Just use the SQL Server database deploy task:
https://learn.microsoft.com/en-us/azure/devops/pipelines/targets/azure-sqldb?view=azure-devops&tabs=yaml
Make sure your SQL script checks whether the column already exists, or the job will fail when the pipeline runs more than once.
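For example, a minimal sketch of such an idempotent script (dbo.MyTable, NewColumn, and the default value 0 are placeholder assumptions, not names from the question):

IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.MyTable')
                 AND name = N'NewColumn')
BEGIN
    ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;
END
GO
-- Backfill the default value for rows that existed before the column was added.
UPDATE dbo.MyTable
SET NewColumn = 0
WHERE NewColumn IS NULL;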

Related

Schedule a SQL query to move data from one table to another with Azure SQL DB

I have a simple query that takes old data from a table and inserts the data into another table for archiving.
DELETE FROM Events
OUTPUT DELETED.*
INTO ArchiveEvents
WHERE Events.event_time < DATEADD(DAY, -90, GETDATE());
I want this query to run daily.
As I currently understand, there is no SQL Server Agent when using Azure SQL DB, so SQL Server Agent does not seem like the solution here.
What is the easiest/best solution to this using Azure SQL DB?
There are multiple ways to run automated scripts on Azure SQL Database:
Using Automation account runbooks.
Using Elastic Database Jobs in Azure.
Using Azure Data Factory.
As you are running just one script, I would suggest you take a look at Automation account runbooks. As an example, below is a PowerShell runbook that executes the statement.
$database = @{
    'ServerInstance' = 'servername.database.windows.net'
    'Database'       = 'databasename'
    'Username'       = 'uname'
    'Password'       = 'password'
    'Query'          = 'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE Events.event_time < DATEADD(DAY, -90, GETDATE())'
}
# Splat the parameter hashtable into Invoke-Sqlcmd
Invoke-Sqlcmd @database
Then, it can be scheduled in the Automation account as needed.
You asked in part for a comparison of Elastic Jobs to runbooks.
Elastic Jobs will also run a pre-determined SQL script against a target set of servers/databases. Elastic Jobs were built internally for Azure SQL by Azure SQL engineers, so the technology is supported at the same level as Azure SQL.
Elastic Jobs can be defined and managed entirely through PowerShell scripts, but they also support setup/configuration through T-SQL, as sketched below.
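For instance, a hedged sketch of the T-SQL setup, run against the Elastic Jobs agent's job database (the target group, job, and credential names here are placeholder assumptions):

EXEC jobs.sp_add_target_group @target_group_name = 'ArchiveTargets';
EXEC jobs.sp_add_target_group_member
     @target_group_name = 'ArchiveTargets',
     @target_type = 'SqlDatabase',
     @server_name = 'servername.database.windows.net',
     @database_name = 'databasename';
EXEC jobs.sp_add_job
     @job_name = 'DailyArchive',
     @description = 'Archive events older than 90 days';
EXEC jobs.sp_add_jobstep
     @job_name = 'DailyArchive',
     @command = N'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE Events.event_time < DATEADD(DAY, -90, GETDATE());',
     @credential_name = 'JobRunCredential',
     @target_group_name = 'ArchiveTargets';
-- Enable the job and run it once per day.
EXEC jobs.sp_update_job
     @job_name = 'DailyArchive',
     @enabled = 1,
     @schedule_interval_type = 'Days',
     @schedule_interval_count = 1;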
Elastic Jobs are handy if you want to target many databases: you set up the job once, set the targets, and it runs everywhere at once. If you have many databases on a given server that would be good targets, you only need to specify the target server, and all of the databases on that server are automatically targeted.
If you are adding or removing databases on a given server and want the job to adjust to that change dynamically, Elastic Jobs is designed to do this seamlessly: you just configure the job against the server, and every time it runs it targets all (non-excluded) databases on that server.
For reference, I am a Microsoft employee who works in this space.
I have written a walkthrough and fuller explanation of Elastic Jobs in a blog series. Here is a link to the entry point of the series: https://techcommunity.microsoft.com/t5/azure-sql/elastic-jobs-in-azure-sql-database-what-and-why/ba-p/1177902
You can use Azure Data Factory: create a pipeline that executes the SQL query, and add a trigger to run it every day. Azure Data Factory is used to move and transform data in Azure SQL and other storage.

Trying to add a column (alter a table) through post-deployment scripts; adding data to the newly created column throws an error

We use post-deployment scripts to maintain a history of the data in the tables. I am trying to add a new column to an existing table through a post-deployment script: one script adds the new column, and a second script adds data to the newly inserted column. When I publish my database, I can see my ALTER TABLE script run before the data is added, but it still throws the error 'Invalid column name 'NewlyAddedColumn''.
My question: can we alter the schema using post-deployment scripts? I tried adding a COMMIT after altering the table in the post-deployment script but still encountered the same error. I am running the post-deployment script that adds the new column before the one that inserts data into it. Could someone help me with this issue?
Yes, you can. Right-click the post-deployment script, go to its properties, and set Build = None. But why do you want to add the column in the post-deployment script? Why don't you add it to the project itself?
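As a side note on the 'Invalid column name' error: SQL Server compiles a whole batch before running it, so an UPDATE that references a column added by an ALTER TABLE in the same batch fails at compile time. A minimal sketch of a post-deployment script that avoids this by separating the batches with GO (dbo.History and the default value are placeholder assumptions):

IF COL_LENGTH('dbo.History', 'NewlyAddedColumn') IS NULL
    ALTER TABLE dbo.History ADD NewlyAddedColumn NVARCHAR(50) NULL;
GO
-- This batch is compiled only after the ALTER TABLE batch has run.
UPDATE dbo.History
SET NewlyAddedColumn = N'default'
WHERE NewlyAddedColumn IS NULL;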

How to export data to Excel in SQL Server using SQL jobs

I need to export the data from a particular table in my database to Excel files (.xls/.xlsx) located in a shared folder on my network. The situation is as follows:
1. I need to use SQL Server Agent jobs.
2. I need to generate a new Excel file every 2 minutes containing the refreshed data.
I am using SQL Server 2008, which doesn't include BI Development Studio, and I'm clueless about how to solve this. First, I'm not sure how to export the data using jobs, because every approach I tried ran into issues with the OLE DB connection. The 'sp_makewebtask' procedure is also not available in SQL Server 2008. And I'm also confused about how to generate the file names dynamically.
Any reference or solution would be helpful.
Follow the steps given below:
1) Create a stored procedure that creates a temporary table and inserts records into it.
2) Create a stored procedure that reads records from that temporary table and writes them to a file. You can use this link: clickhere
3) Create a SQL Agent job that executes step 1 and step 2 sequentially.
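For step 2, one hedged sketch of writing a timestamped file (it assumes xp_cmdshell is enabled and the share is writable; the server, database, and table names are placeholders; note that bcp produces CSV rather than a true .xls/.xlsx file, though Excel opens it):

-- Build a file name like Export_20240101_120000.csv from the current date/time.
DECLARE @stamp VARCHAR(20);
SET @stamp = CONVERT(VARCHAR(8), GETDATE(), 112) + '_'
           + REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '');
DECLARE @cmd VARCHAR(4000);
SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.MyTable" queryout '
         + '"\\myserver\share\Export_' + @stamp + '.csv" -c -t, -T -S ' + @@SERVERNAME;
EXEC master..xp_cmdshell @cmd;

This also covers the dynamic file names, since the timestamp changes on every run.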
I found a better way. I created an SSIS (SQL Server Integration Services) package to automate the whole export-to-Excel task, then deployed that package using a SQL Server Agent job. This is a neater and cleaner solution, as I found.

SSIS - Extract multiple databases based on lookup table

How do I create a package that extracts multiple databases (and all tables in each database) from another server, based on a lookup table (also found on the other server) that contains a column listing all the databases I need to extract?
I need to use the lookup table because new databases are created on the source from time to time, so I cannot just create a job that extracts a "static set" of databases to a destination. It needs to be a bit dynamic...
Furthermore, I also need to extract the databases incrementally, using a timestamp that exists in all databases/tables.
I'm new to SSIS, so an "easy" guide would be appreciated.
Thanks
As a rough idea, you could work with SSIS package configurations and execute packages from within packages, using the Transfer SQL Server Objects Task:
Make a "Main package" that iterates over the column in your lookup table.
For each entry, it should UPDATE the package configuration entry of your second SSIS package accordingly. Use the "SQL Server" configuration type for that second package.
The Main package should then execute the second package; there is also a task for this.
The second package looks at its configuration to find out which server to get databases from and uses the Transfer SQL Server Objects Task to do so.
Then the Main package continues with the next entry from your lookup table.
Ideally you would want to have your second SSIS package inside SQL Server's MSDB rather than on the file system; it's easier to execute.

Move data from one table to another every night in SQL Server

I have this scenario: I have a staging table that contains all the records imported from an XML file. Now I want to move this data with a verification step: if the record is already in the other table, update it; otherwise, insert the new record. I want to create a job or schedule in SQL Server that does this for me every night, without using any SSIS packages.
Have you tried using the MERGE statement?
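MERGE handles exactly this update-or-insert pattern in one statement. A hedged sketch with placeholder table and column names:

MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.UpdatedAt = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, UpdatedAt) VALUES (s.Id, s.Name, GETDATE());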
SSIS really is an easy way to go with something like this, but if necessary, you can set up a SQL Server Agent job. Take a look at this MSDN article. Basically, write your validation code in a stored procedure, then create a job with a T-SQL job step that calls that stored procedure.
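As a sketch of that setup (the job, schedule, and procedure names are hypothetical), the job itself can also be created in T-SQL against msdb:

USE msdb;
EXEC dbo.sp_add_job @job_name = N'NightlyStagingMerge';
EXEC dbo.sp_add_jobstep
     @job_name = N'NightlyStagingMerge',
     @step_name = N'Run merge procedure',
     @subsystem = N'TSQL',
     @database_name = N'MyDb',
     @command = N'EXEC dbo.usp_MergeStagingIntoTarget;';
EXEC dbo.sp_add_jobschedule
     @job_name = N'NightlyStagingMerge',
     @name = N'Nightly at 2 AM',
     @freq_type = 4,              -- daily
     @freq_interval = 1,          -- every 1 day
     @active_start_time = 020000; -- 02:00:00
EXEC dbo.sp_add_jobserver @job_name = N'NightlyStagingMerge';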