I have built an SSIS Package for our HR Department.
The Data is manually downloaded from a web portal in .xls format.
The current process is not automated due to the inconsistent frequency of the data drop.
What I would like to do is alter the SSIS package so that it looks at the source data (they just save over the existing file every month) and have the job run as soon as the file's "Date Modified" value changes.
I am not familiar with C#, so I would like to avoid that option if possible; I am still a little new to all of this. I am hoping there is a Loop Container option or something similar.
*Additional data: the table that is loaded is truncated before the load. I don't know if this will factor in or not, but I wanted to put it out there.
There are a few approaches to accomplish this. Basically, you need to implement some sort of trigger mechanism that runs the package when the "Date Modified" of the source file changes.
Create a Windows service that uses WMI to detect the change in the modified date and launches the package (a sketch of this idea is shown below).
Create a package with an infinite loop and schedule it through a SQL Server Agent job. But please note that SSIS is resource intensive, so you have to plan the polling interval of the loop accordingly.
Set up a table and create an insert trigger on it that starts the Agent job with the sp_start_job system stored procedure. You could also use Service Broker to add a new item to a queue instead; executing the SSIS package from the queue is nowhere near as problematic as calling it directly from the trigger.
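As a rough illustration, here is a minimal C# sketch that combines the watcher idea with sp_start_job: it watches the source file and starts the Agent job that runs the package whenever the file is saved. It uses .NET's FileSystemWatcher rather than WMI for brevity, and the folder, file name, connection string, and job name are placeholder assumptions; a real version would run as a Windows service and debounce the duplicate change events the watcher raises for a single save.

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

class HrFileWatcher
{
    // Placeholder values -- adjust to your environment.
    const string WatchFolder = @"\\fileserver\HR\Drops";
    const string WatchFile   = "HRExtract.xls";
    const string ConnStr     = "Server=MYSQL01;Database=msdb;Integrated Security=SSPI;";
    const string JobName     = "Load HR Extract"; // the Agent job that runs the SSIS package

    static void Main()
    {
        using (var watcher = new FileSystemWatcher(WatchFolder, WatchFile))
        {
            watcher.NotifyFilter = NotifyFilters.LastWrite;   // react to "Date Modified" changes
            watcher.Changed += (s, e) => StartAgentJob();
            watcher.EnableRaisingEvents = true;

            Console.WriteLine("Watching for changes. Press Enter to exit.");
            Console.ReadLine();
        }
    }

    static void StartAgentJob()
    {
        // Kick off the Agent job; the job itself runs the SSIS package.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("msdb.dbo.sp_start_job", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@job_name", JobName);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```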
Some useful articles -
Execute SSIS package when a file is arrived at folder
Trigger SSIS package when files available in a Folder
Trigger SSIS package when files available in a Folder part#2
How to Check IF File Exists In Folder [Script Task]
Trigger SSIS package
I have an application that has been released for several years. I have updated the SQL Server database around 90 times and each time I have created an update SQL script, so when the user runs the new software version the database is updated.
I would like to amalgamate all the updates into a single SQL script to make deployment faster and simpler. Is there an easy way to do this? I presume I could simply grab the latest database, after it has run through all 90 updates, via SQL Server Management Studio?
EDIT
For clarification: the software I wrote automatically applies new database updates when the user downloads the latest version. This is done via C# / .NET, which looks on startup for embedded SQL scripts named in the format XX_update.sql and calls each script one by one, i.e.
1_update.sql - this creates the tables and initial data etc. This was my initial release database.
2_update.sql - updates to the initial database such as adding a new SP or changing column datatype etc
3_update.sql
4_update.sql
...
90_update.sql (4 years and lots of program updates later!).
Ideally, I would install my software and create a brand new database by running through all 90 update scripts, then take this database and convert it into a single script that can replace all 90 scripts above.
This is too long for a comment.
There is no "easy" way to do this. It depends on what the individual updates are doing.
There is a process you can follow in the future, though. Whenever an update occurs, you should maintain scripts both for incremental updates and for complete updates. You might also want to periodically introduce major versions, and be able to upgrade to and from those.
In the meantime, you'll need to build the complete update by walking through the individual ones.
I use a similar system at work, and while I prefer to run the scripts separately, I have sometimes amalgamated several scripts without any problem when they had to be deployed by another person.
In SQL Server the rule is that as long as you separate the scripts with GO and use SQL Server Management Studio or another tool that processes the batch separator properly, there is no problem in amalgamating them, because they still look like separate scripts to SQL Server (instead of being sent to SQL Server as one big script, the tool sends it in batches, using GO as the batch separator).
The only problem is that if you have an error in the middle of the script, the tool will continue sending the rest of the batches to the server (at least by default; maybe there is an option for changing this). That is why I prefer, when possible, to use a tool to run them separately, just to err on the safe side, stop if there is a problem, and easily locate the problematic command.
Also, for new deployments your best option is what you say: use a database that is already updated instead of taking the original one and applying all the updates. And to prevent being in this situation again, you can keep an empty template database that is not used for any testing and update it whenever you update the rest of the databases.
Are you running your updates manually on the server? Instead, you can create a .bat file, a PowerShell script, or an exe. The update scripts in your folder can be numbered, and the script can just pick up the updates one by one and execute them over the database connection instead of you doing it by hand.
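As a rough sketch of that idea (the connection string, folder, and the XX_update.sql naming are assumptions carried over from the question), a small console app could pick up the numbered scripts in order, split each file on GO the way SSMS would, and stop at the first error:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class UpdateRunner
{
    static void Main()
    {
        const string connStr      = "Server=.;Database=MyAppDb;Integrated Security=SSPI;"; // placeholder
        const string scriptFolder = @"C:\Updates";                                          // placeholder

        // Pick up 1_update.sql, 2_update.sql, ... in numeric order.
        var scripts = Directory.GetFiles(scriptFolder, "*_update.sql")
            .OrderBy(f => int.Parse(Path.GetFileNameWithoutExtension(f).Split('_')[0]));

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (var script in scripts)
            {
                // GO is a client-side batch separator, not T-SQL, so the file
                // has to be split into batches before it is sent to the server.
                var batches = Regex.Split(File.ReadAllText(script),
                    @"^\s*GO\s*$", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (var batch in batches.Where(b => !string.IsNullOrWhiteSpace(b)))
                {
                    using (var cmd = new SqlCommand(batch, conn))
                    {
                        cmd.ExecuteNonQuery(); // throws on error, so the run stops here
                    }
                }
                Console.WriteLine("Applied " + Path.GetFileName(script));
            }
        }
    }
}
```

Splitting on GO is the same thing SSMS does behind the scenes, which is why the amalgamated script behaves like the separate scripts did.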
If you have multiple script files and you want to combine them into one,
rename them to .txt, combine them, and rename the resulting file back to .sql.
https://www.online-tech-tips.com/free-software-downloads/combine-text-files/
This is easy. Download SQL Server Data Tools and use that to create the deployment script. If there is no existing database, it will create all the objects; if targeting an older version, it will perform a diff against the target database and create a script that adds the missing objects and alters the existing ones.
You can also generate scripts for the current version using SSMS, or use SSMS to take a backup and restore it as part of an install.
I have a Test database which is overwritten each week by a fresh new production copy.
But we have changes in our Test environment which I script in manually each Monday morning after the copy is created.
Is there a way to schedule script code to run which can generate my objects and data changes, e.g. new stored procedures etc.?
The job scheduler in SQL Server can import a SQL script, but it's not dynamic. I need something I can use in future, where it reads in the script each time before it runs and picks up any changes.
I suggest you create an SSIS package and use SMO inside a script component to generate the DDL.
This link may help you a little bit.
Using SMO is very easy and straightforward.
SMO tutorial
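As a minimal sketch of the SMO idea, the following scripts out the user stored procedures of a database. The server name, database name, and output path are placeholders, and it is shown as a standalone program for clarity; inside an SSIS script task the same code would go in the task's Main method.

```csharp
using System.IO;
using Microsoft.SqlServer.Management.Smo;

class ScriptProcedures
{
    static void Main()
    {
        // Server name, database name, and output path are placeholders.
        var server = new Server("MYTESTSERVER");
        var db = server.Databases["TestDb"];

        var scripter = new Scripter(server);
        scripter.Options.ScriptDrops = false;        // script CREATEs, not DROPs
        scripter.Options.IncludeIfNotExists = true;  // guard each CREATE with an existence check

        using (var writer = new StreamWriter(@"C:\Scripts\procs.sql"))
        {
            foreach (StoredProcedure sp in db.StoredProcedures)
            {
                if (sp.IsSystemObject) continue;     // skip built-in procedures

                foreach (string line in scripter.Script(new SqlSmoObject[] { sp }))
                {
                    writer.WriteLine(line);
                }
                writer.WriteLine("GO");
            }
        }
    }
}
```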
Similar to this post
I have an SSIS Package with a Script Task that creates an Excel file on disk and populates it with data from a SQL Stored Procedure (using Microsoft.Office.Interop.Excel). This works great when testing and when running the deployed package manually through the SSIS Catalog, but when I schedule the task to run automatically through SQL Server Agent, the Package fails in the Script Task step. I have the Job running as a Proxy account that is the same as the account I'm logged into the server with when testing (and the same as the account that works when manually running the packages).
My understanding is that even though the job is running under a proxy, any desktop interaction occurs within the profile context of the SQL Server Agent login. Since that profile isn't actively logged in, the interaction fails. Digging in more, there is a Boolean system variable in the package called "InteractiveMode" that is set to "False". I have a feeling that if I could switch that to True, everything would be hunky-dory. Trouble is, that variable is only accessible to my Script Task as "ReadOnly"...
Is there any way to set the System::InteractiveMode variable in an SSIS package manually or programmatically at runtime? Please help! I'm having to run these scheduled jobs manually for now, which is a big pain.
Thanks.
I had this problem a few months ago and it turned out that the execution options needed to be set to use 32 bit runtime. If you're using SQL Server 2008 R2, you can open your job and double click on the step. It's under the Execution Options tab.
If you continue to have errors, you may want to consider changing the package so that it uses a file system task to create/rename the excel document and then a Data Flow Task to move the data from your stored procedure to your excel document. Depending on your data, you may need to add a Data Conversion step in between. Here's a good article on the topic: http://www.mssqltips.com/sqlservertip/3046/sql-server-integration-services-data-type-conversion-testing/
Edit:
I haven't used SQL Server 2012 yet, but according to MSDN, it looks like the option is under the Configuration tab. Here's their article: http://msdn.microsoft.com/en-us/library/gg471507(v=sql.110).aspx
Our company is in the process of adopting TFS as our source repository and for project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any change to this database. As the company is expanding, we are going to have multiple dev teams, so I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure, which contains the SQL script to create the SP. The config folder contains any scripts similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and create it.
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement to update the version table in the database.
FINAL script is made available for customers to download and execute on the database. So the script mainly contains any add/edit to SPs, UDFs, and static data. It does not touch any existing data (data entered by user) in most cases.
As a newbie to TFS and SSDT, I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database into an SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without losing any user-entered data? My guess is that when publishing, the tables get created. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (by building the project) and provide those to your clients, along with having them install the SSDT tools, which include SqlPackage. They can run SqlPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
You can choose to not drop any objects not in your project
You can choose to ignore users/permissions
You can set an option to not allow changes if there would be data loss.
You can wrap everything in a transaction so a failed update rolls back
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
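Since you already ship a C# updater, a minimal sketch of shelling out to SqlPackage from C# might look like the following (the dacpac, publish profile, and output file names are placeholders; /Action:Script only generates the upgrade script for the target described by the profile, while /Action:Publish would apply the changes directly):

```csharp
using System.Diagnostics;

class PublishDacpac
{
    static void Main()
    {
        // File names are placeholders; the publish profile carries the target
        // connection details and the deployment options mentioned above.
        string args =
            "/Action:Script " +
            "/SourceFile:\"MyDatabase.dacpac\" " +
            "/Profile:\"MyDatabase.publish.xml\" " +
            "/OutputPath:\"upgrade.sql\"";

        using (var sqlPackage = Process.Start("SqlPackage.exe", args))
        {
            sqlPackage.WaitForExit();   // a non-zero exit code means script generation failed
        }
    }
}
```

The same call wrapped in a batch file works just as well; the point is that the dacpac plus the publish profile replaces the hand-maintained FINAL script.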
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.
I have a data dump that I manually initiate and I want to automate things now that they are working well. I have a system that exports data into Excel that I ultimately want to import into a SQL table.
I have an SSIS package that I used for the import and saved for re-use later. I just ran it manually and it works well. Now I would like to have it run when invoked by a file watcher, a schedule, or something similar, so that all I need to do is overwrite the Excel file and have it trigger the SSIS package to run its import.
Any ideas on how to make this happen?
SQL Server does its scheduling with SQL Agent, so try creating a schedule there to do what you want.