I have a SQL Server 2008 R2 database with an orphaned index, so I'm thinking I need to create a new database and move all the objects over to it.
I've scripted the creation of all the tables, stored procedures, etc., and now I'm at the point of moving the data. There are roughly 8000 tables, so I've used the Export Data Wizard to create four SSIS packages (transferring about 2,000 tables each).
My problem is that many of the tables contain a rowversion column, which causes errors when I open the projects in BIDS. If the problem field has the same name in every table, is there some way that I can do a bulk edit so the project ignores any column with this name? Or am I left with having to manually edit every table with an error in the project? Also, if there's a more efficient way to do this, I'm all ears.
Thanks in advance...
I should mention at the start that directly editing your .dtsx file can easily corrupt the SSIS package.
Each of the four SSIS packages should have a .dtsx file, which is just XML, so it is technically feasible to open it in a text editor. Maybe something like this can work:
1) Create a copy of your SSIS packages
2) Using BIDS, open one package and make one change
3) Compare the before and after copies of the package to see exactly what changed
If you are lucky, perhaps a simple find-and-replace across all the files can solve it. For a text editor, I recommend Sublime Text.
This post gives advice on editing .dtsx files.
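If you do go the find-and-replace route, it may help to first confirm which tables actually contain the rowversion column and that it really has the same name everywhere. A minimal T-SQL sketch, run against the source database (note that sys.types reports rowversion columns under the legacy name timestamp):

    -- List every table/column whose data type is rowversion (shown as 'timestamp' in sys.types)
    SELECT OBJECT_SCHEMA_NAME(c.object_id) AS [schema],
           OBJECT_NAME(c.object_id)        AS [table],
           c.name                          AS [column]
    FROM sys.columns AS c
    JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
    WHERE t.name = N'timestamp'
    ORDER BY [schema], [table];

The resulting list also tells you how many column mappings you would have to touch if you end up editing the packages by hand.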
I am new to databases and SSIS. Can anyone please let me know whether there is a way to view the SQL code generated by SSIS transformations?
I know that in BI reporting tools such as Business Objects, when we pull fields or columns into the reporting panel, we can view the corresponding SQL.
Similarly, in SSIS, is there any option to view the SQL for SSIS transformations?
Thanks in Advance
Raj
SSIS, unlike other tools, does not generate SQL per se. You can include your own SQL inside tasks and components, but I guess you are not interested in the SQL that you write yourself, but rather in what SSIS is doing behind the scenes.
An SSIS package is essentially an XML-structured file with a collection of properties marking up the flow and processing of its components. You can access this XML file by right-clicking the package and selecting View Code.
For an empty package this is a very small XML file. In a complex package, this file can be very large, as you will see all the tasks, components, parameters, variables, etc., as well as your own SQL code and C#/VB scripts, if any.
When the project is built, it generates a .ispac file, which is nothing more than a zip file containing the package(s) in the project plus a manifest, a content-types file, and any other files required for the package to be deployed and executed.
You can see what is inside a .ispac by renaming it to .zip and opening it. For example, you can build an empty package, rename the .ispac to .zip, and open it to inspect its contents.
In summary, unlike other tools that are purely SQL generators, SSIS does not have much generated code for you to see; all you can see is its structure, as described above.
Also, as mentioned by Marko Ivkovic in the comments, it might be possible to get some more information about what is happening at run time by using tools like SQL Profiler.
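As a rough, lighter-weight alternative to Profiler, you can also query the DMVs while a package is running to see what its connections are executing at that moment. This is only a sketch; the program_name filter is an assumption and should be adjusted to whatever your SSIS connections actually report:

    -- Show the SQL currently being executed by sessions whose client application name mentions SSIS
    SELECT s.session_id,
           s.program_name,
           t.text AS running_sql
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE s.program_name LIKE N'%SSIS%';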
I am working in a system that gets reloaded frequently. When the system is reloaded, a lot of files can be deleted or reverted to a previous stable build version (it's a dev environment). The only thing that doesn't get reloaded is the database.
I have been tasked with adding and removing .sql scripts in the environments so that, if required, someone can run them at a given time, usually two weeks after the files have been added to the system.
Initially I had thought of creating a directory to keep the files and making a table where one of the columns holds the path to each .sql file. This would allow someone to query the database and, using the path, execute the desired script. The problem is that the environment is constantly reloading, which could result in losing files.
Am I correct in assuming that the right way to approach this is to use a BLOB datatype to store the .sql files? The database isn't reloaded, so no files would be lost. I am new to SQL and I'm unsure what the correct approach is. Would VARCHAR also work? Is one datatype more efficient than the other?
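For illustration only, here is a minimal sketch of the kind of table being described, with made-up names, showing both storage options side by side; NVARCHAR(MAX) keeps the script readable and searchable as text, while VARBINARY(MAX) (a BLOB type) stores a byte-for-byte copy of the file:

    -- Hypothetical table holding deployment scripts inside the database
    CREATE TABLE dbo.DeploymentScript (
        ScriptId   INT IDENTITY(1,1) PRIMARY KEY,
        ScriptName NVARCHAR(260)  NOT NULL,
        ScriptText NVARCHAR(MAX)  NULL,  -- text option: easy to read and search
        ScriptBlob VARBINARY(MAX) NULL,  -- BLOB option: exact copy of the .sql file bytes
        AddedOn    DATETIME2      NOT NULL DEFAULT SYSDATETIME()
    );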
I wouldn't put SQL scripts in a SQL database; it doesn't feel right.
SQL scripts usually go in a Git repo, in order to be versioned and have better visibility.
I have about 20 tables and one general table in SQL. That main table has indexes on its columns. Using these indexes, I create a view by getting the data from the other 20 tables.
My question is: what would be the most efficient way to create a process that updates all of those tables accordingly from an Excel source? It should be future-proof (e.g., new Excel data being input once a month).
If it is an SSIS package, how would it look? Maybe you have examples of something similar?
Thank you for the help.
I for one do not like SSIS. I find it a pain to troubleshoot, but for some tasks it's fine. If I were you, I would:
Use the Data Import Wizard from within SQL Server Management Studio to import the Excel file.
Simply get the data into a staging table in SQL.
You'll have the option to save this as an SSIS package, which is good for automation.
Now, write a pile of SQL to sort and update the data as you wish. Perhaps make a series of stored procedures (a sketch follows below).
Create a SQL Server Agent job that runs your package, and then runs each stored procedure.
Writing a solution in this fashion will allow you to troubleshoot each step and makes reporting easy. You could do the whole thing in SSIS, but like I said, I'm not a fan of that tool. I like my code on the command line as much as possible for troubleshooting :)
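To make the middle steps concrete, here is a rough sketch under made-up names (dbo.ExcelStaging is the table the wizard loads, dbo.MainTable stands in for one of your 20 targets); the real keys and columns would obviously differ:

    -- Hypothetical staging table the import wizard loads the Excel data into
    CREATE TABLE dbo.ExcelStaging (
        BusinessKey INT           NOT NULL,
        SomeValue   NVARCHAR(100) NULL
    );
    GO
    -- Hypothetical stored procedure that pushes staged rows into one target table
    CREATE PROCEDURE dbo.usp_LoadMainTable
    AS
    BEGIN
        SET NOCOUNT ON;
        MERGE dbo.MainTable AS tgt
        USING dbo.ExcelStaging AS src
            ON tgt.BusinessKey = src.BusinessKey
        WHEN MATCHED THEN
            UPDATE SET tgt.SomeValue = src.SomeValue
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (BusinessKey, SomeValue)
            VALUES (src.BusinessKey, src.SomeValue);
    END;
    GO

A SQL Server Agent job can then run the saved import package as its first step and a procedure like this as the next step.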
I used this app from the Windows Store to convert Excel into a SQL script.
Then I sent the script to our DBA.
Our company is in the process of adopting TFS for source control and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012, and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any change to this database. As the company is expanding, we are going to have multiple dev teams. So I am planning to store the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure, containing the SQL script to create it. The Config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
Each SQL script contains code to drop the procedure and re-create it.
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an UPDATE statement to update the version table in the database.
The FINAL script is made available for customers to download and execute against the database. So the script mainly contains any additions/edits to SPs, UDFs, and static data. It does not touch any existing data (data entered by users) in most cases.
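For context, one repeated block of such a FINAL script, plus the version update appended at the end, might look roughly like this (object and table names are made up for the example):

    -- One block per stored procedure: drop it if present, then re-create it
    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerId INT
    AS
    BEGIN
        SELECT CustomerId, CustomerName
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END;
    GO
    -- Appended by the concatenation app: record the release version
    UPDATE dbo.DatabaseVersion
    SET VersionNumber = N'1.2.3', AppliedOn = GETDATE();
    GO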
As a newbie to TFS and SSDT, I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without the loss of any user-entered data? My guess is that when publishing, the tables get created. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (through building the project) and provide those to your clients along with having them install the SSDT tools that include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
You can choose to not drop any objects not in your project.
You can choose to ignore users/permissions.
You can set an option to not allow changes if there would be data loss.
You can wrap everything in a transaction so a failed update rolls back.
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.
I have about 1200 tables in my Oracle database and need to import them into a SQL Server database. But I would like to configure the import in such a way that, for any given import, I can select the tables that need to be imported.
So I have a custom XML file listing all the tables, with a flag for each table indicating whether that table is to be imported or not. I have also created a package to import all the tables and would like to modify it to check, at runtime, whether each table is to be imported according to the XML file.
I was thinking of implementing something like what is given here, but I don't want to do that for this many tables, and I also don't know whether it will do the job.
How can I get around this? Can I use an SSIS configuration file for this (not sure though)? Is there any way I can read the XML at runtime and import tables based on the XML file (or any other file with key-value pairs)?
Any help in any form would be greatly appreciated.
It might seem like a lot of work, but this is how I'd approach it:
Create one package for each table that needs to be imported - so 1200 packages.
Store the package names in a metadata table along with a flag column indicating whether that package needs to be executed or not (a sketch of this table follows these steps).
Create a parent package.
Add an Execute SQL Task in the parent package with a SQL command like this: select PackageName from metadataTable where Flag = 1. This retrieves the list of packages that need to be executed.
Map the result set to an object variable.
Add a Foreach Loop container.
Add an Execute Package Task inside the Foreach Loop container, and parameterize the package name property.
This whole setup reads the packages that need to be executed and executes them one after the other.
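A minimal sketch of the metadata table and the Execute SQL Task query, with illustrative names:

    -- Metadata table driving the parent package
    CREATE TABLE dbo.PackageMetadata (
        PackageName NVARCHAR(200) NOT NULL PRIMARY KEY,
        Flag        BIT           NOT NULL DEFAULT 1  -- 1 = execute on the next run
    );

    -- Query used by the Execute SQL Task; the full result set is mapped to the object variable
    SELECT PackageName
    FROM dbo.PackageMetadata
    WHERE Flag = 1;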
If you like this approach, check out Andy Leonard's SSIS framework.
Samuel Vanga has a solid approach. The only thing I would look at doing is using something to programmatically generate those 1200 packages.
Depending on your familiarity with the SSIS object model and general .NET development, I'd investigate EzAPI if you enjoy coding.
Otherwise, look at BIML and the package generation feature of BIDSHelper. You do not need to buy a license for Mist to create your BIML script; you can browse the existing scripts on BIMLScript and probably solve most of your needs. Copy, paste, generate.