I have several SSIS packages that use OData sources to pull data from SharePoint lists and libraries. Unfortunately, I'm not always notified when those lists and libraries are changed on the SharePoint server.
Now, when those packages break, I know how to fix them manually. That's not a problem. What I would like to know is whether there's any way to dynamically refresh the package so it recognizes that there are more, fewer, or different columns than there were before... that way, I don't have to redevelop the package unless/until the new columns need to be represented in the SQL Server database.
I'm building an MS Project VSTO tool (written in C#) that in many instances needs to either read or write data from a field in MS Project. Since I don't always know which field will contain the data I need, I often need to let the user select the field they want. Getting all the basic fields is easy; my issue arises when a user is in a Project Server environment and using Enterprise fields. So my question is twofold:
Is there a way to check if the user is in a Project Server environment?
Is there a way to easily get all of the custom enterprise fields that are being used in MS Project? I'd like to be able to capture these fields in a collection like a list or array.
Is there a way to check if the user is in a Project Server environment?
Look at the collection of Profiles to see if there is a Project Server one and check its ConnectionState to see if it's connected to a Project Server.
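A minimal sketch of that check, assuming a VSTO add-in holding a Microsoft.Office.Interop.MSProject.Application reference; the Profiles collection, Profile.Server, and Profile.ConnectionState come from the MS Project object model, but treat the exact property behavior as an assumption to verify against your Project version:

    using Microsoft.Office.Interop.MSProject;

    // Hedged sketch: look for a Project Server profile among Application.Profiles.
    // `app` is the add-in's Application instance.
    static bool IsProjectServerEnvironment(Application app)
    {
        foreach (Profile profile in app.Profiles)
        {
            // Assumption: Profile.Server holds the Project Server URL and is
            // empty for the local "Computer" profile.
            if (!string.IsNullOrEmpty(profile.Server))
            {
                // ConnectionState reports whether this profile is currently
                // connected to the server (a PjConnectionState value).
                System.Diagnostics.Debug.WriteLine(
                    profile.Name + " -> " + profile.ConnectionState);
                return true;
            }
        }
        return false;
    }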
Is there a way to easily get all of the custom enterprise fields that are being used in MS Project? I'd like to be able to capture these fields in a collection like a list or array.
If you have access to the project server, take a look at this page, Accessing Project Online enterprise custom fields. Without access to the server I suggest:
Loop through all tables and their fields to find the enterprise ones (see the sketch after this list).
Allow the user to enter the name of enterprise fields and store that information for future use so that it's a one-time 'setup' for the user.
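For the first option, a rough sketch of the loop; it assumes the display names of enterprise custom fields start with "Enterprise" (the object-model naming convention) and uses FieldConstantToFieldName to translate PjField constants:

    using System.Collections.Generic;
    using Microsoft.Office.Interop.MSProject;

    // Hedged sketch: walk every task table's fields and collect names that
    // look like enterprise custom fields. `app` is the interop Application.
    static List<string> GetEnterpriseFieldNames(Application app)
    {
        var names = new List<string>();
        foreach (Table table in app.ActiveProject.TaskTables)
        {
            foreach (TableField tf in table.TableFields)
            {
                // TableField.Field is a PjField constant; translate it to a
                // display name and keep it if it follows the naming convention.
                string name = app.FieldConstantToFieldName(tf.Field);
                if (name != null && name.StartsWith("Enterprise") && !names.Contains(name))
                    names.Add(name);
            }
        }
        return names;
    }

The same loop can be repeated over ResourceTables if resource-level enterprise fields matter.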
I am new to databases and SSIS. Can anyone please let me know whether there is a way to view the SQL code generated by SSIS transformations?
I know that in BI reporting tools such as Business Objects, when we pull fields or columns into the reporting panel, we can view the corresponding SQL.
Similarly, in SSIS, is there any option to view the SQL for SSIS transformations?
Thanks in Advance
Raj
SSIS, unlike other tools, does not generate SQL per se. You can include your own SQL inside tasks and components, but I guess you are not interested in the SQL you write yourself, but rather in what SSIS is doing behind the scenes.
An SSIS package is essentially an XML-structured file with a collection of properties marking up the flow and process of its components. You can access this XML file by right-clicking on the package and selecting View Code.
For an empty package this is a very small XML file. In a complex package, the file can be very large, as you will see all the tasks, components, parameters, variables, etc., as well as your own SQL code and C#/VB scripts, if any.
When the project is built, it generates a .ispac file, which is nothing other than a zip file containing the package(s) in the project plus a manifest, a content-types file, and any other file required for the packages to be deployed and executed.
You can see what is inside a .ispac by renaming it to .zip and opening it; for example, build the empty package above, rename the .ispac to .zip, and open it.
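You don't even need to rename it: since a .ispac is just a zip archive, a few lines of C# (with a made-up path) will list what's inside:

    using System;
    using System.IO.Compression;

    // A .ispac is an ordinary zip archive; list its entries directly.
    using (var archive = ZipFile.OpenRead(@"C:\projects\MyProject.ispac")) // hypothetical path
    {
        foreach (var entry in archive.Entries)
            Console.WriteLine(entry.FullName); // e.g. Package.dtsx, @Project.manifest, [Content_Types].xml
    }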
In summary, unlike tools that are purely SQL generators, SSIS does not give you much to see in terms of generated code; all you can see is its structure, as shown above.
Also, as mentioned by Marko Ivkovic in the comments, it might be possible to get more information about what is happening at run time by using tools like SQL Profiler.
Our company is in the process of adopting TFS for source repository and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any change to this database. As the company is expanding, we are going to have multiple dev teams. So I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure containing the SQL script to create it. The config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and create it.
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement for the version table in the database (see the sketch after this list).
The FINAL script is made available for customers to download and execute on the database. So the script mainly contains any additions/edits to SPs, UDFs, and static data. It does not touch any existing data (data entered by users) in most cases.
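To make the current process concrete, here is a minimal sketch of the kind of concatenation step the C# app performs; the folder layout, version-table name, and paths are made up for illustration:

    using System.IO;
    using System.Text;

    // Hedged sketch of the FINAL-script builder described above.
    static void BuildFinalScript(string scriptsRoot, string outputPath, string version)
    {
        var sb = new StringBuilder();
        // Concatenate every object script (UDFs, SPs, Config) into one file.
        foreach (var file in Directory.GetFiles(scriptsRoot, "*.sql", SearchOption.AllDirectories))
        {
            sb.AppendLine(File.ReadAllText(file));
            sb.AppendLine("GO");
        }
        // Stamp the database with the release version (hypothetical table name).
        sb.AppendLine("UPDATE dbo.SchemaVersion SET VersionNumber = '" + version + "';");
        File.WriteAllText(outputPath, sb.ToString());
    }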
As a newbie to TFS and SSDT, I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without the loss of any user-entered data? My guess is that publishing creates the tables. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (by building the project) and provide those to your clients, along with having them install the SSDT tools that include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
You can choose to not drop any objects not in your project
You can choose to ignore users/permissions
You can set an option to not allow changes if there would be data loss.
You can wrap everything in a transaction so a failed update rolls back
If you give them a batch file to run, you can specify an output file or a diff report, or have them generate their own script to do the update (see the sketch below).
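If the clients would rather run a small console app than a batch file, the same SqlPackage call can be launched from C#; a hedged sketch with placeholder file names, where the /p: properties mirror the publish-profile settings listed above:

    using System.Diagnostics;

    // Hedged sketch: publish MyDb.dacpac against the client's database.
    var psi = new ProcessStartInfo
    {
        FileName = "SqlPackage.exe",                     // assumes SqlPackage is on the PATH
        Arguments = "/Action:Publish " +
                    "/SourceFile:\"MyDb.dacpac\" " +
                    "/Profile:\"MyDb.publish.xml\" " +
                    "/p:BlockOnPossibleDataLoss=True " + // refuse changes that would lose data
                    "/p:DropObjectsNotInSource=False",   // keep objects that exist only on the client
        UseShellExecute = false
    };
    using (var proc = Process.Start(psi))
        proc.WaitForExit();

Swapping /Action:Publish for /Action:Script (plus /OutputPath) produces a reviewable upgrade script instead of deploying directly.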
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.
I am adding continuous integration testing to an existing Visual Studio 2010 database project. Right now we have a build that deploys an 'empty' database, [dbo].[MyDb], with just the reference data needed, such as locales and countries. Currently this is performed using SQL files containing insert statements that are run in the post-deployment SQL build task.
I now want to add another test deployment build that will deploy a second database, [dbo].[MyDb].[Test], to the same staging server, with the same reference data but with generated test data that has foreign keys to the reference data. Database integration tests are then run against that. Because the state needs to be restored for each test, this needs to be as fast as possible.
From what I've tried so far, to generate the test data using Visual Studio's data generation plan, it seems I need to get the reference data into a form that can be read by the databound generator so that it can generate the test data in a way that maintains referential integrity.
The possible options I can think of are:
Somehow get the data generation plan to read the reference SQL files?
Change the reference SQL files to CSV files and change the original build to do bulk inserts.
Combine the builds so that the MyDb database is always deployed first and set it as the sequential databound generator source for the test db.
Has anyone got a better approach or can point to a good guide?
I'm not an expert on build scripts, so I would like to take advantage of tools to do as much as possible. I want to keep things as a Visual Studio database project, but I also have a license for RedGate's SQL Tools if that would make the testing easier.
It appears that handling of reference data still isn't supported very well by database projects. This is confirmed by the comments on this post by Barclay Hill.
At the moment I've gone with the option of having a reference database and using that with a sequential databound generator. Since it doesn't change very often I just deploy it manually and have stopped short of having a whole separate project just for that as I've seen elsewhere.
Hopefully reference data handling will be added to SQL Server Data Tools at some point.
I have a solution that contains multiple reporting projects (one per target deployment folder - I think this is the only way to achieve this effect, at least until I abandon Visual Studio for report deployment).
I want to specify my data source information "once and only once" for all these reports.
So far, I have created a separate reporting project that contains my shared data source. If I deploy things to a reporting server in the right order and offer sufficient prayers to appropriate gods, the reports seem to link up to the shared data source there and run (at least via the Report Manager in IE).
When I am developing a report, though, I can no longer "Preview" to try it out locally - I now must deploy it to a report server to try running it. This is a hassle.
Is my only recourse to add a whole bunch of copies of a data source (pointing at my development database), one in each project, set those not to deploy off my machine, and (probably) exclude them from source control?
A technique (dirty trick?) I am playing with now is to copy my data source (.rds) into each project, close Visual Studio, then in the underlying files/folders:
Delete the copied .rds from my report projects (leaving only the one copy in my Data Sources project)
In each report project's project file (Foo.rptproj), change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Sources\My Shared Data Source.rds (see the sketch below).
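If there are many projects, that hand-edit can be scripted; here's a hedged sketch using XDocument, based on the element path named above (it assumes the old-style .rptproj XML without a namespace):

    using System.Xml.Linq;

    // Hedged sketch: repoint a report project's data source reference at the
    // single shared .rds kept in the Data Sources project.
    static void RepointDataSource(string rptprojPath)
    {
        var doc = XDocument.Load(rptprojPath);
        foreach (var fullPath in doc.Descendants("FullPath"))
        {
            if ((string)fullPath == "My Shared Data Source.rds")
                fullPath.Value = @"..\Data Sources\My Shared Data Source.rds";
        }
        doc.Save(rptprojPath);
    }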
This way all reporting projects reference the same underlying file on the filesystem, so they share a single data source definition, but each project also kind of has a "local" shared data source, so Visual Studio is kept happy.
Regarding source control: there is still only one copy of the .rds checked in, so we're not polluting the code base with lots of icky duplicates; the changes to the .rptproj files can be checked in, so we're not forcing developers into unnatural source-control gymnastics (selective partial commits etc.) to maintain a sane master copy.
Each reporting project will try to deploy this data source, though I've forbidden the overwriting of existing data sources on the server, so it's not too big a deal... and I suppose if I intended to overwrite the server's data source definition, it wouldn't really matter whether I overwrote it once or ten times with the same .rds.
Disclaimer: this is still an experiment. I don't have experience using this technique in practice yet, so I can't go so far as to actually recommend it.
Woody,
What we have tended to do is:
On the server have a folder called "DataSources", which is hidden from the users. In there will be all of the data sources.
For each reporting project in VS there will be a folder, also called "DataSources", but this time it will only contain the data source for this report.
As long as the folder structure is the same (i.e. report and data source have the same corresponding folder level on server and in VS) this seems to work for us.