How can I share a Data Source between multiple projects in Microsoft SQL Server 2005 Reporting Services and keep Visual Studio "Preview"?

I have a solution that contains multiple reporting projects (one per target deployment folder - I think this is the only way to achieve this effect, at least until I abandon Visual Studio for report deployment).
I want to specify my data source information "once and only once" for all these reports.
So far, I have created a separate reporting project that contains my shared data source. If I deploy things to a reporting server in the right order and offer sufficient prayers to appropriate gods, the reports seem to link up to the shared data source there and run (at least via the Report Manager in IE).
When I am developing a report, though, I can no longer "Preview" to try it out locally - I now must deploy it to a report server to try running it. This is a hassle.
Is my only recourse to add a whole bunch of copies of a data source (pointing at my development database), one in each project, set those not to deploy off my machine, and (probably) exclude them from source control?

A technique (dirty trick?) I am playing with now is to copy my data source (.rds) into each project, close Visual Studio, then in the underlying files/folders:
- Delete the copied .rds from my report projects (leaving only the one copy in my Data Sources project)
- In each report project's project file (Foo.rptproj), change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Sources\My Shared Data Source.rds
This way all reporting projects reference the same underlying file on the filesystem, so they share a single data source definition, but each project also kind of has a "local" shared data source, so Visual Studio is kept happy.
Regarding source control: there is still only one copy of the .rds checked in, so we're not polluting the code base with lots of icky duplicates; the changes to the .rptproj files can be checked in, so we're not forcing developers into unnatural source-control gymnastics (selective partial commits etc.) to maintain a sane master copy.
Each reporting project will try to deploy this data source, though I've forbidden the overwriting of existing data sources on the server, so it's not too big a deal... and I suppose if I intended to overwrite the server's data source definition, it wouldn't really matter whether I overwrote it once or ten times with the same .rds.
Disclaimer: this is still an experiment. I don't have experience using this technique in practice yet, so I can't go so far as to actually recommend it.

Woody,
What we have tended to do is:
On the server have a folder called "DataSources", which is hidden from the users. In there will be all of the data sources.
For each reporting project in VS there will be a folder, also called "DataSources", but this time it will only contain the data source for this report.
As long as the folder structure is the same (i.e. report and data source have the same corresponding folder level on server and in VS) this seems to work for us.

Track changes made to a database

Background:
I have an MS SQL Server database and I want to track changes to it: for example, if a column needed to be added or removed, or a table needed to be dropped. Something similar to version control for regular code.
The problem:
While looking around I saw that there were some tools that can be used:
RedGate SQL Source Control
Visual Studio Database project
I am more interested in knowing whether either of these tools will track changes to my database. More specifically, I have a TFS server that is the source control for my MVC code; can I use either of these with TFS? Will it allow us to restore from older versions? Will it allow multiple developers to work on the database simultaneously?
For this type of work, ApexSQL Source Control has proven to be all that you need. With this SSMS add-in you can work directly on a database, and all of your changes will be tracked in real time.
Yes, several developers can work at the same time on the same database. When one developer works on one or several objects, the other developers can see which objects those are, and until the first one finishes the change they will not be allowed to change those objects.
If an object is changed incorrectly, the previous version, or any earlier version, can be restored at any moment.
This add-in has all the options and features needed to let developers work without losing time checking the changes made against an object, since the add-in does that for them. And you can always see who made a change, when, and what it was.
Having been in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat database objects the way you treat your Java, C# or other files, saving the changes in simple DDL scripts.
There are many reasons and I'll name a few:
- Files are stored locally on the developer's PC, and the changes one developer makes do not affect other developers; likewise, a developer is not affected by changes made by a colleague. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
- Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code needs to request it from the source control tool. In a database, the change already exists and impacts others even if it was never checked in to the repository.
- During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database: if you alter a procedure from your local PC and at the same time I modify the same procedure with code from my local PC, we override each other's changes.
- The build process for code gets the label / latest version of the code into an empty directory and then performs a build - a compile. The output is a set of binaries that we copy over the existing ones; we don't care what was there before. With a database we cannot simply recreate it, because we need to maintain the data! Instead, deployment executes SQL scripts that were generated in the build process.
- When executing those SQL scripts (with the DDL, DCL, and DML (for static content) commands), you assume the current structure of the environment matches the structure it had when you created the scripts. If not, your scripts can fail, for example when trying to add a new column that already exists (a defensive guard for this is sketched just after this list).
- Treating SQL scripts as code and generating them manually will cause syntax errors, database-dependency errors, and scripts that are not reusable, all of which complicates developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment different from the one you thought they would run on.
- Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
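To make the failing-scripts point concrete, here is a minimal T-SQL sketch of such a defensive guard (the table and column names are hypothetical):

-- Only add the column if it does not already exist
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer')
      AND name = N'PhoneNumber'
)
BEGIN
    ALTER TABLE dbo.Customer ADD PhoneNumber varchar(20) NULL;
END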
What I found that works is the following:
- Use an enforced version control system, one that enforces check-out/check-in operations on the database objects. This makes sure the version control repository matches the code that was checked in, because it reads the metadata of the object during the check-in operation rather than in a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
- Use an impact analysis that utilizes baselines as part of the comparison (between the object's structure in the source control repository and in the database) to identify conflicts, and to determine whether a change is a real change originating from development or a change that originated from a different path, such as a different branch or an emergency fix, and should therefore be skipped.
An article I wrote on this was published here; you are welcome to read it.
If you're looking for a product that will track changes into TFS from your SQL Server automatically, I'd invite you to take a look at our product, Sql Historian. It's different from most other SQL version control systems (including the ones you've listed) in that it does not require developers to perform a check-in ritual to synchronize version control with what's already committed to the db.
However, the features Sql Historian shares with the other two systems you mention are: working with TFS, the ability to view older versions of your db objects, and allowing multiple users on the db at the same time.

How to deploy SQL script to clients

Our company is in the process of adopting TFS for source repository and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any changes to this database. As the company is expanding, we are going to have multiple dev teams. So I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure, which contains the SQL script to create the SP. The Config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and recreate it (see the sketch below).
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement to update the version table on the database.
FINAL script is made available for customers to download and execute on the database. So the script mainly contains any add/edit to SPs, UDFs, and static data. It does not touch any existing data (data entered by user) in most cases.
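For illustration, one of these per-procedure scripts, plus the version stamp the C# app appends to the FINAL script, might look roughly like this (the object names, table names, and version values are hypothetical, not the poster's actual schema):

-- Repeatable drop-and-create script for a single stored procedure
IF OBJECT_ID(N'dbo.usp_GetActiveCustomers', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetActiveCustomers;
GO
CREATE PROCEDURE dbo.usp_GetActiveCustomers
AS
BEGIN
    SELECT CustomerId, Name
    FROM dbo.Customer
    WHERE IsActive = 1;
END
GO
-- Appended by the build utility when the FINAL script is generated
UPDATE dbo.VersionInfo
SET Version = '1.0.2', AppliedOn = GETDATE();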
As a newbie to TFS and SSDT I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far, what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without the loss of any user-entered data? My guess is that, when publishing, the tables get created. I can take care of the static data, but I am not sure how to handle data entered by users.
May be there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (through building the project) and provide those to your clients along with having them install the SSDT tools that include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
- You can choose to not drop any objects that are not in your project.
- You can choose to ignore users/permissions.
- You can set an option to not allow changes if there would be data loss.
- You can wrap everything in a transaction so a failed update rolls back.
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.

Visual Studio Database Project - Generating test data on top of reference data

I am adding continuous integration testing to an existing Visual Studio 2010 database project. Right now we have a build that deploys an 'empty' database [dbo].[MyDb] with just the reference data needed such as locales and countries. Right now this is performed using sql files containing insert statements that are run in the post deployment sql build task.
I now want to add another test deployment build that will deploy to another database on the same staging server as [dbo].[MyDb].[Test] with the same reference data but with generated test data that will have foreign keys to the reference data. Database integration tests are then run against that. Because the state needs to be restored for each test, this needs to be as fast as possible.
From what I've tried so far, to generate the test data using Visual Studio's data generation plan it seems I need to get the reference data to a form that can be read by the Databound generator so that it can generate the test data in a way that maintains referential integrity.
The possible options I can think of are:
Somehow get the data generation plan to read the reference sql files?
Change the reference sql files to csv files and change the original build to do bulk inserts (see the sketch after this list)
Combine the builds so that the MyDb database is always deployed first and set it as the sequential databound generator source for the test db.
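For option 2, the bulk insert itself could be as simple as the following T-SQL sketch (the file path, table, and CSV layout are hypothetical):

-- Load reference data from a CSV file during the post-deployment step
BULK INSERT dbo.Country
FROM 'C:\ReferenceData\Country.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2  -- skip the CSV header row
);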
Has anyone got a better approach or can point to a good guide?
I'm not an expert on build scripts so would like to take advantage of tools to do as much as possible. I want to keep things as a Visual Studio Database project but I also have a license for RedGate's SQL Tools if that would make the testing easier.
It appears that handling of reference data still isn't supported very well by database projects. This is confirmed by the comments on this post by Barclay Hill.
At the moment I've gone with the option of having a reference database and using that with a sequential databound generator. Since it doesn't change very often I just deploy it manually and have stopped short of having a whole separate project just for that as I've seen elsewhere.
Hopefully reference data handling will be added to SQL Server Data Tools at some point.

Visual Studio for SSRS 2008 - How to organize reports into subfolders in Solution Explorer?

Right now I have a project called reports with several reports. In solution explorer it looks like this:
Shared Data Sources
-- DEV
Reports
-- Report1
-- Report2
-- Report3
I want to make it look like this and have the same structure carry over to the report manager website when I click deploy.
Shared Data Sources
-- DEV
Folder A
-- Report1
Folder B
-- Report2
-- Report3
Anyone know how to do this?
I'm using SSRS 2005 - I think this part of it works in the same way as 2008.
As far as I can tell, you can't have folders within projects, but you can have multiple projects within a solution.
To create a new folder, right-click on the solution in the Solution Explorer and select Add>New Project...
Type in your new Project Name (eg. MyProject), and select Report Server Project from the list of Visual Studio installed templates. Click on OK, and your new Project should appear at the end of the list of projects in the Solution Explorer.
(There are other ways of setting up a new Reports project, but this seems to be the quickest.)
If you now right-click on your new Report Project and select Properties, you can see the TargetReportFolder, which will default to your new Project Name (eg. MyProject). When you deploy reports from SSRS, they are deployed to this location. (You can change the location, if you wish - I find it easier to keep track of what's going where by using the Project name.)
You will need to copy any data sources to be used in each project into the Shared Data Sources folder of all projects that use that data source. By default, OverwriteDataSources is set to false, so when you deploy a new report, it will use the data source already deployed to the Report Manager environment.
So to get the Report Manager structure that you want to see:
Create Projects called Folder A and Folder B
Move/copy Report1 into the Reports folder in Project Folder A
Move/copy Report2 and Report3 into the Reports folder in Project Folder B
Move/copy data source DEV into the Shared Data Sources folders in Projects Folder A and Folder B
Deploy your reports
Don't forget to check your changes into source control.
The way I do it is similar to other posters': I have one solution with multiple projects (each project is named after, and put into, the folder name that I want it deployed under).
Then I rigged up a script in RS which:
- Creates a single data source used by all reports, in my case
- Loops through all directories in the solution folder
-- Creates the same folder name on the RS server
-- Deploys all files in this directory to that folder on the RS server
-- Uses rs.SetItemDataSources on each report to redirect it to my main data source
And that's basically it.
Caveats are you sometimes get files uploaded you didn't want to (like deleted reports with the .RDLs still hanging around). But you can script all around that, or just blow it away in RS and re-upload again.
Doing this, I have one script but can deploy a structure under numerous different parent folders, each with different data sources, and have all the reports in a folder drawing data from a different database. This lets me run a single RS instance but have development, testing, training, etc. areas.
We manage this with Linked Reports in SSRS. We deploy reports to a Report Distribution folder, hidden from users, then create Linked Reports in the Reporting Services web UI (an option in the Manage section for each report). You can create the Linked Report in any folder, so you can build the folder structure you want and put the linked reports in the appropriate place.
You can deploy everything to the single distrib folder from VS, and the linked reports are updated. This solves the sub-report and DataSource issues, since all reports 'run from' the distrib folder. There's obviously a lot of setup up front, and creating a new environment is a hassle - it happens so infrequently that we haven't tried to automate it.
I have a BI project going in SSRS2008 with roughly 80 reports - and this is my experience with deploying reports into folders. This is my first foray into developing in Reporting Services, so any gurus please smack me if I'm out to lunch.
Initially, I used folders in source control to separate smaller reports by department to help keep me organized. That worked fine while I was initially developing my reports; however, the first time I deployed the project to the report server the structure was completely flattened, so I gave up on using a folder structure to organize.
As far as I'm aware the only way you can create a folder structure in SSRS is to use the Report Manager UI and create folders on the Report Server. I'm assuming from there you would modify the path in the report properties in Visual Studio. Either that or you have to define the path when you first set up the report. I haven't tested this so YMMV.
So in conclusion: It is not possible to create the folders in BIDS and deploy your reports into folders utilizing the IDE. I hope this is addressed in 2008R2 because it's kind of a pain having all those reports thrown together in the Solution Explorer.
You can have the files in separate folders on disk/in source control; however, they list flat and sorted alphabetically in Visual Studio.
I cannot see a user interface to manage this, but if you edit the project file (the single project file of a VS solution) you can specify the FullPath XML tag of each report file.
I created a report server type project and had no option to add another project. Instead I edited the SLN file and added my other projects. The first part of the SLN file then looked like this in the end, giving access to my 2 projects in subfolders below the SLN file:
Microsoft Visual Studio Solution File, Format Version 10.00
Visual Studio 2008
Project("{F14B399A-7131-4C87-9E4B-1186C45EF12D}") = "RestOfWorld", "RestOfWorld\RestOfWorld.rptproj", "{D24D5EEA-88A4-4375-802B-7CA877202787}"
EndProject
Project("{F14B399A-7131-4C87-9E4B-1186C45EF12D}") = "NorthAmerica", "NorthAmerica\NorthAmerica.rptproj", "{C64A3BDC-F526-4037-AD48-31799BECC3AD}"
EndProject
Global
#skiwii is correct: with VS 2013 Community edition and SQL Server Data Tools - Business Intelligence, you do not see the Solution node at the top of the tree in VS Solution Explorer, and you do have to hack the .SLN file to expose it.
Start with the desired master .sln and one .rptproj file in your source tree; call that 'parent'.
Create a brand new, empty Report Services .rptproj in a new subfolder (under the folder containing the .sln), e.g. a project called 'child1'. VS will also give you a child1.sln file in that folder.
Close all VS windows.
With your favorite text editor (teco!), open child1.sln.
Copy the 2 lines at the top, Project... and EndProject.
Open parent1.sln.
Paste those lines under the existing Project / EndProject.
Now open parent1.sln in VS. You will magically see the Solution node at the top of the Solution Explorer window.
If you have been very lucky, you will also see a second project called child1. But if not, no problem. You can just right click on the solution and Add a new project of type Report Services.

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships etc...
The thing is I am doing a lot of different changes on a TEST db and the idea will be to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time you make a change, but this means I have to copy and paste it into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them on the live server. See this article: Version Control and Your Database.
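For example, restoring a copy of the live database to test scripts against might look like this (the backup path and logical file names are hypothetical):

-- Restore a backup of the live database under a test name
RESTORE DATABASE TestCopy
FROM DISK = N'C:\Backups\Live.bak'
WITH MOVE N'Live_Data' TO N'C:\Data\TestCopy.mdf',
     MOVE N'Live_Log' TO N'C:\Data\TestCopy_log.ldf',
     REPLACE;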
BTW, check out the SSMS toolpack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which came with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
Our process is that when a developer finishes a change, they script it out and check it into Subversion. In Subversion we have folders for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don't insert the new data if it is already there; see the sketch below). This is important to do anyway, so you keep the history of changes for a given object in the database.
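A repeatable data script in that style might look like the following sketch (the table and values are hypothetical):

-- Safe to run more than once: only inserts the row if it is missing
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIPPED')
    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    VALUES ('SHIPPED', 'Order has been shipped');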
In the past, we would just enter each of the files that we wanted scripted out into a text file (i.e. FileListV102.txt). When we were ready to make a release we would do “get latest” on all of the files (from VSS back then.) We then had a simple utility that would read the “file list” file and open each of those files in turn concatenating them into an output file. That is pretty easy to code.
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid-September) that takes all of the files and creates one big SQL script file out of them. It does this in the order that you would expect based on the folder names, so files found in the "Tables" folder are done before those in the "Data" folder, etc.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new tool called Kal Admin.
It has a change management feature and makes it very easy to distribute selected changes to other databases. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.