SSAS Tabular partitions created in SSMS are not reflected in the SSDT solution - sql-server-2016

I'm using SSDT to create my Tabular model, and I'm building a table that I partition into two-week slices (24 partitions per year).
I usually prepare two years of data this way (48 partitions in total).
After deploying the model to Analysis Services, I can access it from SSMS by connecting to my Analysis Services instance.
My question is this:
I've managed to create an automated script that generates the XMLA command for creating the partitions. When I execute it in SSMS I can see the partitions being created. However, when I return to SSDT and open the solution, these partitions are not reflected there. Is there a way to "force" SSDT to read the metadata from the Analysis Services instance when it opens the solution again?
Additionally, if I continue developing the model in SSDT, the next time I deploy it all the changes I made via SSMS will be overwritten. Is there a way to avoid that?
Creating partitions manually in SSDT can be very painful...
I've managed to script the process, just not from within SSDT.
Any suggestions?

As userfl89 already pointed out, any partitions that you create in SSMS need to be "backported" into your SSDT project, for example by using the "Import From Server (Tabular)" option when creating a new project. Otherwise, you risk losing the partitions (and the data contained in them) when deploying from SSDT.
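As an aside, on a 1200-compatibility-level model (SQL Server 2016), the kind of partition script you describe is a TMSL command run from an XMLA query window in SSMS. A minimal sketch of creating one two-week partition, with hypothetical database, table, and data source names:

    {
      "create": {
        "parentObject": {
          "database": "MyTabularModel",
          "table": "FactSales"
        },
        "partition": {
          "name": "FactSales 2017-01-01 to 2017-01-14",
          "source": {
            "query": "SELECT * FROM dbo.FactSales WHERE SaleDate >= '2017-01-01' AND SaleDate < '2017-01-15'",
            "dataSource": "MySqlDataSource"
          }
        }
      }
    }

A script that loops over the two-week date ranges and emits one such command per partition is exactly the sort of automation you mention.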
Alternatively, you can use BISM Normalizer - a plugin for Visual Studio - to merge changes (such as partitions) back and forth between SSDT and the deployed database.
There's also the Analysis Services Deployment Wizard, which takes the contents of your project's \bin\ folder and lets you deploy to a database, specifying that you don't want to overwrite existing partitions.
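The wizard can also be run unattended from the command line. A sketch, assuming a default SQL Server 2016 tools install path (adjust to your environment); /s runs it silently, reading the answers saved in the project's .deploymentoptions file, which is where the choice to retain existing partitions lives:

    REM Hypothetical paths; the .asdatabase file comes from the project's \bin\ folder.
    "C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe" ^
        "MyTabularProject\bin\Model.asdatabase" /s:deploy.log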
Lastly, if you haven't already, I would recommend taking a look at Tabular Editor. It's an alternative to SSDT for developing the model, so there will be some learning involved, of course, but the good news is that it supports partial deployments, which let you avoid touching the partitions on the already-deployed database.

The database that you're accessing in SSDT is your workspace database, which is essentially a local copy of the tabular model. The partitions you added in SSMS were in fact created; the workspace database is just out of sync. You can rebuild the workspace database from the current version of the model: delete or move the files used by your local SSAS project, create a new Analysis Services project in SSDT using the "Import From Server (Tabular)" option, and select the current version of the tabular model. This creates a new workspace database based on the current version of the model. When you delete or move the files from your local SSAS project, make sure they belong to your local project and not to the actual model. If you need to verify the location of the files used by the model, the DataDir property of the SSAS instance in SSMS shows this file path.

Related

Partitions created outside BIDS

In my cube, partition creation and processing are automated. Now, if I make any change to the cube structure through BIDS, deploying the changes deletes all partitions that are not defined in the project. Is there any way to avoid this?
Could you create a new solution in BIDS by importing from your server? That way any partition definitions will be present, along with any new changes you make.
BIDS normally works with local files. Each deployment overwrites the structure on the server with the version you have in your local files.
If you make structural changes on the server independently of BIDS that you want to keep, while also making changes in BIDS, you can pull the current server structure back into local files by selecting File/New/Project/Business Intelligence Projects/Import Analysis Services Database. Make sure you set the project settings as required at the bottom of the dialog before hitting the OK button.
Another possibility is to work with BIDS in online mode: in this mode, BIDS does not work on local files, but directly on the structure as it exists on the server. To use this mode, select File/Open/Analysis Services Database, and select the server and database you want to open. Some menu entries in BIDS change, and each time you hit the "Save" toolbar icon or its menu counterpart, the changes are written directly to the server structure. Note, however, that you will not have a local copy of the database structure in this case, which means that, e.g., version-controlling your Analysis Services database structure is impossible.

How to deploy SQL script to clients

Our company is in the process of adopting TFS for source repository and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012, and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling changes to this database. As the company is expanding, we are going to have multiple dev teams. So I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database as follows:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure containing the SQL script that creates it. The config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and create it.
I have a C# app that concatenates all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement to update the version table in the database.
The FINAL script is made available for customers to download and execute against the database. So the script mainly contains any additions/edits to SPs, UDFs, and static data. In most cases it does not touch any existing data (data entered by users).
As a newbie to TFS and SSDT I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far, what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
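For context, each object script in an SSDT project is a declarative CREATE statement; SSDT computes the upgrade script by diffing this model against the target database at publish time, unlike the drop-and-create FINAL scripts described above. A hypothetical example of one such generated object file:

    -- Customer.sql: a declarative object definition (hypothetical table).
    -- SSDT diffs this desired state against the target database and
    -- generates the ALTER/CREATE statements itself at publish time.
    CREATE TABLE [dbo].[Customer]
    (
        [CustomerId] INT           NOT NULL PRIMARY KEY,
        [Name]       NVARCHAR(100) NOT NULL
    );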
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish changes to a copy of the database without losing any user-entered data? My guess is that the tables get created when publishing. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project and maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (by building the project) and provide those to your clients, along with having them install the SSDT tools that include SqlPackage. They can run SqlPackage to apply the schema changes to their database automatically. This will bring their database in line with your schema, no matter how far off it might be.
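A minimal sketch of the SqlPackage call a client would run (file, server, and database names are placeholders):

    REM Applies the schema in the dacpac to the target database.
    SqlPackage.exe /Action:Publish ^
        /SourceFile:"MyDb.dacpac" ^
        /Profile:"MyDb.publish.xml" ^
        /TargetServerName:"CLIENTSERVER" ^
        /TargetDatabaseName:"MyDb"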
I'd also create a publish profile for them to use; this lets you control some of the settings (a sketch of such a profile follows the list below):
You can choose to not drop any objects not in your project
You can choose to ignore users/permissions
You can set an option to not allow changes if there would be data loss.
You can wrap everything in a transaction so a failed update rolls back
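A sketch of a publish profile covering those settings (the property names are the real SqlPackage/SSDT ones; the values are just an example):

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <TargetDatabaseName>MyDb</TargetDatabaseName>
        <!-- Don't drop objects that exist only in the client's database. -->
        <DropObjectsNotInSource>False</DropObjectsNotInSource>
        <!-- Fail the deployment rather than lose data. -->
        <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
        <!-- Leave users, logins, and permissions alone. -->
        <IgnorePermissions>True</IgnorePermissions>
        <ExcludeObjectTypes>Users;Logins;RoleMembership</ExcludeObjectTypes>
        <!-- Wrap the generated script in a transaction. -->
        <IncludeTransactionalScripts>True</IncludeTransactionalScripts>
      </PropertyGroup>
    </Project>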
If you give them a batch file to run, you can specify an output file or a diff report, or have them generate their own script to do the update.
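Sketches of both variants (again, names are placeholders): /Action:Script writes the upgrade script to a file instead of applying it, and /Action:DeployReport writes an XML summary of what would change:

    SqlPackage.exe /Action:Script /SourceFile:"MyDb.dacpac" ^
        /TargetServerName:"CLIENTSERVER" /TargetDatabaseName:"MyDb" ^
        /OutputPath:"MyDb_Upgrade.sql"

    SqlPackage.exe /Action:DeployReport /SourceFile:"MyDb.dacpac" ^
        /TargetServerName:"CLIENTSERVER" /TargetDatabaseName:"MyDb" ^
        /OutputPath:"MyDb_DeployReport.xml"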
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.

Visual Studio Database Project - Generating test data on top of reference data

I am adding continuous integration testing to an existing Visual Studio 2010 database project. Right now we have a build that deploys an 'empty' database, [dbo].[MyDb], with just the reference data needed, such as locales and countries. Currently this is performed using SQL files containing insert statements that are run in the post-deployment SQL build task.
I now want to add another test deployment build that will deploy to another database, [dbo].[MyDb].[Test], on the same staging server, with the same reference data but also with generated test data that has foreign keys to the reference data. Database integration tests are then run against that. Because the state needs to be restored for each test, this needs to be as fast as possible.
From what I've tried so far, to generate the test data using Visual Studio's data generation plan, it seems I need to get the reference data into a form that can be read by the data bound generator so that it can generate the test data in a way that maintains referential integrity.
The possible options I can think of are:
Somehow get the data generation plan to read the reference SQL files?
Change the reference SQL files to CSV files and change the original build to do bulk inserts.
Combine the builds so that the MyDb database is always deployed first, and set it as the sequential data bound generator source for the test DB.
Has anyone got a better approach, or can anyone point to a good guide?
I'm not an expert on build scripts, so I would like to take advantage of tools to do as much as possible. I want to keep things as a Visual Studio database project, but I also have a license for RedGate's SQL Tools if that would make the testing easier.
It appears that handling of reference data still isn't supported very well by database projects. This is confirmed by the comments on this post by Barclay Hill.
At the moment I've gone with the option of having a reference database and using it with a sequential data bound generator. Since it doesn't change very often, I just deploy it manually, and I have stopped short of having a whole separate project just for it, as I've seen done elsewhere.
Hopefully reference data handling will be added to SQL Server Data Tools at some point.

Managing a subset of the database in a SQL Server 2008 DB Project

I'm new to using SQL Server 2008 DB projects in VS 2010. I found a good intro to setting them up. It's nice how they create tables, stored procedures, etc. as objects. But is it also a limitation?
I want to use this project to manage 1 stored procedure (for learning). I do not want to import the entire database because 90% of the database is stuff we do not manage.
I created a new project without doing the import process. I then added a new stored procedure. Now I am having difficulty getting the thing to build. I'm getting various errors saying that I have unresolved references to objects.
How can I add a new stored procedure, build it, and deploy it to the database? Is it possible with this kind of SQL project, or do I need to drop back to the old, simple type of SQL project that VS 2008 and below used?
Update
According to another post, support for the Database Project type is gone. Support for my situation appears to have been dropped.
UPDATE 2 3/21/2012
I installed MSSCCI, which allows me to use SSMS directly with TFS 2010. I no longer needed the SQL 2008 database project, and I had found its setup process unmanageable for a large database, especially when you only manage a small % of the DB.
You can partition a database project by using partial projects. This lets the database project know the entire schema of the database while you maintain only part of it. You can work with just the subset of the database that's under active development (or the subset that is your responsibility), yet the project knows the entire schema. This permits it to create change scripts at deployment time by comparing the schema in the project with the schema in the target database.
You must import all schema objects referenced by your new stored procedure. This can become a large task, because every referenced object needs all of its references too.
Linked-server objects cause even more trouble.

How can I share a Data Source between multiple projects in Microsoft SQL Server 2005 Reporting Services and keep Visual Studio "Preview"?

I have a solution that contains multiple reporting projects (one per target deployment folder - I think this is the only way to achieve this effect, at least until I abandon Visual Studio for report deployment).
I want to specify my data source information "once and only once" for all these reports.
So far, I have created a separate reporting project that contains my shared data source. If I deploy things to a reporting server in the right order and offer sufficient prayers to appropriate gods, the reports seem to link up to the shared data source there and run (at least via the Report Manager in IE).
When I am developing a report, though, I can no longer "Preview" to try it out locally - I now must deploy it to a report server to try running it. This is a hassle.
Is my only recourse to add a whole bunch of copies of a data source (pointing at my development database), one in each project, set those not to deploy off my machine, and (probably) exclude them from source control?
A technique (dirty trick?) I am playing with now is to copy my data source (.rds) into each project, close Visual Studio, and then, in the underlying files/folders:
Delete the copied .rds from my report projects (leaving only the one copy in my Data Sources project)
In each report project's project file (Foo.rptproj), change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Sources\My Shared Data Source.rds (as sketched below)
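For illustration, the edited element in Foo.rptproj would end up looking something like this (a sketch based on the pre-2016 .rptproj format; the names are the ones used above):

    <DataSources>
      <ProjectItem>
        <Name>My Shared Data Source.rds</Name>
        <!-- Points outside the project folder, at the single shared copy. -->
        <FullPath>..\Data Sources\My Shared Data Source.rds</FullPath>
      </ProjectItem>
    </DataSources>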
This way all reporting projects reference the same underlying file on the filesystem, so they share a single data source definition, but each project also kind of has a "local" shared data source, so Visual Studio is kept happy.
Regarding source control: there is still only one copy of the .rds checked in, so we're not polluting the code base with lots of icky duplicates; the changes to the .rptproj files can be checked in, so we're not forcing developers into unnatural source-control gymnastics (selective partial commits etc.) to maintain a sane master copy.
Each reporting project will try to deploy this data source, though I've forbidden the overwriting of existing data sources on the server, so it's not too big a deal... and I suppose if I intended to overwrite the server's data source definition, it wouldn't really matter whether I overwrote it once or ten times with the same .rds.
Disclaimer: this is still an experiment. I don't have experience using this technique in practice yet, so I can't go so far as to actually recommend it.
Woody,
What we have tended to do is:
On the server, have a folder called "DataSources", which is hidden from the users. All of the data sources go in there.
For each reporting project in VS there will be a folder, also called "DataSources", but this time it will only contain the data source for this report.
As long as the folder structure is the same (i.e. the report and its data source sit at the same corresponding folder level on the server and in VS), this seems to work for us.