Automating SSAS deployment using AMO - sql-server-2012

I am trying to automate the build and deployment of a number of tabular cubes on SQL Server 2012.
At this point I can generate the deployment artifacts (.asdatabase, .deploymentoptions and .deploymenttargets files), I can generate an XMLA script using the Deployment Wizard from the command prompt, and I can deploy it using an AMO PowerShell script. However, Microsoft's recommendation is to use AMO for everything.
If I were to eliminate the Deployment Wizard from the equation, is there an AMO class that ingests the .asdatabase file and creates the database with all the objects defined within the "Database" tag? Or do I need to parse the XML, extract each element, and find the equivalent class/method that will produce that object in the database? That seems orders of magnitude more complex than using the wizard.
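For context, the AMO PowerShell deployment step mentioned above can be as small as the sketch below (a minimal illustration; the instance name and file path are placeholders):

    # Minimal sketch: push a Deployment Wizard-generated XMLA file to the
    # server with AMO. Instance name and file path are placeholders.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Server
    $server.Connect("localhost\TABULAR")

    $xmla    = [System.IO.File]::ReadAllText("C:\deploy\Model.xmla")
    $results = $server.Execute($xmla)    # returns an XmlaResultCollection

    # Surface any errors/warnings returned by the server
    foreach ($result in $results) {
        foreach ($message in $result.Messages) { Write-Host $message.Description }
    }
    $server.Disconnect()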

I think sticking with the Deployment Wizard makes sense. It can be automated with command-line parameters. I also suspect you can edit the Model.deploymentoptions file in the same folder as the .asdatabase file, but I can't recall whether you do that in the Deployment Wizard or manually.
Your other option is automating the deployment with a custom MSBuild task. Here is a walkthrough with screenshots. Note that once you upgrade to SSAS 2016 and 1200-compatibility models this option will break, because the model is stored as JSON rather than XML.
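If you do automate the wizard, its documented command-line switches cover both silent deployment and script generation. A sketch (the install path varies by SQL Server version and edition, so treat it as a placeholder):

    # Silent deployment using the answer files (.deploymentoptions /
    # .deploymenttargets) sitting next to the .asdatabase.
    $wizard = "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"

    # Deploy silently (/s), writing progress to a log file:
    & $wizard "C:\build\Model.asdatabase" /s:"C:\build\deploy.log"

    # Or generate the XMLA script only (/o) without connecting to the server (/d):
    & $wizard "C:\build\Model.asdatabase" /o:"C:\build\Model.xmla" /d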

Related

Viewing SQL for SQL Server Integration Services (SSIS) Transformations

I am new to databases and SSIS. Can anyone please let me know whether there is a way to view the SQL code generated by SSIS transformations?
I know that in BI reporting tools such as Business Objects, when we pull fields or columns into the reporting panel, we can view the corresponding SQL.
Similarly, is there any option in SSIS to view the SQL for SSIS transformations?
Thanks in Advance
Raj
SSIS, unlike other tools, does not generate SQL per se. You can include your own SQL inside tasks and components, but I guess you are interested not in the SQL you write yourself but in what SSIS is doing behind the scenes.
An SSIS package is essentially an XML-structured file with a collection of properties marking up the flow and process of its components. You can access this XML by right-clicking the package and selecting View Code.
The example above is an empty package, so it's a very small XML file. In a complex package this file can be very large, as you will see all the tasks, components, parameters, variables, etc., as well as your own SQL code and C#/VB scripts, if any.
When the project is built, it generates a .ispac file, which is nothing other than a ZIP file containing the package(s) in the project plus a manifest, a content-types file and any other files required for the packages to be deployed and executed.
You can see what is inside a .ispac by renaming it to .zip and opening it. In this example I've built the empty package above, renamed the .ispac to .zip, and opened it.
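If you want to script that inspection rather than rename files by hand, a small sketch using .NET's ZipFile class (the paths are placeholders):

    # List and extract the contents of an .ispac without renaming it by hand.
    Add-Type -AssemblyName System.IO.Compression.FileSystem    # .NET 4.5+

    $ispac = "C:\build\MyProject.ispac"
    $zip   = [System.IO.Compression.ZipFile]::OpenRead($ispac)
    $zip.Entries | ForEach-Object { $_.FullName }              # packages, manifest, [Content_Types].xml ...
    $zip.Dispose()

    # Or unpack everything for a closer look:
    [System.IO.Compression.ZipFile]::ExtractToDirectory($ispac, "C:\build\ispac_contents")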
In summary, unlike tools that are purely SQL generators, SSIS does not give you much to see in terms of generated code; all you can see is the package's structure, as shown above.
Also, as Marko Ivkovic mentioned in the comments, it might be possible to get more information about what is happening at run time by using tools like SQL Server Profiler.

What are the different approaches for deploying DB changes using TFS 2015?

Currently, we are manually running DB scripts (SQL Server 2012) outside of our CI/CD deployment. What are some ways (including toolsets) we can automate deployment of DB changes using TFS 2015 Update 3?
There are really two approaches here, both of which work with TFS. TFS really just facilitates the execution of whatever scripting you use to update your database, including your custom, handcrafted scripts.
There is the state-based approach, which uses comparison technology to look at your VCS/dev/test/staging database and compare it to production. SQL Source Control and the DLM Automation Suite from Redgate Software do this, as do other comparison tools. You would use a command-line or programmatic interface to set your source and target, capture the output, and then use this as an artifact in your release process. I would include a review of that artifact as a step in your flow.
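One concrete shape this can take, sketched with Redgate's sqlcompare.exe (the switch names here are from memory and should be verified against the command-line documentation for your version; server and database names are placeholders):

    # Compare a source (e.g. staging) database to production and save the
    # synchronization script as a release artifact for review.
    & "C:\Program Files (x86)\Red Gate\SQL Compare 11\sqlcompare.exe" `
        /server1:STAGINGSQL /database1:MyApp `
        /server2:PRODSQL    /database2:MyApp `
        /scriptfile:"C:\artifacts\upgrade.sql"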
Note that there are some changes state-based comparisons don't handle well: renames, splits, merges, data movement and a few others. Some comparison tools have ways around this, some do not, so be aware this may be an issue. If you have a more mature database, perhaps not, but you should consider it. SQL Source Control allows custom migration scripts, which can handle these cases.
The other approach is a script-runner or migration strategy, where each change you make to a dev database is captured as an ordered script and a framework executes those scripts in order, if they are needed. Some people prefer this because you can see exactly what code will be executed at dev and deployment time. ReadyRoll from Redgate Software, Liquibase, Rails Migrations, DbUp and FlywayDB all use this strategy.
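A minimal script-runner of this kind can be sketched in a few lines of PowerShell. This is a toy illustration of the pattern, not a substitute for the frameworks above; it assumes a folder of numbered scripts (001_xxx.sql, 002_xxx.sql, ...) and a journal table recording what has already run, and the server/database names are placeholders:

    # Toy migration runner: apply numbered scripts in order, skipping ones
    # already recorded in a journal table. Requires the SQLPS/SqlServer
    # module for Invoke-Sqlcmd. Not injection-safe; illustration only.
    $server = "PRODSQL"; $db = "MyApp"

    Invoke-Sqlcmd -ServerInstance $server -Database $db -Query `
        "IF OBJECT_ID('dbo.MigrationJournal') IS NULL
             CREATE TABLE dbo.MigrationJournal (ScriptName sysname PRIMARY KEY, AppliedAt datetime2 DEFAULT SYSUTCDATETIME());"

    Get-ChildItem "C:\repo\Migrations\*.sql" | Sort-Object Name | ForEach-Object {
        $name = $_.Name
        $done = Invoke-Sqlcmd -ServerInstance $server -Database $db -Query `
            "SELECT COUNT(*) AS N FROM dbo.MigrationJournal WHERE ScriptName = '$name';"
        if ($done.N -eq 0) {
            Invoke-Sqlcmd -ServerInstance $server -Database $db -InputFile $_.FullName
            Invoke-Sqlcmd -ServerInstance $server -Database $db -Query `
                "INSERT dbo.MigrationJournal (ScriptName) VALUES ('$name');"
        }
    }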
Neither of these is better or worse. Both work, both have pros and cons, but really the choice comes down to your comfort level and preference.
Disclosure: I work for Redgate Software.
If deploying DB changes just means using SQL Server Database Projects (.sqlproj files) with Team Foundation Build in Team Foundation Server, there are several ways to achieve this:
Use the MSBuild task with some arguments to publish your SQL project during the build.
Add a deploy target in your .sqlproj file and run the target after the build completes.
Or add a "Batch Script" step in your build definition that runs SqlPackage.exe to publish the .dacpac file (see the sketch below).
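A sketch of that SqlPackage.exe option (the switches are the documented Publish-action parameters; the install path and all names are placeholders):

    # Publish the .dacpac produced by the .sqlproj build to a target database.
    $sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"

    & $sqlPackage /Action:Publish `
        /SourceFile:"C:\build\MyApp.Database.dacpac" `
        /TargetServerName:"PRODSQL" `
        /TargetDatabaseName:"MyApp"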
For more details, please refer to this blog: Deploying SSDT During Local and Server Build.
As for TFS 2015, you can also try the SQL Server Database Deployment task:
Use this task to deploy a SQL Server database to an existing SQL Server instance. The task uses a DACPAC and SqlPackage.exe, which provides fine-grained control over database creation and upgrades.

What is the best way to manage "non-SQL Server" SQL objects within Visual Studio 2010?

Visual Studio has a Database Project type for SQL Server. This has a number of advantages: it hosts configuration settings and database objects in one place. The .sql files are part of the regular .NET solutions - visible in the Solution Explorer and editable in Visual Studio. And there is a mechanism for generating a deployment script. With each individual database object in its own file, tracking changes under source control is greatly simplified.
Has anyone had any success with using Database Projects with "non-SQL Server" databases? We use Sybase - which uses T-SQL and is very similar to SQL Server so I'm hopeful.
Or is there an alternative approach? I guess I could use a standard project (.csproj) and call a custom command-line application as part of the post-build to convert the .sql files into a deployment script.
Any ideas would be welcome.
Thanks
OK, I'll answer my own question.
I added all of our SQL objects to their own .sql files within a Visual Studio .dbproj project. However, minor syntactic incompatibilities between the Sybase version of RAISERROR and the Microsoft version caused the validation code built into Visual Studio to complain. The problem with the database project was that this produced an actual compilation error - which basically made it a show-stopper.
So I scrapped that idea and added the .sql files to a standard .csproj project file. I then implemented some custom code that loads all of the .sql files and aggregates them into a deployment script when invoked. I added a call to this custom code to the post-build step of the .csproj file so that whenever the project is compiled it outputs a deployment script - which works like a dream with our build server.
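The aggregation step itself can be quite small. A simplified sketch of that kind of post-build step (the folder-per-object-type layout and the type ordering here are illustrative assumptions; the real code does considerably more, as described below):

    # Collect all .sql files and concatenate them into one deployment script,
    # ordered by object type, then by name, so the output is deterministic.
    $typeOrder = @{ "Tables" = 0; "Functions" = 1; "StoredProcedures" = 2; "Grants" = 3 }

    $files = Get-ChildItem "C:\repo\Database" -Recurse -Filter *.sql |
        Sort-Object @{ Expression = { $typeOrder[$_.Directory.Name] } }, Name

    $script = foreach ($f in $files) {
        Get-Content $f.FullName
        "GO"    # batch separator between objects
    }
    $script | Set-Content "C:\repo\Output\Deploy.sql"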
In order to get some of the benefits of the .dbproj, I looked into writing a full SQL parser, but was quickly discouraged by some of the posts on SO. Instead I did some rudimentary parsing with regex - which got me a few cool features without a lot of effort:
The code could detect dependencies between the various .sql files, and add them to the deployment script in the correct order to avoid sysdepends warnings.
Where there were no dependencies, objects were ordered based on the object type (stored procedure, function, grant statement, etc) and then by name so that the resulting script was always ordered the same - which is very important if you need to diff two versions of the script.
The deployment script can figure out some of the required permissions, so I don't need to keep track of all of the GRANT statements.
Stored procedures that are in the database but not in the script can be dropped automatically - so I don't need to keep track of what state each database is in - we just run the script and everything is in the correct state.
We have a few stored procedures that our automated tests call that shouldn't be deployed. The code can detect these and include them in a Debug build and exclude them in a Release build.
The custom code also generates a diff script that determines what changes the deployment script will make to a database and prints them out. This allows the person who is running the script to get an idea of what it will do. For example, the diff script might tell them that no changes will be made - so they don't need to run the deployment script at all - which is kind of handy if it saves them logging in at 3am to take a database offline and take backups etc.
So the end result is that all of my SQL objects are in separate files making them easy to work with in Visual Studio and manage under source control. For the first time since I started this job, I can look at the history in source control and tell what files have been changed (before this we had one enormous .sql file with absolutely everything in it).

Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in TFS 2008 and Visual Studio 2008?

Hope that everybody here is OK.
We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio.
I am trying to automate the build for a web application built using C# and VB.Net.
Our scenario is such that we have a central database to which our web application connects.
Whenever we supply our clients with new functionality or a bug fix, we supply them incremental builds.
The SQL script is checked into source control for every incremental build once the developers have made and tested their changes on our central DB server.
I want to generate a differential script that can be run at the client site as an incremental update script. Producing it is the problem: sometimes our developers forget the database change-sets, and the script in source control is missing an SP or two.
Also, sometimes we need to insert default data into some of the tables - data with strict, required values, not test values. For example, for a table that contains the services provided by the panel, we add a new service name, signature, credentials, service address and so on to the ServiceTable. Besides this, many other tables may hold test data that is not needed.
If we use DataCompare, it will generate a changeset covering both the required data (important for the client to enable certain services) and the test data that was added to the database as a result of our testing of the functionality or bug fix.
Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008.
With SQLSchemaCompareTask, the generated script contains schema prefixes like [dbo]., which are not desired because the script then fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the backend of the application).
Also, default data can't be generated by this process.
To overcome this, I have to come up with a solution that compares databases and generates a script automatically that does not have to be manually reviewed before being sent to the client.
Please suggest an effective methodology for such SQL script generation, and whether two different databases should be used, or something else. Is there any toolkit or API that can enable build automation for SQL Server databases?
Thank you all.
Regards
Steve
Try SQL Examiner Suite for this:
http://www.sqlaccessories.com/SQL_Examiner_Suite/
The tool compares both schema and data and produces synchronization (or differential) scripts. You can automate script creation with the supplied command-line tool.
Rather than collating many individual changeset scripts (and therefore occasionally missing objects), why not use schema compare and data compare to create a single script from your database project, using a database equivalent to your client's as the target? This should create a script tailored to their requirements.
In data compare you can exclude test data records that you don't want pushed to your client by unchecking them in the lower grid.
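If you want to drive that from the build rather than the IDE, the VSDBCMD command line from the VS database tooling of that era can generate the script without deploying it. A sketch - the switch spellings and install path should be verified against the VSDBCMD command-line reference for your version, and all names are placeholders:

    # Generate a deployment script from the project's .dbschema model against
    # a client-equivalent target, without actually deploying (/dd:-).
    & "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VSTSDB\Deploy\vsdbcmd.exe" `
        /a:Deploy /dd:- `
        /model:"C:\build\MyApp.dbschema" `
        /cs:"Data Source=CLIENTEQUIV;Integrated Security=True" `
        /p:TargetDatabase="MyApp" `
        /DeploymentScriptFile:"C:\build\incremental_upgrade.sql"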

Deploy only cube schema, without processing

Is there a way to deploy only the cube schema, without processing the cube? It seems that in Visual Studio, when you deploy a cube, the default is "Deploy and Process".
The problem is that processing takes so much time, and my main purpose is just writing some MDX script and seeing whether it works well against the cube structure. Processing the whole cube seems like overkill, so I ask.
You can also set this as a deployment option under the project properties. The value should be set to "Do not process".
Yes, you can programmatically deploy just the MDX script. There is also a download on Microsoft's web site called BIDS Helper that has a facility to do this. It's a plugin for Visual Studio that provides various tools, including a facility to deploy an MDX script to a cube.
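For the programmatic route, a minimal AMO sketch of what deploying just the MDX script amounts to (server, database and cube names are placeholders, and this is presumably similar to what BIDS Helper does under the covers):

    # Replace a cube's MDX script and update only that object - no processing.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Server
    $server.Connect("localhost")

    $cube   = $server.Databases["AdventureWorksDW"].Cubes["Adventure Works"]
    $script = $cube.MdxScripts["MdxScript"]    # the default script object

    $script.Commands[0].Text = [System.IO.File]::ReadAllText("C:\dev\calculations.mdx")
    $script.Update()                           # sends an Alter for just the MdxScript

    $server.Disconnect()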