Update Dimensions/Levels/Measures programmatically - SSAS

Summary: I'm involved in a project that requires us to update/upgrade an existing cube programmatically. Is this even possible (apart from using AMO)?
Details: We have a cube that deploys to all client environments via an installer. As we continue to develop, we make changes to the cube, like changes to calculated measures, adding a new level to a dimension, or editing an existing level/measure. We need to deploy these changes to client environments in the form of updates.
These environments are not directly accessible to us, nor do they have BIDS installed, meaning we can't use BIDS to make changes and deploy them to the production environment. Hence the requirement for a script (or scripts) to accomplish this.
Is there an approach that would enable us to release these updates to the cube programmatically (not via AMO)? For example, a reprocess of a cube can be triggered in the form of an XMLA statement.
We also need to take into account any customizations the client may have made (such as adding measures or levels to a given dimension) and preserve them.
Please let me know if I have clearly explained the issue at hand.
Thanks
Srikanth

Instead of AMO, you can also directly issue XMLA ALTER statements. In fact, AMO itself converts everything to low-level XMLA, which is then sent to the Analysis Services server. However, the official documentation of the XMLA ALTER statement at http://msdn.microsoft.com/en-us/library/ms186630.aspx is difficult to read. It is easier to capture the XMLA statements that BIDS (via AMO) issues when you click Deploy. You can do this via SQL Server Profiler as documented here: http://technet.microsoft.com/en-us/library/ms174946.aspx.
And, as soon as you have more than a few trivial changes, it may be much easier to re-deploy the complete Analysis Services database instead of capturing just the changes and trying to create ALTER statements.

Related

Updating SQL Schema in Multiple SQL Databases at One Time

Our issue is we have an online application with personally identifiable data. We have sold this application to multiple customers and the law in their States says that the data MUST be physically in their State. So this is why we have the identical database (not identical data) in different locations.
Right now we use RedGate SQL Compare, but as we continue to grow, doing this eight, nine, ten times for every update (be it a small stored procedure bug fix or a larger change creating a new table) is becoming more and more inefficient. Marketing is telling us five more states are on the way.
We've looked into a RedGate method, but it's more coding and troubleshooting than it's worth.
So...any ideas how to update the SCHEMA from one to many databases?
There is a feature in SQL Server Management Studio that works for this. In SSMS, press Ctrl+Alt+G to bring up 'Registered Servers'. Under Local Server Groups you can create groups, say one for testing and one for live. You then right-click on the group you created and choose "New Server Registration". On the General tab you give it a name, and on the Connection Properties tab you select just one database. Keep adding new server registrations for each database you want in the group. When done, just right-click on your group and choose New Query. Anything you put in there will run on ALL the databases in the group.
So, if all your databases are identical and you need to make an update, use Redgate to do a compare. Choose 'Create a Deployment Script' instead of 'Deploy Using SQL Compare' and copy the SQL. Right-click on the group, choose "New Query", paste, and execute.
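Whether it comes out of SQL Compare or is written by hand, the script you paste into that group query window is ordinary T-SQL. A minimal sketch of the kind of idempotent change that is safe to run against every member of the group (the table and column names here are hypothetical):

```sql
-- Hypothetical example: add a column to dbo.Customers on every
-- registered server in the group, but only if it is missing.
IF COL_LENGTH('dbo.Customers', 'PreferredContactMethod') IS NULL
BEGIN
    ALTER TABLE dbo.Customers
        ADD PreferredContactMethod varchar(20) NULL;
END;
```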
I'm assuming this is SQL Server since you specified "RedGate SQL Compare" and not "MySQL Compare". If it's not SQL Server, ignore this.
Without having to adopt a new toolset (or even pay RedGate for anything), and since the databases (not the data) are identical, you could set up a Central Management Server (Microsoft has documentation on this), register each individual SQL Server instance, build your deploy script (you can still use SQL Compare for this), and then use the CMS to push the schema changes simultaneously to all of the instances, or to defined groups, as you need them.
This assumes you're using Windows authentication for all the servers and that whoever does the deployments has the same access across all of them, but it's a pretty decent solution for multi-server administration of this type in general, and it's a solid feature that's been around for a while (since SQL Server 2008).
I work for Redgate, so I'd love to promote it even more; however, let's ignore it for the moment.
If you want to automate deployments to lots of servers at once, I'd suggest you look at tooling like Azure DevOps Pipelines or AWS Developer Tools, or even a third-party product like Octopus or Jenkins. The idea is simple: use any tool you like, right up to just your keyboard, to create the artifacts needed for deployment (your T-SQL scripts for SQL Server). Then the agents in one of these flow-control tools do the heavy lifting of ensuring that the script gets deployed to multiple locations. Because you can configure these agents with independent security, you don't have to have the same levels of security yourself that you'd need to control everything through SSMS or the Central Management Server. Further, this method allows for very easy parallel execution. The only way you can do that yourself is through some pretty extensive PowerShell (or Python) work.
As much as I'd like to promote Redgate as part of this solution, it's actually not necessary (it's just better). You can generate the necessary artifacts any way you want. The important point is being able to control exactly how they get deployed, dealing with tracking the successful and failed deployments, varying levels of necessary security, all this stuff. That's exactly what tools like those I mention before are intended to do.
Also, yeah, this is a ton of work. Automating deployments is absolutely the way to go. However, it's not without labor. Instead of spending your time doing manual processes, prone to error, repetitive, boring and slow, you spend time, and effort, automating stuff. It's not so much that work gets eliminated, rather it gets reoriented. Then, you get all the benefits of that automation. However, you do have to maintain it, grow it, expand it, and deal with issues within it. All work.

Transfer data in SQL Server 2008 R2

I have two databases on separate servers (dev and production). I need to move my data from dev to production, from multiple tables, without affecting the pre-existing data on production. Any idea if SQL Server Management Studio supports something like this, or am I going to have to write a script for it?
My situation in detail:
I have a tool which allows me to create surveys for my company. The tool is located on dev and also on production. Since I don't want to add test data to my production db, I am using the dev version of the tool to create my surveys and test them locally. The tool is tied to a few tables in my db, such as surveys, questions, answers, results, etc.
My current setup: when a survey is done and ready to launch, I have to use the production version of the tool and manually redo, on production, all of the work that I previously did on dev. This is not ideal at all, not only because of the time I have to spend doing it, but also because of the risk of making mistakes during the manual copying.
What I need to do:
The tables that I mentioned above already have production data in them, and they are available for my company to use. When I create a new survey I need to transfer only the records specific to the new survey (from all of those tables) from dev to production, without affecting anything that was there before.
Use Import and Export Data
Or add the DEV server as a linked server on your PROD server and then use INSERT/SELECT statements.
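A minimal sketch of that linked-server approach, assuming the linked server is registered as DEVSRV and using hypothetical database, table, and column names:

```sql
-- One-time setup on PROD (run by an admin): register the DEV instance.
EXEC sp_addlinkedserver @server = N'DEVSRV', @srvproduct = N'SQL Server';

-- Copy the rows of one newly created survey from DEV to PROD
-- without touching any existing production data.
DECLARE @SurveyId int = 42;   -- id of the survey built on DEV

INSERT INTO dbo.Surveys (SurveyId, Title, CreatedOn)
SELECT s.SurveyId, s.Title, s.CreatedOn
FROM DEVSRV.SurveyDb.dbo.Surveys AS s
WHERE s.SurveyId = @SurveyId
  AND NOT EXISTS (SELECT 1 FROM dbo.Surveys AS p WHERE p.SurveyId = s.SurveyId);

-- Repeat the same pattern for Questions, Answers, Results, etc.
-- If SurveyId is an IDENTITY column, wrap the insert in
-- SET IDENTITY_INSERT dbo.Surveys ON/OFF so the dev ids are preserved.
```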
You can use a database compare tool. For SQL Server I use SQL Delta, which allows you to automatically create a script to run against whichever database you wish: http://www.sqldelta.com/
You're not going to find any out-of-the-box solutions for this, but there are tools that can help once you've got a clear idea of what you're trying to accomplish -- in detail. A little time spent at this point to make sure you're really clear on what you expect to have happen will pay huge dividends when you move to production.
The scenario you're describing sounds like you've got some configuration-type data in your database alongside your transactional, or domain data. In other words, you've got changes that need to be promoted from your development environment to production in order for your application to work properly. This isn't unusual, but you've got to be pretty deliberate and very careful when you set up a promotion plan for a scenario like this -- after all, you don't want to push test data to your production system along with your configuration changes. It's critical, therefore, to identify the tables you're going to push from dev to prod and make sure those are the only tables you're pushing in that direction.
You also mentioned something about "without affecting the pre-existing data on production". Can you tell us more about this (maybe an example)? Typically, you'd want to keep specific tables (by convention) set up to move changes in one direction only -- ie, from dev to prod. If you've got tables that need to contain merged changes, you're going to have to apply even more attention to getting this right, because you need to deal with merge errors -- what happens when you've got data to push and it's already present in the target database, for instance?
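For the "already present in the target" case specifically, one hedged option in SQL Server 2008+ is a MERGE (an INSERT guarded by NOT EXISTS works too); the table, column, and linked-server names below are hypothetical:

```sql
-- Upsert one survey's header row: update it if it already exists on
-- production, insert it if it does not.
MERGE dbo.Surveys AS target
USING (
    SELECT SurveyId, Title, CreatedOn
    FROM DEVSRV.SurveyDb.dbo.Surveys
    WHERE SurveyId = 42
) AS source
    ON target.SurveyId = source.SurveyId
WHEN MATCHED THEN
    UPDATE SET Title = source.Title
WHEN NOT MATCHED THEN
    INSERT (SurveyId, Title, CreatedOn)
    VALUES (source.SurveyId, source.Title, source.CreatedOn);
```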
Once you've got a plan for what you actually want to move, some of the tools mentioned in other answers would probably work, or check out Redgate's tools (like SQL Data Compare) -- they make some really nice products to help with DB management tasks.
---- addendum ----
Based on edits to the question, here are a couple of additional thoughts:
(1) Allow your production surveys to have a "disabled" or "testing" mode, so you don't have to make your data changes in another environment. This lets you move stuff from dev to production only when actual development changes exist.
(2) Define a "package" mechanism to move a survey from one environment to another. This would allow you to deal with merge conflicts, ID changes, etc., generically and reliably. As a bonus, this would allow you to also move a production survey back to dev for debugging and testing purposes.

What are some good methods to push schema updates to end user databases?

This might be too broad, but it's a problem I'm having a bear of a time dealing with. We have an application that we distribute to our end users. It's running on top of a Derby back end. We can push out code changes fairly easily: the app goes out to our server, sees there's a new version, downloads it, overwrites the old code, and reboots.
But as we change our code, we also alter the schema of the Derby database. We don't have great methods to update this. Currently we can push SQL updates via FTP. When the program is connected to the internet, it looks for new SQL files, downloads them, and runs them.
Unfortunately a lot of our clients have limited internet access, so they get these updates intermittently. Sometimes, because the changes are big enough, their local DB schema gets out of sync with what we want. Or they get the code changes via CD but not the SQL changes (someone mails them the CD).
What I've been trying to do is create a SOAP service that can serve up XML representations of the schema. It's been a huge PITA to develop so far.
What are some methods people are currently using to maintain databases like this? I feel like I'm not the first to do this, so there might be better ways than what I'm doing.
Based on some comments here, here's an update:
Basically, I think we screwed ourselves early on by not adhering to strict versioning of the DB, so I don't know what state everyone's DB is in. A lot of people got custom installs built (groan at will). I need a tool that can tell the differences between their DB and an "official" copy.
I have a tool built, and it kind of works, but there are so…many…things to keep track of.
Can you distribute the DB changes as part of the code changes? Then, when the app restarts, it checks if it needs to run any updates on the DB.
Obviously, you'll need to version the DB schema to avoid applying the same update more than once.
I know some applications that do this (mostly in Ruby, but also in Java).
If you already have an update mechanism in place in your application that can download a program to alter the installed source code, why not package and run the schema changes as a part of that upgrade process? I would just run the updates as a part of the Java application then.
My team at work handles these changes by using the MyBatis Migration tool, which represents each schema change as a single migration script which contains the "make change" and "rollback" steps. A changelog table is stored in the database which lists which updates have been applied to that database, which makes it easy for the migrate command to determine which updates it needs to apply when run. This specific tool is probably only really useful when you control the database and have the ability to run shell commands and scripts to alter the database, but you can use the same concepts in your approach - package each schema change as an atomic unit and run them from within your program to bring the schema up to the current version, which you can track in the db itself.
You'll need a table containing the version of the database that the user is running, and then you'll need code to upgrade from version n to version n+1. Assuming you have a database user that has access to do schema changes, you can apply schema changes the same way you're now applying code changes.
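A minimal sketch of that idea in Derby-flavoured SQL (the table and column names are hypothetical; since plain Derby SQL scripts have no IF/ELSE, the "is the database at version n?" check would live in the Java code that runs the scripts):

```sql
-- Created once, when the versioning scheme is introduced.
CREATE TABLE schema_version (
    version    INTEGER   NOT NULL,
    applied_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO schema_version (version) VALUES (1);

-- Contents of the "upgrade 1 -> 2" script, which the updater runs only
-- when SELECT MAX(version) FROM schema_version returns 1:
ALTER TABLE customers ADD COLUMN loyalty_tier INTEGER;
INSERT INTO schema_version (version) VALUES (2);
```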

How to do deployment using an ALTER script in SSAS

Is anything wrong with creating an ALTER script for the entire Analysis Services database in SSMS on the development server and executing that script in SSMS on the production server, instead of deploying through BIDS?
No; in fact, you should never use BIDS to deploy to prod. BIDS always overwrites the management settings (security and partitions) of the target server.
The best option is to use the Deployment Wizard. It enables you to generate an incremental deployment script that updates the cube and dimension structures, and lets you customize how roles and partitions are handled. It takes as input the XML output files generated by building the SSAS project in BIDS, and you can run it in several modes:
Silent mode (/s): runs the utility in silent mode, without displaying any dialog boxes.
Answer file mode (/a): does not deploy; only modifies the input files.
Output mode (/o): no user interface is displayed; generates the XMLA script that would be sent to the deployment targets, but does not perform the deployment.
If you want a complete synchronization, you can use the Synchronize Database Wizard. It pretty much clones a database: when the destination database already exists, it performs metadata synchronization and incremental data synchronization; when it does not exist, it performs a full deployment and data synchronization.
I think the main disadvantage of scripting the whole database is that everything may be reprocessed. Also, if another team or team member is responsible for deploying the script it may be a lot harder to review and understand if everything is rebuilt with each update.
I work for Red Gate and we recently introduced a free tool called SSAS Compare to help manage this scenario. It helps you create a script containing just the changes you want to deploy.

SQL Server database change workflow best practices

The Background
My group has 4 SQL Server Databases:
Production
UAT
Test
Dev
I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production.
The Problem
The entire process is awkward for a few reasons.
Each person must manually track their changes. If I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the testers' time anyway.
Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know).
We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.
The Question
People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there other ways to organize this process?
For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
I should clarify that I'm not necessarily looking for diff tools. If that's the best way to sync our environments then fine, but if there's a better way, that's what I'm looking for.
An example of doing what I want really well is migrations in Ruby on Rails. Dead simple syntax, all changes are well documented automatically and by default, and determining which migrations need to run is almost trivially easy. I'd love it if there were something similar for SQL Server.
My ideal solution is 1) easy and 2) hard to mess up. Rails Migrations are both; everything I've done so far on SQL Server is neither.
Within our team, we handle database changes like this:
We (re-)generate a script which creates the complete database and check it into version control together with the other changes. We have 4 files: tables, user defined functions and views, stored procedures, and permissions. This is completely automated - only a double-click is needed to generate the script.
If a developer has to make changes to the database, she does so on her local db.
For every change, we create update scripts. These are easy to create: the developer regenerates the db script from their local db. All the changes are now easy to identify thanks to version control. Most changes (new tables, new views, etc.) can simply be copied to the update script; other changes (adding columns, for example) need to be written manually.
The update script is tested either on our common dev database, or by rolling back the local db to the last backup - which was created before starting to change the database. If it passes, it's time to commit the changes.
The update scripts follow a naming convention so everybody knows in which order to execute them.
This works fairly well for us, but it still needs some coordination if several developers heavily modify the same tables and views. This doesn't happen often, though.
The important points are:
database structure is only modified by scripts, except for the local developer's db. This is important.
SQL scripts are versioned by source control - the db can be created as it was at any point in the past
database backups are created regularly - at least before making changes to the db
changes to the db can be done quickly - because the scripts for those changes are created relatively easily.
However, if you have a lot of long lasting development branches for your projects, this may not work well.
It is by far not a perfect solution, and some special precautions need to be taken. For example, if there are updates which may fail depending on the data present in a database, the update should be tested on a copy of the production database.
In contrast to Rails migrations, we do not create scripts to reverse the changes of an update. This isn't always possible anyway, at least with respect to the data (the content of a dropped column is lost even if you recreate the column).
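A minimal sketch of what one of those update scripts might look like, with a purely hypothetical naming convention and example objects (the point is the pattern, not the specific names):

```sql
-- File: 2013-04-17_042_add_discount_and_active_customers.sql
-- (date + sequence number in the file name gives everyone the execution order)

-- New objects can usually be copied straight from the regenerated full script:
CREATE VIEW dbo.ActiveCustomers AS
    SELECT CustomerId, Name
    FROM dbo.Customers
    WHERE IsActive = 1;
GO

-- Changes to existing tables have to be written by hand:
ALTER TABLE dbo.Orders
    ADD Discount decimal(5,2) NOT NULL DEFAULT 0;
GO
```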
Version Control and your Database
The root of all evil is making changes in the UI. SSMS is a DBA tool, not a developer one. Developers must use scripts to make any sort of change to the database model/schema. Versioning your metadata and having an upgrade script from every version N to version N+1 is the only way that is proven to work reliably. It is the solution SQL Server itself uses to keep track of metadata changes (resource db changes).
Comparison tools like SQL Compare or vsdbcmd and .dbschema files from VS Database projects are just last resorts for shops that fail to adopt a properly versioned approach. They work in simple scenarios, but I've seen them all fail spectacularly in serious deployments. One simply does not trust a tool to make a change to a +5 TB table if the tool tries to copy the data...
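A minimal sketch of that N-to-N+1 pattern in T-SQL, with a hypothetical version table and example change:

```sql
-- Bootstrap: a single-row table recording the current schema version.
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
END;

-- Upgrade script "1 -> 2": runs only if the database is currently at version 1.
IF (SELECT Version FROM dbo.SchemaVersion) = 1
BEGIN
    ALTER TABLE dbo.Orders ADD ShippedOn datetime NULL;   -- the actual change
    UPDATE dbo.SchemaVersion SET Version = 2;
END;
```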
RedGate sells SQL Compare, an excellent tool to generate change scripts.
Visual Studio also has editions which support database compares. This was formerly called Database Edition.
Where I work, we abolished the Dev/Test/UAT/Prod separation long ago in favor of a very quick release cycle. If we put something broken into production, we will fix it quickly. Our customers are certainly happier, but in a risk-averse corporate enterprise it can be a hard sell.
There are several tools available to you. One is from Red Gate, called SQL Compare. Awesome and highly recommended. SQL Compare will let you diff the schemas of two databases and even build the SQL change scripts for you.
Note that they have been working on a SQL Server source control product for a while now as well.
Another option (if you're a Visual Studio shop) is the schema and data compare features that are part of Visual Studio (not sure which editions).
Agree that SQL Compare is an amazing tool.
However, we do not make any changes to the database structure or objects that are not scripted and saved in source control just like all other code. Then you know exactly what belongs in the version you are promoting because you have the scripts for that particular version.
It is a bad idea anyway to make structural changes through the GUI. If you have a lot of data, it is far slower than using ALTER TABLE, at least in SQL Server. You only want to use tested scripts to make changes to prod as well.
I agree with the comments made by marapet, where each change must be scripted.
The problem that you may be experiencing, however, is creating, testing and tracking these scripts.
Have a look at the patching engine used in DBSourceTools.
http://dbsourcetools.codeplex.com
It's been specifically designed to help developers get SQL Server databases under source-code control.
This tool will allow you to baseline your database at a specific point, and create a named version (v1).
Then, create a deployment target - and increment the named version to v2.
Add patch scripts to the Patches directory for any changes to schema or data.
Finally, check the database and all patches into source-code control, to distribute with devs.
What this gives you is a repeatable process to test all patches to be applied from v1 to v2.
DBSourceTools also has functionality to help you create these scripts, i.e. schema compare or script data tools.
Once you are done, simply send all of the files in the patches directory to your DBA to upgrade from v1 to v2.
Have fun.
Another "Diff" tool for databases:
http://www.xsqlsoftware.com/Product/Sql_Data_Compare.aspx
Keep the database version in a versioning table (a minimal sketch of such a table follows this list).
Keep the file name of every script that was successfully applied.
Keep the MD5 sum of each SQL script that has been applied. The check should ignore whitespace when calculating the MD5 sum, and it needs to be efficient.
Keep info about who applied a script and about when it was applied.
The database should be verified on application start-up.
New SQL scripts should be applied automatically.
If the MD5 sum of a script that was already applied has changed, an error should be thrown (in production mode).
Once a script has been released it must not be changed; it must be immutable in a production environment.
Scripts should be written so that they can be applied to different types of database (see Liquibase).
Since most DDL statements auto-commit on most databases, it is best to have a single DDL statement per SQL script.
DDL statements should be written so that they can be executed several times without errors. This really helps in dev mode, when you may edit a script several times: for instance, create a new table only if it does not exist, or even drop the table before creating the new one. For a script that has not been released yet, you can change it, clear its MD5 sum, and rerun it.
Each SQL script should be run in its own transaction.
Triggers/procedures should be dropped and recreated after each db update.
SQL scripts are kept in a version control system like SVN.
The name of a script contains the date it was committed, the relevant (Jira) issue id, and a short description.
Avoid adding rollback functionality to scripts (Liquibase allows it). It makes them more complicated to write and support. If you use exactly one DDL statement per script, and DML statements are run within a transaction, even a failing script will not be a big problem to resolve.
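A minimal sketch, in T-SQL, of the versioning table and of a released, idempotent, single-DDL-statement script along these lines (all names are illustrative; computing the MD5 sum and deciding whether a script still needs to run would be the job of whatever tool executes the scripts):

```sql
-- One row per script that has been applied to this database.
CREATE TABLE dbo.SchemaChangelog (
    ScriptName varchar(255) NOT NULL PRIMARY KEY, -- e.g. 2013-05-02_PROJ-123_add_status.sql
    Md5Sum     char(32)     NOT NULL,             -- checksum of the script, whitespace ignored
    AppliedBy  varchar(128) NOT NULL,
    AppliedAt  datetime     NOT NULL
);

-- A released script: exactly one DDL statement, written so that running
-- it a second time does not raise an error.
IF COL_LENGTH('dbo.Orders', 'Status') IS NULL
    ALTER TABLE dbo.Orders ADD Status int NULL;
```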
This is the workflow we have been using successfully:
Development instance: SQL objects are created/updated/deleted in the DB using SQL Server Management Studio, and all operations are saved to scripts that we include in each version of our code.
Moving to production: We compare schema between dev and prod db using SQL Schema Compare in Microsoft Visual Studio. We update prod using the same tool.