Is there a way to deploy only the cube schema, without processing the cube? In Visual Studio, when you deploy a cube, the default seems to be "Deploy and Process".
The problem is that processing takes so much time, and my main purpose is just to write some MDX script and see whether it works against the cube structure. Processing the whole cube seems like overkill, hence the question.
You can also set this as a deployment option under the project properties. The value should be set to "Do Not Process".
Yes, you can programmatically deploy just the MDX script. There is also a download on Microsoft's web site called BIDS Helper that has a facility to do this. It's a plugin for Visual Studio that provides various tools, including a facility to deploy an MDX script to a cube.
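For the programmatic route, a minimal C# AMO sketch might look like this (hedged: the server, database, cube and script file names are placeholders, and it assumes a classic multidimensional cube with a single MDX script):

using System.IO;
using Microsoft.AnalysisServices;

// Connect and replace only the cube's MDX script; nothing is processed.
// "localhost", "MyOlapDb", "MyCube" and the script path are placeholders.
var server = new Server();
server.Connect("Data Source=localhost");
Database db = server.Databases.FindByName("MyOlapDb");
Cube cube = db.Cubes.FindByName("MyCube");

MdxScript script = cube.MdxScripts[0];        // the cube's calculation script
script.Commands[0].Text = File.ReadAllText("MdxScript.mdx");
script.Update();                              // pushes just the script to the server
server.Disconnect();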
I am trying to automate the build and deployment of a number of tabular cubes on SQL Server 2012.
At this point I can generate deployment artifacts (.asdatabase, .deploymentoptions and .deploymenttargets files), I can generate an XMLA script using the Deployment Wizard from the command prompt, and I can deploy it using an AMO PowerShell script. However, Microsoft's recommendation is to use AMO for everything.
If I were to eliminate the Deployment Wizard from the equation, is there an AMO class that ingests the .asdatabase file and generates the database with all the objects as defined within the "Database" tag? Or do I need to parse the XML, extract each element, and find the equivalent class/method that will produce that object in the database? That seems orders of magnitude more complex than using the wizard.
I think sticking with the Deployment Wizard makes sense. It can be automated with command line parameters. I also suspect you can edit the Model.deploymentoptions file in the same folder as the .asdatabase file, but I can't recall whether you do that in the Deployment Wizard or manually.
Your other option is automating the deployment with a custom MSBuild task. Here is a walkthrough with screenshots. Note that once you upgrade to SSAS 2016 and 1200 compatibility level models, this option will break because the model is stored as JSON, not XML.
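For what it's worth, the pure-AMO route the question asks about can be sketched in C# roughly like this (an assumption-laden sketch: it ignores the .deploymentoptions/.deploymenttargets settings, the instance name is a placeholder, and like the MSBuild approach it only applies to XML-based, pre-1200 models):

using System.Xml;
using Microsoft.AnalysisServices;

// The .asdatabase file is essentially an ASSL <Database> element, so AMO's
// Utils.Deserialize can turn it back into a Database object.
var server = new Server();
server.Connect(@"Data Source=localhost\TABULAR");   // placeholder instance name

Database db;
using (var reader = new XmlTextReader("Model.asdatabase"))
{
    db = (Database)Utils.Deserialize(reader, new Database());
}
server.Databases.Add(db);
db.Update(UpdateOptions.ExpandFull);   // creates the database with all its objects
server.Disconnect();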
Currently, we are manually running DB scripts (SQL Server 2012) outside of our CI/CD deployment. What are some ways (including toolsets) we can automate the deployment of DB changes using TFS 2015 Update 3?
There are really two approaches here, both of which work with TFS. TFS really just facilitates the execution of whatever scripting you use to update your database, including your custom, handcrafted scripts.
There is the state-based approach, which uses comparison technology to look at your VCS/dev/test/staging database and compare it to production. SQL Source Control and the DLM Automation Suite from Redgate Software do this, as do other comparison tools. You would use a command line or programmatic interface to set your source and target, capture the output, and then use this as an artifact in your release process. You might include a review of the artifact as a step in your flow.
Note that there are some changes state-based comparisons don't handle well: renames, splits, merges, data movement, and a few others. Some comparison tools have ways around this, some do not, so be aware this may be an issue. If you have a more mature database, perhaps not, but you should consider it. SQL Source Control allows custom migration scripts, which can handle these cases.
The other approach is a script runner or migration strategy, where each change you make to a dev database is captured as an ordered script and a framework executes these in order, as needed. Some people prefer this since you can see exactly what code will be executed at development and deployment time. ReadyRoll from Redgate Software, Liquibase, Rails Migrations, DbUp and FlywayDB all use this strategy.
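To make the migration style concrete, here is a minimal DbUp sketch in C# (the connection string is a placeholder; DbUp runs each embedded .sql script once, in order, and records what has already been applied):

using System;
using System.Reflection;
using DbUp;

// Apply any embedded *.sql scripts that have not yet run against this database.
var upgrader = DeployChanges.To
    .SqlDatabase("Server=.;Database=MyDb;Trusted_Connection=True;")  // placeholder
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    .LogToConsole()
    .Build();

var result = upgrader.PerformUpgrade();
if (!result.Successful)
    Console.WriteLine(result.Error);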
Neither of these is better or worse. Both work, both have pros and cons, but really the choice comes down to your comfort level and preference.
Disclosure: I work for Redgate Software.
If deploying DB changes just means publishing SQL Server Database Projects (.sqlproj files) with Team Foundation Build in Team Foundation Server, there are several ways to achieve this:
Use an MSBuild task with appropriate arguments to publish your SQL project during the build.
Add a deploy target to your .sqlproj file and run the target after the build completes.
Or add a "Batch Script" step in your build definition to run SqlPackage.exe to publish the .dacpac file.
For more details, please refer to this blog: Deploying SSDT During Local and Server Build.
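As a rough illustration of the SqlPackage.exe option, here is a hedged C# wrapper (every path and name below is a placeholder; the same arguments work directly in a batch script step):

using System.Diagnostics;

// Publish a compiled .dacpac with SqlPackage.exe; adjust paths to your environment.
var psi = new ProcessStartInfo
{
    FileName = @"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe",
    Arguments = "/Action:Publish " +
                "/SourceFile:MyDatabase.dacpac " +
                "/TargetServerName:MyServer " +
                "/TargetDatabaseName:MyDatabase",
    UseShellExecute = false
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}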
As for TFS 2015, you can also try the SQL Server Database Deployment task:
Use this task to deploy a SQL Server database to an existing SQL Server instance. The task uses a DACPAC and SqlPackage.exe, which provides fine-grained control over database creation and upgrades.
I have a data model in Power BI Desktop. I'd like to publish it to the server, but I'd also like to have an internal report run MDX (or DAX) queries against it. Is this possible? Can I just create a connection string and connect to Power BI as I would to an SSAS cube? Maybe using the REST APIs?
Edit:
Thanks for your answers. Kyle gave me the best answer to my question, so I accepted his, but all of you made it clear that I'd be better off just using SSAS. This is what I did, with some hassle setting up the HTTP bridge, but it works like a charm now.
It actually is possible in a literal sense - every time you run Power BI Desktop, it creates a behind-the-scenes instance of SSAS Tabular that you can connect to and run queries against. Obviously this isn't directly supported by Microsoft, but I'll leave these steps in case anyone else wants to know how:
Navigate to %USERPROFILE%\AppData\Local\Temp\Power BI Desktop
Open your Power BI Desktop model
A new folder will appear in the temp folder; inside it is a folder called AnalysisServicesWorkspace1111111111 (the numbers at the end are random)
Inside that folder is a file, msmdsrv.port.txt, which contains the port number (portnum) on which the SSAS Tabular model is running
You can open SSMS and connect to the Analysis Services server localhost:portnum
You can find the specific database either via SSMS or from the name of the GUID folder in the workspace folder (it'll be something like "33df46dd-8c77-46eb-bf01-8d545f626723.0.db")
Or you can use this as the server / catalog in an SSAS connection string, e.g.:
Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;
Initial Catalog=databasename;Data Source=localhost:portnum;
MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error
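From code, the same connection works with ADOMD.NET; here is a hedged C# sketch (the port and GUID catalog name are the values discovered in the steps above, and MyTable is a placeholder):

using System;
using Microsoft.AnalysisServices.AdomdClient;

// Query the hidden SSAS Tabular instance behind Power BI Desktop.
// Replace 12345 and the catalog name with the values found above.
using (var conn = new AdomdConnection(
    "Data Source=localhost:12345;Initial Catalog=33df46dd-8c77-46eb-bf01-8d545f626723.0.db"))
{
    conn.Open();
    var cmd = conn.CreateCommand();
    cmd.CommandText = "EVALUATE ROW(\"RowCount\", COUNTROWS(MyTable))";  // DAX; MyTable is a placeholder
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            Console.WriteLine(reader.GetValue(0));
}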
Also, of note for devs: inside that *.db folder is a SQLite database which contains all the Power BI model metadata. You can modify it via code and have it persist, as long as you then do something trivial in the UI such as selecting "add calculated column" and clicking away.
To my knowledge this is not possible. Whether there is a workaround or not, I don't know.
You're probably better served using SSAS and connecting to a model there, both from Power BI with the AS Connector and for whatever DAX queries you need to run against it.
If by publish you mean putting it out on SharePoint, then yes, there is a way to access it.
PowerPivot for SharePoint actually consists of two components. First, there is the Service Application that runs in the SharePoint farm and is responsible for performing data refreshes and usage analytics. The main part, however, is actually an instance of Analysis Services using the tabular engine. It's properly referred to as Analysis Services SharePoint Mode, and as of SharePoint 2013/SQL Server 2012 SP1 it can be installed standalone. However, it is most commonly installed on SharePoint front end servers.
In this example, the SharePoint front end server is named NautilusSP. You can also see that there is a model being hosted by the server already. The model is named by taking a workbook and adding a GUID to it. This is done by Excel Services the first time a model is interacted with. For example, if we add the file Health.xlsx, which contains an embedded PowerPivot model, and immediately refresh the object explorer in Management Studio, we will see that nothing has changed. However, if we then interact with the model at all, by clicking a slicer or opening a pivot table category, we will see that the model has been automatically created for us.
Caveats:
These models are temporary. If they haven't been used for a period of time, they get deleted. Also, if the source workbook is updated, a new model is automatically created upon first interaction. This can be seen if we edit and save our Health.xlsx workbook, and then open it in the browser and interact with it.
The original model will be deleted in a garbage collection process. We therefore cannot reliably target these models, as any reference will become invalid relatively quickly.
The better, and actually scalable, option is to create a tabular model (we are talking SSAS here) and import this PowerPivot model into it.
We currently have a rather manual, fiddly, messy & error-prone way of running SQL deployment scripts when we update our clients' software installations. We're considering finding a 3rd party SQL deployment tool to automate this process.
However, I'm pushing the idea of building our own SQL deployment tool into the application itself. It would be simple - on application startup, it would:
1) Check the existing database schema version (eg. "35")
2) Check against "up to date" database schema version (eg. "38")
3) Retrieve relevant SQL deployment scripts from resource files (eg. "36", "37", "38")
4) Lock the database and run each required SQL deployment script
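A minimal C# sketch of those four steps (hedged: the SchemaVersion table, connectionString variable, and LoadEmbeddedScript helper are all hypothetical names):

using System;
using System.Data.SqlClient;

// Startup upgrade loop; SchemaVersion, connectionString and LoadEmbeddedScript
// are hypothetical. Each upgrade script runs inside a transaction.
const int targetVersion = 38;   // the "up to date" version baked into this build

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    int current;
    using (var cmd = new SqlCommand("SELECT Version FROM SchemaVersion", conn))
        current = (int)cmd.ExecuteScalar();

    for (int v = current + 1; v <= targetVersion; v++)
    {
        string script = LoadEmbeddedScript($"Upgrade.{v}.sql");   // from resource files
        using (var tx = conn.BeginTransaction())
        {
            new SqlCommand(script, conn, tx).ExecuteNonQuery();
            var bump = new SqlCommand("UPDATE SchemaVersion SET Version = @v", conn, tx);
            bump.Parameters.AddWithValue("@v", v);
            bump.ExecuteNonQuery();
            tx.Commit();
        }
    }
}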
Note that this would still be run by an IT technician in case any errors occurred, not by end users.
It seems unorthodox but I don't really see any problem. Your thoughts?
I don't see anything inherently wrong with this.
At a company I've worked for, they built a custom SQL script installer that would allow them to automatically apply changes to the database, roll back the changes if necessary, and keep tabs on the version of what had been applied.
No matter the desired result of the application, you'll need to set conventions (e.g. database releases should follow this folder structure) and identify the needs and processes that will be used in running the tool (i.e. just how automated you'll make it).
Don't build your own. Far too common a problem for a bespoke solution.
You're looking for a database migration tool; my recommendation would be Liquibase. It can be run from the command line or integrated into the build process. A feature that is especially valuable to me is the generation of SQL upgrade (and downgrade) scripts, which are often demanded from us when supporting production installs.
For a more detailed listing of alternative migration tools, see the following answer:
Migrations for Java
Visual Studio has a Database Project for SQL Server. This has a number of advantages: it hosts configuration settings and database objects in one place. The .sql files are part of the regular .NET solutions - visible in the Solution Explorer and editable in Visual Studio. And there is a mechanism for generating a deployment script. With each individual database object in its own file, the tracking of changes and source control is greatly simplified.
Has anyone had any success using Database Projects with "non-SQL Server" databases? We use Sybase, which uses T-SQL and is very similar to SQL Server, so I'm hopeful.
Or is there an alternative approach? I guess I could use a standard project (.csproj) and call a custom command-line application as part of the post-build step to convert the .sql files into a deployment script.
Any ideas would be welcome.
Thanks
OK, I'll answer my own question.
I added all of our SQL objects to their own .sql files within a Visual Studio .dbproj project. However, minor syntactic incompatibilities between the Sybase version of RAISERROR and the Microsoft version caused the validation code built into Visual Studio to be unhappy. The problem with the database project was that this actually caused a compilation error, which basically made it a show-stopper.
So I scrapped that idea and added the .sql files to a standard .csproj project file. I then implemented some custom code that would load all of the .sql files and aggregate them into a deployment script when invoked. I added a call to the custom code to the post-build of the .csproj file so that whenever it was compiled it would output a deployment script, which works like a dream with our build server.
In order to get some of the benefits of the .dbproj, I looked into writing a full SQL parser, but was quickly discouraged by some of the posts on SO. Instead I did some rudimentary parsing with regex, which got me a few cool features without a lot of effort:
The code could detect dependencies between the various .sql files, and add them to the deployment script in the correct order to avoid sysdepends warnings.
Where there were no dependencies, objects were ordered based on the object type (stored procedure, function, grant statement, etc) and then by name so that the resulting script was always ordered the same - which is very important if you need to diff two versions of the script.
The deployment script can figure out some of the required permissions, so I don't need to keep track of all of the GRANT statements.
Stored procedures that are in the database but not in the script can be dropped automatically - so I don't need to keep track of what state each database is in - we just run the script and everything is in the correct state.
We have a few stored procedures that our automated tests call that shouldn't be deployed. The code can detect these and include them in a Debug build and exclude them in a Release build.
The custom code also generates a diff script that determines what changes the deployment script will make to a database and prints them out. This allows the person who is running the script to get an idea of what it will do. For example, the diff script might tell them that no changes will be made - so they don't need to run the deployment script at all - which is kind of handy if it saves them logging in at 3am to take a database offline and take backups etc.
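For illustration only, the rudimentary regex-based dependency detection might look something like this (a hedged sketch, not the actual code):

using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

// Naive scan: a script "depends on" any known object whose name appears in it.
// Crude, but enough to order CREATE statements and avoid most sysdepends warnings.
static IEnumerable<string> FindReferences(string sqlFile, IEnumerable<string> knownObjectNames)
{
    string sql = File.ReadAllText(sqlFile);
    foreach (var name in knownObjectNames)
        if (Regex.IsMatch(sql, $@"\b{Regex.Escape(name)}\b", RegexOptions.IgnoreCase))
            yield return name;
}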
So the end result is that all of my SQL objects are in separate files making them easy to work with in Visual Studio and manage under source control. For the first time since I started this job, I can look at the history in source control and tell what files have been changed (before this we had one enormous .sql file with absolutely everything in it).