Partitions created outside BIDS - SSAS

In my cube, partition creation and processing are automated. Now, if I make any change to the cube structure through BIDS, deploying that change deletes all the partitions that are not defined in the BIDS project. Is there any way to avoid this?

Could you create a new solution in BIDS by importing from your server? That way any partition definitions will be present, along with any new changes you make.

BIDS normally works with local files. Each deployment overwrites the structure on the server with the version you have in your local files.
If you make structural changes on the server independently of BIDS that you want to keep, but also want to make further changes in BIDS, you can pull the current state of the server structure back into local files by selecting File/New/Project/Business Intelligence Projects/Import Analysis Services Database. Make sure you set the project settings as required at the bottom of the dialog before hitting the OK button.
Another possibility is to work with BIDS in online mode: in this mode, BIDS does not work on local files, but directly on the structure as it is on the server. To use this mode, select File/Open/Analysis Services Database, and select the server and database you want to open. Some menu entries in BIDS change, and each time you hit the "Save" toolbar icon or its menu counterpart, the changes are written directly to the server structure. Note, however, that you will not have a local copy of the database structure in this case, which means that, e.g., version-controlling your Analysis Services database structure is impossible.

Related

Creating a local database from server database in Visual Studio

I have a rather large database I am working with, and I am about ready to break something. To prevent this from affecting live data, how would I use the live database to set up a local database? Not sure if this is even possible, but I do know you can set up a local db.
You can create a SQL Server Data Tools database project, then right-click the project file and do an "Import..." to import the database to your local machine. Then you can deploy the local DB and it will be available in the SQL Server Object Explorer locally. This way you don't have to install SQL Server on your machine - everything's in Visual Studio. Hopefully you are developing with a small set of data locally.
Answer
Use Visual Studio's Data Comparison tool to synchronize data to your target database from your source after you've created the database (schema only, no data) in your local database server.
Steps
From Visual Studio's SQL Server Object Explorer:
A. Create the local database
Add two SQL Server connections: one that connects to your production server and one that connects to a local (development/testing) server. If you need help setting up a local server, take a look at SQL Server LocalDB.
Add a new database in your local server to receive the data (don't overthink this step).
B. Migrate the Schema
Right-click the source (production) database and click Schema Compare...
From the SqlSchemaCompare tab that opens, use the Select Target dropdown to select your local database as the target.
From the SqlSchemaCompare tab, click Compare.
Uncheck everything in the comparison results except for the Tables, Views, and Procedures (unless you know what you're doing), then click Update.
C. Migrate the Data
Right-click the source (production) database and click Data Comparison...
Follow the prompts to select the tables to migrate, then click Finish.
From the SqlDataCompare tab that opens, review the comparison results (it should make sense to you), then click Update Target.
That's it! Either your local database is ready with data, or you confused your target and source and wiped out all of your data in production. Either way, you're done for the day. The sketch below is one way to check which it is.
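A quick way to confirm the copy worked is to compare row counts between source and target. Here is a minimal sketch in Python using pyodbc (an assumption; any SQL Server client would do), with hypothetical server and database names:

    import pyodbc  # assumed installed: pip install pyodbc

    # Hypothetical connection strings; substitute your own servers/databases.
    SOURCE = ("DRIVER={ODBC Driver 17 for SQL Server};"
              "SERVER=ProdServer;DATABASE=AppDb;Trusted_Connection=yes;")
    TARGET = ("DRIVER={ODBC Driver 17 for SQL Server};"
              "SERVER=(localdb)\\MSSQLLocalDB;DATABASE=AppDb;Trusted_Connection=yes;")

    # Row counts per table from the catalog views
    # (index_id 0 = heap, 1 = clustered index).
    COUNTS = """
        SELECT t.name, SUM(p.rows)
        FROM sys.tables t
        JOIN sys.partitions p
          ON p.object_id = t.object_id AND p.index_id IN (0, 1)
        GROUP BY t.name
    """

    def row_counts(conn_str):
        with pyodbc.connect(conn_str) as conn:
            return dict(conn.cursor().execute(COUNTS).fetchall())

    source, target = row_counts(SOURCE), row_counts(TARGET)
    for table, rows in sorted(source.items()):
        status = "OK" if target.get(table) == rows else "MISMATCH"
        print(f"{status:9} {table}: source={rows}, target={target.get(table)}")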

Can I run my own DAX or MDX queries against Power BI?

I have a data model in Power BI Desktop. I'd like to publish it to the server, but I'd also like to have an internal report run MDX (or DAX) queries against it. Is this possible? Can I just create a connection string and connect to Power BI like to an SSAS cube? Maybe using the REST APIs?
Edit:
Thanks for your answers. Kyle gave me the best answer to my question, so I accepted his, but all of you made it clear that I'd be better off just using SSAS. This is what I did, with some hassle of setting up the HTTP bridge, but it works like a charm now.
It actually is possible in a literal sense - every time you run Power BI Desktop, it creates a behind-the-scenes instance of SSAS Tabular that you can connect to and run queries against. Obviously this isn't directly supported by Microsoft, but I leave these steps in case anyone else wants to know how:
Navigate to %user%/AppData/Local/Temp/Power BI Desktop
Open your PowerBI Desktop model
A new folder will appear in the temp folder, inside that is a folder called AnalysisServicesWorkspace1111111111 (numbers at end are random)
Inside that folder is a file, msmdsrv.port.txt, which contains the port number (portnum) on which the SSAS Tabular model is running
You can open SSMS and connect to Analysis Services server localhost:portnum
You can find the specific database instance either via SSMS or from the name of the GUID folder in the workspace folder (it'll be something like "33df46dd-8c77-46eb-bf01-8d545f626723.0.db")
Or you can use this as the server / catalog in an SSAS connection string, e.g.:
Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;
Initial Catalog=databasename;Data Source=localhost:portnum;
MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error
Also, of note for devs: inside that *.db folder is a SQLite database which contains all the Power BI model metadata. You can modify it via code and have it persist, as long as you do something trivial in the UI afterwards, such as selecting "add calculated column" and then clicking away.
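To save clicking through the temp folder each time, the discovery steps above are easy to script. A minimal sketch in Python (stdlib only); the UTF-16 encoding of the port file is an assumption based on the instances I have seen:

    import os
    from pathlib import Path

    # Workspace location from the steps above; the folder layout varies a
    # little between Power BI Desktop versions, so search recursively.
    root = Path(os.environ["LOCALAPPDATA"]) / "Temp" / "Power BI Desktop"

    for port_file in root.rglob("msmdsrv.port.txt"):
        port = port_file.read_text(encoding="utf-16").strip()  # assumed UTF-16
        print(f"Workspace: {port_file.parent}")
        print(f"SSMS server name: localhost:{port}")
        # Connection-string form, mirroring the example above (fill in the
        # database name found via SSMS or the GUID folder):
        print("Provider=MSOLAP.5;Integrated Security=SSPI;"
              f"Data Source=localhost:{port};")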
To my knowledge this is not possible. Whether there is a workaround or not, I don't know.
You're probably better served using SSAS and connecting to a model there, both from Power BI with the AS Connector and for whatever DAX queries you need to run against it.
If by "publish" you mean putting it out on SharePoint, then YES, there is a way to access it.
PowerPivot for SharePoint actually consists of two components. First, there is the Service Application that runs in the SharePoint farm and is responsible for performing data refreshes and usage analytics. The main part, however, is actually an instance of Analysis Services using the tabular engine. It's properly referred to as Analysis Services SharePoint Mode, and as of SharePoint 2013/SQL Server 2012 SP1, it can be installed standalone. However, it is most commonly installed on SharePoint front-end servers.
In this example, the SharePoint front-end server is named NautilusSP. You can also see that there is a model being hosted by the server already. The model is named by taking a workbook and adding a GUID to it. This is done by Excel Services the first time a model is interacted with. For example, if we add the file Health.xlsx, which contains an embedded PowerPivot model, and immediately refresh the object explorer in Management Studio, we will see that nothing has changed. However, if we then interact with the model at all, by clicking a slicer or opening a pivot table category, we will see that the model has been automatically created for us.
Caveats:
These models are temporary. If they haven't been used for a period of time, they get deleted. Also, if the source workbook is updated, a new model is automatically created upon first interaction. This can be seen if we edit and save our Health.xlsx workbook, and then open it in the browser and interact with it.
The original model will be deleted in a garbage-collection process. We therefore cannot reliably target these models, as any reference will become invalid relatively quickly.
The better and actually scalable option is to create a tabular model (we are talking SSAS here) and import this PowerPivot model into it.

Database monitoring scripts automation

I am working on the configuration part of the application, so I am new to this database-side configuration.
Databases: Oracle, SQL, DB2
I need some clarification on the questions below:
How to monitor database changes.
How to track the changes in the database with any specific tool or script.
How to roll back the database to any specific point of change (like we do in source control management).
How to compare the last two changes in a UI or with the help of any other tools.
You should check out IBM Data Studio. Of course, you can only track changes made by Data Studio itself; if you issue a DDL statement outside of Data Studio, Data Studio will not be aware of it.
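If you just need a lightweight script rather than a full tool, most engines expose object change dates in their catalog. Here is a minimal sketch for SQL Server (Oracle and DB2 have their own catalog views), assuming pyodbc and hypothetical server names; note it only tells you that an object changed, not what changed:

    import pyodbc  # assumed installed: pip install pyodbc

    # Hypothetical connection string; substitute your own server/database.
    CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=yes;")

    # sys.objects records create/modify dates for tables, views, procs, etc.
    QUERY = """
        SELECT name, type_desc, create_date, modify_date
        FROM sys.objects
        WHERE is_ms_shipped = 0
          AND modify_date >= DATEADD(day, -1, SYSDATETIME())
        ORDER BY modify_date DESC
    """

    with pyodbc.connect(CONN) as conn:
        for name, type_desc, created, modified in conn.cursor().execute(QUERY):
            print(f"{modified}  {type_desc:<20} {name}")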

SSAS - Changing Target Server

I am new to Analysis Services and data cubes. I inherited someone else's project, and I am using BIDS 2005. The company I work for recently relocated my analysis database to another server - let's say from "Server1\tst1" to "Server2\tst1". Now every time I reopen BIDS and want to deploy my data cube to the new server, I have to go to Project -> Properties -> Deployment and modify the target server value to deploy to the new location.
How do I change the default deployment location value so as not to recreate this issue every time I open BIDS and deploy?
The read/write attribute on the dwproj.user file was set to read-only by source control. Once the file was writable, the change in BIDS was retained each time the application reopened.
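For reference, clearing the read-only flag can itself be scripted, e.g. as part of a "get latest" step. A tiny Python sketch with a hypothetical project path:

    import os
    import stat

    # Hypothetical path to the BIDS project user file.
    user_file = r"C:\Projects\MyCube\MyCube.dwproj.user"

    # On Windows, os.chmod can only toggle the read-only attribute;
    # S_IWRITE clears it so BIDS can save the new deployment target.
    os.chmod(user_file, stat.S_IREAD | stat.S_IWRITE)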

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships, etc.
The thing is, I am making a lot of different changes on a TEST db, and the idea is to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time I make a change, but this means I have to copy and paste it into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them on the live server. See this article: Version Control and Your Database.
BTW, check out the SSMS Tools Pack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way (see the sketch below).
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
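To illustrate the script-and-diff approach: if you keep each release's full database script, a plain textual diff surfaces the changes. A minimal Python sketch with hypothetical file names; note the diff shows what changed, but is not itself a runnable upgrade script:

    import difflib
    from pathlib import Path

    # Hypothetical file names: full database scripts generated by the
    # Database Publishing Wizard at two release points.
    old = Path("db_v1.sql").read_text().splitlines(keepends=True)
    new = Path("db_v2.sql").read_text().splitlines(keepends=True)

    # Write a unified diff of the two versions.
    delta = difflib.unified_diff(old, new, fromfile="db_v1.sql", tofile="db_v2.sql")
    Path("delta.patch").write_text("".join(delta))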
Try using tablediff, which came with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
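For scripted runs, tablediff can be driven from the command line; here it is wrapped in Python to keep one language. The install path, server, database, and table names are hypothetical; -f asks tablediff to emit a T-SQL script that would bring the destination table in line with the source:

    import subprocess

    # Hypothetical install path (SQL Server 2005 tools live under version 90).
    TABLEDIFF = r"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe"

    subprocess.run([
        TABLEDIFF,
        "-sourceserver", "ProdServer",
        "-sourcedatabase", "AppDb",
        "-sourcetable", "Customers",
        "-destinationserver", "TestServer",
        "-destinationdatabase", "AppDb",
        "-destinationtable", "Customers",
        "-f", r"C:\temp\fix_Customers.sql",  # emit a T-SQL fix script
    ], check=True)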
We have a process where, when a developer is done with a change, they script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e., don't insert the new data if it is already there). This is important to do anyway, so you keep the history of changes for a given object in the database.
In the past, we would just enter each of the files that we wanted scripted out into a text file (e.g., FileListV102.txt). When we were ready to make a release, we would do a "get latest" on all of the files (from VSS back then). We then had a simple utility that would read the "file list" file and open each of those files in turn, concatenating them into an output file. That is pretty easy to code (see the sketch below).
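For the curious, that utility really is a few lines. A sketch in Python (the original was presumably written in something else; the file names come from the example above, the output name is hypothetical):

    from pathlib import Path

    # One script path per line, in the order they should be applied.
    file_list = Path("FileListV102.txt")
    release = Path("Release_V102.sql")  # hypothetical output name

    with release.open("w") as out:
        for line in file_list.read_text().splitlines():
            name = line.strip()
            if not name:
                continue
            out.write(f"-- {name}\n")          # note where each part came from
            out.write(Path(name).read_text())
            out.write("\nGO\n")                # batch separator between scripts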
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid-September) that takes all of the files and creates one big SQL script file out of them. It does this in the order you would expect based on the folder names - so files found in the "Tables" folder are run before those in the "Data" folder, etc.
Either way, once you are done, you have a big SQL script file that you can then apply to a fresh copy of production, and that is what you test against.
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new piece of software called Kal Admin.
It has a change management feature and makes distributing selected changes to other databases very easy. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.