Is there any way to get a list of partitions inside a MeasureGroup? And how can I process them with specific conditions on their names?
Thanks for your help.
SSAS Tabular is organized in JSON format (pretty easy to read: just open the model.bim file and you will quickly find what you want). But as you are talking about measure groups, you are in the world of Multidimensional cubes.
SSAS Multidimensional is organized in XML format. I recommend analyzing the structure of the cube's XMLA file to get a better understanding of how SSAS MD is organized; if you want to get your list manually, this is the way to go. How do you get the XMLA file?
Open your Cube Server in SSMS
Expand your database, expand "Cubes", right-click your cube >> "Script Cube as" >> "CREATE To" >> "File"
Open the file with an XML viewer. I can recommend MindFusion XML Viewer, as it opens the overwhelmingly big XML file completely collapsed, which gives you the chance to better understand the structure
The XMLA file will look something like this, and you can browse into the partitions manually to search for your information:
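A heavily abridged sketch of the relevant part of the structure (element names follow ASSL from memory, so verify against your own script; all names here are placeholders):

    <Cube>
      <Name>MyCube</Name>
      <MeasureGroups>
        <MeasureGroup>
          <Name>MyMeasureGroup</Name>
          <Partitions>
            <Partition>
              <Name>Partition_2017</Name>
              <!-- source query, slice, aggregation design, ... -->
            </Partition>
          </Partitions>
        </MeasureGroup>
      </MeasureGroups>
    </Cube>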
As you were asking for something more flexible, what I can recommend is the AMO library, which allows you to browse the same structure directly on the server, either via PowerShell or via C#. How?
DatabaseJournal.com has a very straightforward guide on how to script dimensions with AMO; information about measure groups can be accessed in the same fashion. If you are interested in how to use PowerShell with AMO to get a list of all the partitions inside a measure group, the sketch below should get you started.
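A minimal PowerShell sketch (server, database, cube, and measure group names are placeholders; treat this as a starting point, not a polished script):

    # Load the AMO assembly (version/location may differ per installation)
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Server
    $server.Connect("localhost")                             # your SSAS MD instance

    $db   = $server.Databases.FindByName("MyOlapDatabase")   # placeholder names
    $cube = $db.Cubes.FindByName("MyCube")
    $mg   = $cube.MeasureGroups.FindByName("MyMeasureGroup")

    # List all partitions whose name matches a condition
    $mg.Partitions | Where-Object { $_.Name -like "*2017*" } | ForEach-Object {
        Write-Output $_.Name
    }

    $server.Disconnect()

From there you can filter on any property of the partition objects, not just the name.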
I have tried to read up on this topic and I am still a bit unclear on how to proceed. This seemed like a fairly basic task, but it has been nowhere near as simple as I had assumed. I have several SQL queries written, and I want to be able to schedule them to run on a certain day each month and then automatically be exported to a .csv file in a selected folder. This will then allow them to be automatically uploaded into a BI and reporting tool that our firm uses (this part I know how to take care of).
I am fairly well versed in writing SQL queries, but I am pretty lost on everything beyond that. Right now I am using SQL Server Management Studio 17. I thought that scheduling jobs with the SQL Server Agent might be the solution, but the more I read about that and go down that path, the less convinced I am that it will let me export the query results into the .csv file I need for it to be picked up. It is also important that these results are exported without headers.
Does anyone have any solutions for this? I am happy to answer any follow up questions if I am at all unclear.
You can create a job within SQL Server Management Studio to handle the whole thing.
http://social.msdn.microsoft.com/Forums/en/transactsql/thread/7d2280cf-3b33-46f7-ba82-4131e8a841c0
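As a rough sketch of the export step itself (assuming the SqlServer PowerShell module is available; server, database, query, and path are placeholders), a SQL Server Agent job step of type PowerShell could run the query and write a headerless CSV:

    # Run the query and write the results to CSV without a header row
    Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDatabase" `
        -Query "SELECT col1, col2 FROM dbo.MyTable" |
        Select-Object col1, col2 |       # keep only the real columns (drops DataRow noise)
        ConvertTo-Csv -NoTypeInformation |
        Select-Object -Skip 1 |          # drop the header line
        Set-Content "C:\Exports\MyReport.csv"

Alternatively, a CmdExec job step calling sqlcmd with the -h -1 switch also suppresses headers.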
I'm using Pentaho with Saiku. It's common to be exploring data and find some interesting info; I then need to save the query (columns, dimensions and filters) that produced that info.
But I can't find a way to do that. I'm able to click the save button and save a *.saiku file, but when I open those files Saiku shows an empty query, and I need to select a cube and build the whole query again.
Is there an easy way to save a Saiku query and open it again, so that the columns, rows and filters are already selected and the report (table or chart) is queried and presented automatically?
I had some problems saving Saiku queries recently, with the latest Pentaho version. I think you can try the solution presented in http://jira.pentaho.com/browse/BISERVER-13666, which consists of moving some jar files:
Source folder: ../system/sparkl/lib
Destination folder: ../system/saiku/lib
files:
cpf-core-7.1.0.0-12.jar
cpf-pentaho-7.1.0.0-12.jar
cpk-core-7.1.0.0-12.jar
cpk-pentaho5-7.1.0.0-12.jar
--
Also, if you want a simpler and faster way, just for emergencies: switch your Saiku query to MDX mode and copy the resulting MDX query, so you can use it later.
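For illustration, the copied query might look something like this (a hypothetical example against Pentaho's SteelWheels sample cube; your cube, measure and member names will differ):

    SELECT
      NON EMPTY {[Measures].[Sales]} ON COLUMNS,
      NON EMPTY {[Markets].[EMEA].Children} ON ROWS
    FROM [SteelWheelsSales]

You can paste the saved MDX back into MDX mode later to rerun the same report.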
I have a data model in Power BI Desktop. I'd like to publish it to the server, but I'd also like to have an internal report run MDX (or DAX) queries against it. Is this possible? Can I just create a connection string and connect to Power BI like to an SSAS cube? Maybe using the REST APIs?
Edit:
Thanks for your answers. Kyle gave me the best answer to my question, so I accepted his, but all of you made it clear to me that I'd be better off just using SSAS. This is what I did, with some hassle setting up the HTTP bridge, but it works like a charm now.
It actually is possible in a literal sense - every time you run PowerBI, it creates a behind-the-scenes instance of SSAS Tabular that you can connect to and run queries against. Obviously this isn't directly supported by Microsoft, but I leave these steps in case anyone else wants to know how:
Navigate to %USERPROFILE%\AppData\Local\Temp\Power BI Desktop
Open your PowerBI Desktop model
A new folder will appear in the temp folder, inside that is a folder called AnalysisServicesWorkspace1111111111 (numbers at end are random)
Inside that folder is a file, msmdsrv.port.txt, which contains the port number (portnum) on which the SSAS Tabular model is running
You can open SSMS and connect to Analysis Services server localhost:portnum
The specific database instance you can find either via SSMS or the name of the GUID folder in the workspace folder (it'll be something like "33df46dd-8c77-46eb-bf01-8d545f626723.0.db")
Or you can use this as the server / catalog in an SSAS connection string, i.e.:

    Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;
    Initial Catalog=databasename;Data Source=localhost:portnum;
    MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error
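As a convenience, a small PowerShell sketch to locate the port file automatically (assuming the temp-folder layout described above; untested, so treat it as a starting point):

    # Find the port of the hidden Power BI Desktop SSAS instance
    $portFile = Get-ChildItem "$env:LOCALAPPDATA\Temp\Power BI Desktop" `
        -Recurse -Filter "msmdsrv.port.txt" | Select-Object -First 1
    $portnum  = (Get-Content $portFile.FullName).Trim()
    Write-Output "Connect to localhost:$portnum"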
Also, of note for devs: inside that *.db folder is a SQLite database which contains all the Power BI model metadata. You can modify it via code, and the change will persist as long as you then do something trivial in the UI, such as selecting "add calculated column" and clicking away.
To my knowledge this is not possible. Whether there is a workaround or not, I don't know.
You're probably better served using SSAS and connecting to a model there, both from Power BI via the AS Connector and for whatever DAX queries you need to run against it.
By publish, if you mean putting it out on SharePoint, then YES, there is a way to access it.
PowerPivot for SharePoint actually consists of two components. First, there is the Service Application that runs in the SharePoint farm and is responsible for performing data refreshes and usage analytics. The main part, however, is actually an instance of Analysis Services using the tabular engine. It's properly referred to as Analysis Services SharePoint Mode, and as of SharePoint 2013/SQL Server 2012 SP1 it can be installed standalone. However, it is most commonly installed on SharePoint front end servers.
Say the SharePoint front end server is named NautilusSP, and a model is already being hosted by the server. A model is named by taking the workbook and adding a GUID to it. This is done by Excel Services the first time that a model is interacted with. For example, if we add the file Health.xlsx, which contains an embedded PowerPivot model, and immediately refresh the object explorer in Management Studio, we will see that nothing has changed. However, if we then interact with the model at all, by clicking a slicer or opening a pivot table category, we will see that the model has been created for us automatically.
Caveats:
These models are temporary. If they haven't been used for a period of time, they get deleted. Also, if the source workbook is updated, a new model is automatically created upon first interaction. This can be seen if we edit and save our Health.xlsx workbook, and then open it in the browser and interact with it.
The original model will be deleted in a garbage collection process. We therefore cannot reliably target these models, as any reference will become invalid relatively quickly.
The better, and actually scalable, option is to create a tabular model (we are talking SSAS here) and import this PowerPivot model into it.
I'm a programmer (mostly C++) who has moved into a non-software workplace. However, I don't have much experience with database stuff at all.
TL;DR: If we compare Crystal Reports to just writing scripts that execute SQL queries and parse the results, is there anything that CR can do that isn't possible via SQL queries & scripts? I'm talking purely in terms of extracting data - not making pretty documents.
Detail:
At my workplace they have a process where you run a bunch of Crystal Reports, modify the date range to the current month, manually export each to Excel, delete the rows and columns that aren't needed, and then cut and paste into a summary Excel document that is used by management.
To me, this is pretty crazy and stupid. I'd like to automate/script most of it.
So I have two options:
Learn Crystal Reports and try to modify the existing reports to be more automated.
Dump CR and just learn SQL and do the whole thing programmatically with scripts working with CSV files or something.
I'd much rather learn SQL since it's more general and useful. But I need to be assured that I can get the data output that I need (without writing a million lines of code to reproduce CR myself.)
So yeah, I'm looking for an answer like, "The two are equivalent. Anything you can do in CR you can do easily via scripts and SQL," or "If you need to group records into categories based on a parameter and then sum one of their fields, then CR will do it much more easily than raw code," to push me in one direction or another.
Edit:
Some additional detail: at the moment my Crystal reports run a database query, and then Crystal does things like, "don't display the records that are returned; instead, group the records by Field A and then display the count of how many records are in each group."
Is functionality like this difficult to reproduce via SQL coding? I wouldn't want to have to write a Python (or whatever) script to parse and manipulate the data from plaintext CSV, for example.
You can't just compare SQL and CR - they have different purposes. SQL (in this context) is the data source; CR is a pretty-output formatter. For Excel you would need data, not formatted output. Excel combined with SQL can give you all the CR options (dynamic crosstab reports, charts etc.) that you can't get directly from SQL data.
BTW, creating SQL views or procedures is often needed to overcome CR limitations; from this standpoint SQL has a lot more options than CR.
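For instance, the grouping scenario from the question's edit is a one-liner in SQL (table and column names assumed for illustration):

    -- Group by Field A and return the count per group instead of the detail rows
    SELECT FieldA, COUNT(*) AS RecordCount
    FROM dbo.MyTable
    GROUP BY FieldA
    ORDER BY FieldA;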
I personally would go with the SQL+Excel route. In our company we're simply using SQL+CR without postprocessing, sometimes SQL+Excel. Our customers use different approaches.
But as others have said, the choice of tools depends on more things. Who has to redesign the reports? Who will maintain them? How often do requirements change? Are there other uses for the CR reports besides sourcing Excel tables? Who will be woken up at night if the reports do not work?
Management perspective:
In many, I would say most, cases management does not know SQL. So if a manager, e.g. in HR, wants to know the status of something, how will he get that status? This is where Crystal Reports comes into the picture: using Crystal Reports they do not have to worry about SQL; they just enter the required fields and get their data.
Programmer perspective:
Simple data outputs can be achieved through SQL, but consider a scenario where you need to pull details as well as a summary. I agree it can be done via SQL, but consider the overhead in time and proficiency required to develop such output using SQL; I bet it won't be as easy as in Crystal. So I would say learn both SQL and Crystal, and you can then choose the right tool for each requirement.
You can write SQL and drop it into the Crystal Report. Best of both worlds, and possibly faster performance than the drag-and-drop Crystal functionality.
You will see some response time lag when the report runs.
There are actually a few things that Crystal Reports can do that are very tricky using plain SQL queries, as Crystal Reports can access the entire dataset in a single formula and can do things at runtime.
However, unless you have some really crazy complex Crystal Reports, I would recommend building a tool in Excel that can pull the info straight into a new sheet with one click.
I did this and it got me a promotion, not kidding :P
I have a custom Excel add-in I can give you the code to; it basically does this:
On open, connects to the database and downloads a list of menu options connected to views and procedures
Adds these menu options into a new Ribbon tab within Excel
When one is clicked, runs the view and dumps the entire dataset (properly formatted) into a new sheet
The advantage of this is that you can update the main menu list and each view it references without making any changes to the file or re-issuing anything to everyone.
Crystal could be helpful if you want to create a document with a specific layout, logos etc. and show some data on it. Exporting to Excel from a Crystal report is not easy - usually there are a lot of empty columns and rows, and each report has to be tweaked to avoid that.
If you need to export data from a SQL Server database to Excel, your best option will be SSIS (I guess you have a license for SQL Server). If you don't have a license for SSIS, or you are using, for example, an Access database, there are also some inexpensive tools which can retrieve data from any database (not just SQL Server) and export it to Excel. I would suggest you check this one: http://www.r-tag.com. It can run Crystal reports and SQL reports, so you can start using your Crystal reports immediately and transform them to SQL reports whenever you have time. Both kinds of report can be exported to Excel.
I fixed this by editing the SQL behind the Excel query to use Left(Column_maxLength, 250); this resolved my issue. In my case, reading only the leftmost 250 characters is enough.
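In other words, something like this in the source query (table name assumed; Column_maxLength stands for whatever long text column causes the trouble):

    -- Truncate the long text column to its first 250 characters at the source
    SELECT LEFT(Column_maxLength, 250) AS Column_maxLength
    FROM dbo.MyTable;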
I'm trying to automate my deployment, and I've been trying to use the VSDBCMD command line tool to compare the schemas of my development and staging databases. I can get it to work comparing everything, but what I can't figure out is how to filter which objects get compared. At the moment it compares everything, which means it wants to add or remove users, full text catalogs, file groups etc.
Basically I just want to compare tables, stored procedures, views, functions and a few other things. From within visual studio you can set what objects to compare but I can't see from the documentation how to do this using the command line tool.
Anyone have any ideas?
Unfortunately you can't. The best explanation I have seen is here: http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/75656877-95e1-4c13-8540-8a445f47ca57
I'm not at my workstation now, but I believe that it is possible to filter out user scripts by checking the "ignore permissions" option in the db settings file. You might try experimenting with the other ignore settings to see if you can get closer to your goal that way.
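If it helps, those options live in the project's .sqldeployment settings file, which is MSBuild-style XML. A fragment might look like the following (property names quoted from memory, so verify against your own settings file before relying on them):

    <!-- Hypothetical fragment of a .sqldeployment file -->
    <PropertyGroup>
      <IgnorePermissions>True</IgnorePermissions>
      <IgnoreRoleMembership>True</IgnoreRoleMembership>
    </PropertyGroup>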