I'm creating cubes (XML schemas) via Schema Workbench or the Ivy Schema Editor.
When I publish them, I would like to know where the schemas (mondrian.xml files) are actually saved. What is the location of these files?
Thanks,
Which version of the Pentaho BA Server are you using? Pre or post 5.0?
Pre 5.0: you choose the file path when publishing. The path is under your ${BISERVER}/pentaho-solutions.
5.0 and beyond: there is no physical file; the schema is stored only in Pentaho's Jackrabbit repository.
If you create any Analysis in Mondrian 5.*, and you have a test server and a prod server, don't export Mondrian schemas from one to the other. For some mysterious reason you won't be able to get rid of them later.
This was my case: I started searching for pentaho-solutions/system/olap/datasources.xml in order to delete the problematic Mondrian schema, but that file simply doesn't exist anymore.
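For reference, in pre-5.0 installations that file registered each published catalog with entries roughly like the following sketch (URLs, catalog names, and paths are illustrative):

    <DataSources>
      <DataSource>
        <DataSourceName>Provider=Mondrian;DataSource=Pentaho</DataSourceName>
        <URL>http://localhost:8080/pentaho/Xmla</URL>
        <ProviderType>MDP</ProviderType>
        <AuthenticationMode>Unauthenticated</AuthenticationMode>
        <Catalogs>
          <!-- One Catalog element per published Mondrian schema -->
          <Catalog name="SteelWheels">
            <DataSourceInfo>Provider=mondrian;DataSource=SampleData</DataSourceInfo>
            <Definition>solution:steel-wheels/analysis/steelwheels.mondrian.xml</Definition>
          </Catalog>
        </Catalogs>
      </DataSource>
    </DataSources>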
All data is now saved with the help of Jackrabbit. Jackrabbit stores your Mondrian schemas, together with all other reports and analyses, in the database (the path is given in the Jackrabbit configuration file). But in the database you can only see their IDs, so there is no way to get rid of a single object: you either leave everything or delete everything by truncating the table. The main problem is that the same table holds not only schemas but also every other report you have uploaded to the server.
I'm using SSDT to create my Tabular model. I'm creating a table that I'm partitioning (two weeks of data per partition, i.e. 24 partitions per year).
Usually I prepare two years of partitioned data (meaning 48 partitions).
When I deploy the model to Analysis Services, I can access it from SSMS by connecting to my Analysis Services instance.
My question is:
I've managed to create an automated script that generates the XMLA query for creating the partitions. I execute it in SSMS and can see the partitions being created. However, when returning to SSDT and opening the solution, these partitions are not reflected there. Is there a way to "force" SSDT to read the metadata from the Analysis Services instance when the solution is opened again?
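For context, a partition-creation command of the kind described above looks roughly like this; whether the generated script uses XMLA as below or the newer JSON/TMSL syntax depends on the model's compatibility level, and all database, measure group, and query names here are placeholders:

    <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <ParentObject>
        <DatabaseID>MyTabularModel</DatabaseID>
        <CubeID>Model</CubeID>
        <MeasureGroupID>Sales</MeasureGroupID>
      </ParentObject>
      <ObjectDefinition>
        <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <ID>Sales 2017 P01</ID>
          <Name>Sales 2017 P01</Name>
          <!-- Each partition covers a two-week slice of the fact table -->
          <Source xsi:type="QueryBinding">
            <DataSourceID>SqlServerSource</DataSourceID>
            <QueryDefinition>
              SELECT * FROM dbo.FactSales
              WHERE SaleDate &gt;= '20170101' AND SaleDate &lt; '20170115'
            </QueryDefinition>
          </Source>
        </Partition>
      </ObjectDefinition>
    </Create>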
Additionally, if I continue developing the model in SSDT, all the changes I made via SSMS will be overwritten once I deploy it again. Is there a way to avoid that?
Creating partitions manually in SSDT can be very painful...
I've managed to create a script that will automate it, but not in SSDT.
Any suggestions?
As userfl89 already pointed out, any partitions that you create in SSMS need to be "backported" into your SSDT project, for example by using the "Import From Server (Tabular)" option when creating a new project. Otherwise, you risk losing the partitions (and the data contained in them) when deploying from SSDT.
Alternatively, you can use BISM Normalizer - a plugin for Visual Studio - to merge changes (such as partitions) back and forth between SSDT and the deployed database.
There's also the Analysis Services Deployment Wizard, which takes the contents of your project's \bin\ folder and lets you deploy to a database while specifying that you don't want to overwrite existing partitions.
Lastly, if you haven't already, I would recommend taking a look at Tabular Editor. It's an alternative to SSDT for developing the model, so there will be some learning involved of course, but the good news is that you can do partial deployments, in order to avoid affecting the partitions on the already deployed database.
The database that you're accessing in SSDT is your workspace database. The workspace database is essentially a local copy of the tabular model. The partitions you added to the model in SSMS were created; the workspace database is just out of sync. You can overwrite your workspace database with the current version of the model by deleting/moving the files used in your local SSAS project, then creating a new Analysis Services project in SSDT and using the "Import From Server (Tabular)" option, selecting the current version of the tabular model. This will create a new workspace database from the current version of the model. When doing this, make sure that the files you delete or move belong to your local project, not to the actual model. If you need to verify the location of the files used by the model, the DataDir property of the SSAS instance in SSMS shows this file path.
I created a schema in Schema Workbench and published it with no errors, but when I log in to the BI Server as the standard admin user and choose New -> JPivot, it displays the name of the schema I created but it does not display the cube. For reference, the error I get in catalina.out is:
17:11:45,174 ERROR [PentahoDataSourceResolver] PentahoXmlaServlet.ERROR_0002 - IDatasourceService.UNABLE_TO_INSTANTIATE_OBJECT
org.pentaho.platform.api.data.DBDatasourceServiceException: javax.naming.NameNotFoundException: Name [Esquema Salario] is not bound in this Context. Unable to find [Esquema Salario].
"Name [Esquema Salario] is not bound in this Context" errors usually appear if you use a JNDI name that is not defined on your system. So I assume this is the name of the datasource that you reference while publishing Mondrian schema files to the BI server.
The XML file with the Mondrian schema definition generated by Schema Workbench does not contain any information about how to connect to the database, so you need to specify these details when you upload your schema file to the BI server (this is done in step 4 below).
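For illustration, a minimal Schema Workbench output looks roughly like this; note that it names only tables and columns, never a JDBC URL, user, or password (all names below are placeholders):

    <Schema name="SalaryAnalysis">
      <Cube name="Salary">
        <Table name="fact_salary"/>
        <Dimension name="Department" foreignKey="department_id">
          <Hierarchy hasAll="true" primaryKey="department_id">
            <Table name="dim_department"/>
            <Level name="Department" column="department_name"/>
          </Hierarchy>
        </Dimension>
        <Measure name="Total Salary" column="salary" aggregator="sum"/>
      </Cube>
    </Schema>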
But first you have to create the connection itself (steps 1-2):
1. Create a new JDBC datasource.
2. Define the connection parameters.
If cubes still don't appear after these steps, you can republish your cube:
3. Follow the same steps as in step 1, but select "Analysis" instead of "JDBC" at the end.
4. Upload the XML file generated by Schema Workbench and select the datasource you created in step 2.
If the cube still does not appear, check your log again. If you see the same "Name is not bound" error, try restarting your BI server application (new connections usually get recognized immediately, but if you previously had a connection with the same name, you may need to restart Tomcat).
If that does not work, check the log files once again; you will most likely find a different error in that case.
I had the same problem as the OP (blank screen when clicking New View) with the latest version of the Pentaho BI Server, 7.1 (latest at the moment), and even with the 6.0 version, using the Pivot4J 1.0 SNAPSHOT plugin (latest as of today) and Schema Workbench 3.14 (latest as of today).
In line with the OP, my catalina.out log was also spitting out "Name [DatasourceName] is not bound in this Context. Unable to find [DatasourceName]."
After several trials and errors I noticed the problem showed up when I checked the "Register the XMLA Data Source" option while publishing the schema from Schema Workbench. So to fix the problem I simply unchecked it before publishing.
Another way to fix this is to go to the Manage Data Sources option on the BI server, choose Import Analysis, select the schema created by Schema Workbench, and manually set the data source parameter EnableXmla to false before saving the changes. Now the schema should show up when clicking Create New > Pivot4J View.
Scenario: I have an IBM Domino web application (.NSF) that contains several divisions (refer to the image). I want to migrate the content (such as the blogs, main content, and articles) to my WebSphere Portal. What is the first thing to do?
My understanding of migration is database-to-database migration, but I don't know where to find the database for my content, given that the database in a Domino application is built in. Thanks in advance!
Refer to this image: http://postimg.org/image/irxsjdk8p/
The database is the NSF file itself, although the information might be aggregated from several NSF files, so you'll have to do some analysis. The first thing to do is to use the Notes client to identify the documents that contain the content you want to migrate, check their properties to determine which NSF file they are stored in and which form they are based on, and then use Domino Designer to open the NSF file (or files, possibly) and analyze the form to determine which items contain the values that need to be migrated.
I have a question: in Pentaho 5.1, how can I deploy a cube without using Schema Workbench? I'm kind of a newbie in Pentaho.
Is there a command line? Java code? Or something like that?
Thanks a lot!
You can do that in the User Console.
There is a menu entry, Manage Data Sources. There you can upload your XML and point it to a database connection.
First, I assume you have installed the BA Server and have created at least a fact table.
In case you don't know what a fact table is, or someone else is reading this answer, you can find a brief explanation here.
Of course, it's better to have a full star schema. You cannot create a snowflake schema inside the Pentaho User Console; you can create one with Pentaho Schema Workbench or by manually editing mondrian.xml.
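A snowflake dimension in mondrian.xml is expressed with a Join element that spreads one hierarchy over two tables. A rough sketch, with table and column names borrowed from the classic FoodMart example:

    <Dimension name="Product" foreignKey="product_id">
      <Hierarchy hasAll="true" primaryKey="product_id" primaryKeyTable="product">
        <!-- Snowflake: the hierarchy joins the product table to product_class -->
        <Join leftKey="product_class_id" rightKey="product_class_id">
          <Table name="product"/>
          <Table name="product_class"/>
        </Join>
        <Level name="Product Family" table="product_class" column="product_family"/>
        <Level name="Product Name" table="product" column="product_name"/>
      </Hierarchy>
    </Dimension>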
Make sure that your JDBC driver is inside the BA Server's driver directory. Then open the Pentaho User Console (by default at localhost:8080/pentaho or yourdomain.name:8080/pentaho), log in as administrator, and:
File -> New -> Data Source
Choose the data source type
Choose the fact table and define connections to dimensions (if they exist)
Choose to modify the cube at the end of the data source wizard
I have a solution that contains multiple reporting projects (one per target deployment folder - I think this is the only way to achieve this effect, at least until I abandon Visual Studio for report deployment).
I want to specify my data source information "once and only once" for all these reports.
So far, I have created a separate reporting project that contains my shared data source. If I deploy things to a reporting server in the right order and offer sufficient prayers to appropriate gods, the reports seem to link up to the shared data source there and run (at least via the Report Manager in IE).
When I am developing a report, though, I can no longer "Preview" to try it out locally - I now must deploy it to a report server to try running it. This is a hassle.
Is my only recourse to add a whole bunch of copies of a data source (pointing at my development database), one in each project, set those not to deploy off my machine, and (probably) exclude them from source control?
A technique (dirty trick?) I am playing with now is to copy my data source (.rds) into each project, close Visual Studio, then in the underlying files/folders:
Delete the copied .rds from my report projects (leaving only the one copy in my Data Sources project)
In each report project's project file (Foo.rptproj), change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Sources\My Shared Data Source.rds (see the fragment below)
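In an older-style .rptproj file (plain XML) the resulting fragment looks roughly like this; the exact surrounding elements can differ between Visual Studio versions, and the Name element here is only illustrative:

    <Project>
      <DataSources>
        <ProjectItem>
          <Name>My Shared Data Source.rds</Name>
          <!-- Point at the single copy kept in the Data Sources project -->
          <FullPath>..\Data Sources\My Shared Data Source.rds</FullPath>
        </ProjectItem>
      </DataSources>
    </Project>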
This way all reporting projects reference the same underlying file on the filesystem, so they share a single data source definition, but each project also kind of has a "local" shared data source, so Visual Studio is kept happy.
Regarding source control: there is still only one copy of the .rds checked in, so we're not polluting the code base with lots of icky duplicates; the changes to the .rptproj files can be checked in, so we're not forcing developers into unnatural source-control gymnastics (selective partial commits etc.) to maintain a sane master copy.
Each reporting project will try to deploy this data source, though I've forbidden the overwriting of existing data sources on the server, so it's not too big a deal . . . and I suppose if I intended to overwrite the server's data source definition, it wouldn't really matter whether I overwrote it once or ten times with the same .rds.
Disclaimer: this is still an experiment. I don't have experience using this technique in practice yet, so I can't go so far as to actually recommend it.
Woody,
What we have tended to do is:
On the server, have a folder called "DataSources", which is hidden from the users. All of the data sources live in there.
For each reporting project in VS there will be a folder, also called "DataSources", but this time it will only contain the data source for this report.
As long as the folder structure is the same (i.e. report and data source have the same corresponding folder level on server and in VS) this seems to work for us.