Schema published but not seen in BI Server with jpivot - pentaho

I created a schema in Schema Workbench and published it with no errors, but when I log in to the BI Server as the standard admin user and choose New -> JPivot, it displays the name of the schema I created but it does not display the cube. For reference, the error I get in catalina.out is:
17:11:45,174 ERROR [PentahoDataSourceResolver] PentahoXmlaServlet.ERROR_0002 - IDatasourceService.UNABLE_TO_INSTANTIATE_OBJECT
org.pentaho.platform.api.data.DBDatasourceServiceException: javax.naming.NameNotFoundException: Name [Esquema Salario] is not bound in this Context. Unable to find [Esquema Salario].

Name [Esquema Salario] is not bound in this Context errors usually appear if you reference a JNDI name that is not defined on your system. So I assume this is the name of the datasource you referenced while publishing the Mondrian schema file to the BI server.
The XML file with the Mondrian schema definition generated by Schema Workbench does not contain any information about how to connect to the database, so you need to specify those details when you upload your schema file to the BI server (this is done in step 4 below).
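For illustration, a bare schema file only references logical table and column names; there is no JDBC URL, user or password anywhere in it. A minimal sketch (all names below are made up):

<Schema name="SalarioSchema">
  <Cube name="Salario">
    <!-- Only a logical table name; the actual connection is resolved by the server -->
    <Table name="fact_salario"/>
    <Measure name="Total" column="salario" aggregator="sum"/>
  </Cube>
</Schema>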
But first you have to create the connection itself (steps 1-2):
1. Create a new JDBC datasource.
2. Define the connection parameters.
3. If cubes still don't appear after these steps, republish your cube: follow the same steps as in step 1, but select "Analysis" instead of "JDBC" at the end.
4. Upload the XML file generated by Schema Workbench and select the datasource that you created in step 2.
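Alternatively, if you prefer to define the connection as a container-level JNDI resource, the resource name has to match exactly the datasource name the schema was published against. A minimal sketch for Tomcat (the resource name, driver, URL and credentials below are placeholders, not values from the question):

<!-- Hypothetical entry in tomcat/webapps/pentaho/META-INF/context.xml -->
<Resource name="jdbc/SalarioDS" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/salarios"
          username="dbuser" password="dbpass"
          maxActive="20" maxIdle="5" maxWait="10000"/>

Tomcat binds resources from context.xml at startup, which is why a restart is usually needed after adding one.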
If the cube still does not appear, check your log again. If you see the same Name is not bound error, try restarting your BI server application (new connections are usually recognized immediately, but if you had a connection with the same name before, you might need to restart Tomcat).
If that does not work, then once again check the log files; I guess you'll find a different error in that case.

I had the same problem as the OP (a blank screen when clicking New View) with Pentaho BI Server 7.1 (the latest at the moment), and even with the 6.0 version, the Pivot4J SNAPSHOT 1.0 plugin (latest as of today), and Schema Workbench 3.14 (latest as of today).
And, in line with the OP, my catalina.out log was also spitting out Name [DatasourceName] is not bound in this Context. Unable to find [DatasourceName].
After several trials and errors I noticed the problem showed up when I checked "Register the XMLA Data Source" while publishing the schema from Schema Workbench. So to fix the problem I simply unchecked it before publishing.
Another way to fix this is to go to the Manage Data Sources option on the BI server, choose Import Analysis, select the schema created by Schema Workbench, and manually set the data source parameter EnableXmla to false before saving the changes. The schema should then show up when clicking Create New > Pivot4J View.

Related

How do I determine the default HSQL database connection information in Talend Studio?

I am new to Talend Studio. I am doing data profiling using Talend Studio Enterprise version 7.3. Generating PDF reports has gone smoothly.
Our group would like to produce some output different from what is automatically provided, so we would like to query the underlying reporting database. If I go to the Generated Report Settings pane, there is a Database Connection Settings area with a Db Type of HSQL. Clicking the Check button causes a popup that says "Connection successful and Datamart well configured".
Great! Unfortunately, I have not been able to find the name of the database, so that I could connect to it and inspect it with an IDE. For example, I've read that I should be able to connect to the HSQL database using its built-in GUI query tool, DatabaseManager, like so:
java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManager
and then setting the config to:
jdbc:hsqldb:file:databaseName
But what is the "databaseName"? Is it specified in a properties file or XML file somewhere?

SSDT tabular model data loading / ID error

I am new to Visual Studio, and I am in the process of building my first tabular model in it; I have been using Power BI for a year or so.
I have an Azure SQL database set up with data tables. I can access the database fine through SSMS, Power BI, etc. I can also access the data fine when using the data model on the workspace server rather than an integrated workspace in VS.
When using the model in integrated workspace mode and I use "Get Data" through Power Query in my tabular project in Visual Studio, I can log in to the database fine, and I can preview the various data tables and open them in Power Query to transform them. However, when I try to import the data, it cannot get past stage one of the processing, and I get the following error:
"Failed to save modifications to the server. Error returned: 'OLE DB or ODBC error: We're sorry, an error occurred during evaluation.'"
This leaves me with a table that has column headers, but no actual rows of data.
When I revert the data model back to the workspace server (i.e. integrated workspace set to false), I can upload / refresh the data.
So I was happily building my model on the workspace server. HOWEVER, when I came to adding roles and users to those roles, I cannot use the workspace server, or I get an ID error, as I am using Azure AD; see below:
An error occurred while opening the model on the workspace database. Reason: Failed to save modifications to the server. Error returned: 'ID cannot be specified for Azure Analysis Services role member: 'member#domain'.
This ID error is fixed when I use an integrated workspace, as recommended, but then I can't load the data in.
So I am stuck between:
an integrated workspace with no data, and
a workspace server without the ability to add Azure AD IDs.
Any help with either of the issues would be much appreciated.
Thanks,
Laurence
I work with VS 2017 and experience exactly the same issue. I use SSDT compatibility level 1400.
It appears that simply running Visual Studio as administrator solves the issue with the integrated server.
I found this solution accidentally; I saw earlier that some people had solved the inability to connect to the workspace server without admin rights.

mondrian.xml schema files pentaho location

I'm creating cubes (XML schemas) via Schema Workbench or the Ivy Schema Editor.
When I publish them, I would like to know where the schemas (mondrian.xml files) are actually saved. What is the location of these files?
Thanks,
Which version of Pentaho BA server are you using? Pre or Post 5.0?
Pre 5.0: you choose the file path when publishing. The path is under your ${BISERVER}/pentaho-solutions directory.
5.0 and beyond: there's no physical file; it's stored only in Pentaho's Jackrabbit repository.
If you create any Analysis in Pentaho 5.x and you have a test server and a prod server, don't export Mondrian schemas from one to the other. For some mystical reason you won't be able to get rid of them later.
This was my case when I started searching for pentaho-solutions/system/olap/datasources.xml in order to delete the problematic Mondrian schema. That file just doesn't exist anymore.
All data is now saved with the help of Jackrabbit. Jackrabbit stores your Mondrian schemas, together with all other reports and analyses, in the database (the path is given in the Jackrabbit preference file). But in the database you can only see their IDs, so there is no way to get rid of a single object: you either leave everything or delete everything by truncating the table. The main problem is that the same table holds not only schemas, but also all the other reports you have uploaded to the server.

Deploy a mondrian schema in pentaho 5.1 without schema workbench

I have a question: in Pentaho 5.1, how can I deploy a cube without using Schema Workbench? I'm kind of a newbie with Pentaho.
Is there a command line? Java code? Or something like that...
Thanks a lot!
You can do that in the User Console.
There is a Manage Data Sources menu. There you can upload your XML and refer to a database connection for it.
First, I suppose you have installed the BA Server and have made at least a fact table.
In case you don't know what a fact table is, or someone else is reading this answer, you can find a brief explanation here.
Of course, it's better to have a full star schema. You cannot create a snowflake inside the Pentaho User Console; you can create one with Pentaho Schema Workbench or by manually editing mondrian.xml, as sketched below.
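For reference, a hand-edited snowflake dimension in mondrian.xml might look roughly like this; every schema, table and column name here is hypothetical:

<Schema name="SalesSchema">
  <Cube name="Sales">
    <Table name="fact_sales"/>
    <Dimension name="Product" foreignKey="product_id">
      <Hierarchy hasAll="true" primaryKey="product_id" primaryKeyTable="dim_product">
        <!-- The Join is what makes this a snowflake rather than a plain star -->
        <Join leftKey="category_id" rightKey="category_id">
          <Table name="dim_product"/>
          <Table name="dim_category"/>
        </Join>
        <Level name="Category" table="dim_category" column="category_name"/>
        <Level name="Product" table="dim_product" column="product_name"/>
      </Hierarchy>
    </Dimension>
    <Measure name="Sales Amount" column="amount" aggregator="sum"/>
  </Cube>
</Schema>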
Make sure that your JDBC driver is inside the BA Server driver directory. Then open the Pentaho User Console, by default at localhost:8080/pentaho or yourdomain.name:8080/pentaho, and log in as an administrator:
1. File -> New -> Data Source
2. Choose the data source type.
3. Choose the fact table and define connections to the dimensions (if they exist).
4. Choose to modify the cube at the end of the data source wizard.

SSAS Cube processing

I have deployed my SSAS solution to production. On the production server I want to process my cube, but when I right-click on the SSAS cube and choose "Process", I receive the following error:
The 'Database' with 'ID' = 'XXX' doesn't exist in the collection.
Has anyone encountered this error in Microsoft SSAS? If so, can anyone tell me what to do to resolve it?
Check what the ID of the database is by right-clicking on the database and selecting Properties. Check the equivalent in Visual Studio and confirm that it matches. If you have renamed the Analysis Services project (maybe for a backup), the ID of the database does not get renamed and sometimes gets messed up in the XML file (see the sketch below).
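For illustration, the ID and Name sit side by side in the deployment XML (for example the project's .database / .asdatabase file), roughly like this, with hypothetical names:

<Database xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- The ID can keep the original project name after a rename -->
  <ID>SalesCubeBackup</ID>
  <!-- The Name is what you see in SSMS -->
  <Name>SalesCube</Name>
</Database>

If the ID in the file doesn't match what the Properties dialog shows on the server, that mismatch is the likely culprit.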
Try reprocessing the dimensions on their own and then the cube, as sketched below. Failing that, if this is the first process (i.e. the cube is not live), try deleting the database and redeploying from Visual Studio.
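If you'd rather script the processing than use the UI, a minimal XMLA sketch you can run from an SSMS XMLA query window (both IDs are placeholders):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesCube</DatabaseID>
    <!-- Process each dimension first; then repeat with <CubeID> for the cube itself -->
    <DimensionID>Dim Product</DimensionID>
  </Object>
  <Type>ProcessFull</Type>
</Process>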
You can redeploy the OLAP database from SSDT with a different name; before you do, you just need to change the database name and set the Processing Option to Full. You can set these options under the project's Configuration Properties -> Deployment.
I had the exact same problem. I solved it by editing the roles in my SSAS database and checking the boxes depending on what you want your job to be able to do.
Then you can process your cubes with your SSIS jobs.
It can be either roles or Kerberos; I got a similar error when Kerberos authentication wasn't set up on the cube server to interact with the database server.