Mondrian/Saiku - Closure Table - Null Pointer Exception - Pentaho

I am currently doing a PoC and facing a problem with a closure table. I am using Saiku CE and the database is Postgres. Everything works until I add the closure table.
If I remove the closure-table hierarchy I don't get any error; if I keep it, I get the error. I created my demo schema from the Foodmart.xml that I downloaded from Saiku itself.
Some forums suggest this is an open bug in Mondrian, but if it is, why does the same syntax work with Foodmart? Is it a problem with Saiku CE? If I use Saiku EE (trial version) for my PoC, will it work? Here is the log output, followed by the dimension definition from my schema:
11:54:17,900 WARN [RolapUtil] Mondrian: Warning: JDBC driver sun.jdbc.odbc.JdbcOdbcDriver not found
11:54:17,902 WARN [RolapUtil] Mondrian: Warning: JDBC driver oracle.jdbc.OracleDriver not found
11:54:18,728 ERROR [SecurityAwareConnectionManager] Error connecting: ersdemods
java.lang.NullPointerException
<Dimension name="Organisation" key="Org Id">
<Attributes>
<Attribute name="Par Org" table="org_organisation" keyColumn="parent_id" />
<Attribute name="Org Id" table="org_organisation" keyColumn="id" nameColumn="name" />
<Attribute name='Country Name' table='org_organisation' keyColumn='country' hasHierarchy='false' />
<Attribute name='County Name' table='org_organisation' hasHierarchy='false'>
<Key>
<Column name='country' />
<Column name='county' />
</Key>
<Name>
<Column name='county' />
</Name>
</Attribute>
<Attribute name='City Name' table='org_organisation' keyColumn='city' hasHierarchy='false' />
</Attributes>
<Hierarchies>
<Hierarchy name="Organisations" allMemberName="All Organisations">
<Level attribute="Org Id" parentAttribute="Par Org" nullParentValue="NULL">
<Closure table='organisation_closure' parentColumn="closure_parent_org_id" childColumn="org_id" />
</Level>
</Hierarchy>
<Hierarchy name='Oragnisation Location' allMemberName='All Org Location'>
<Level attribute='Country Name' />
<Level attribute='County Name' />
<Level attribute='City Name' />
</Hierarchy>
</Hierarchies>
</Dimension>
Regards,
Puneet Tayal

Managed to fix this issue. The dimension definition was correct; however, a dimension with a closure table has to be declared within the cube. If you declare it outside the cube as a shared dimension, you get this unhelpful NullPointerException.
It looks like a bug in Mondrian 4.
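For reference, here is roughly the layout that worked; treat it as a sketch, since the cube name, fact table, measure, and foreign-key column below are placeholders rather than the actual ones from my schema:
<Cube name="ERS Demo">
  <Dimensions>
    <Dimension name="Organisation" key="Org Id">
      <!-- the same Attributes and Hierarchies as above, including the Closure level -->
    </Dimension>
  </Dimensions>
  <MeasureGroups>
    <MeasureGroup name="Organisation Facts" table="org_fact">
      <Measures>
        <Measure name="Org Count" column="id" aggregator="count" />
      </Measures>
      <DimensionLinks>
        <ForeignKeyLink dimension="Organisation" foreignKeyColumn="org_id" />
      </DimensionLinks>
    </MeasureGroup>
  </MeasureGroups>
</Cube>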
Regards,
Puneet Tayal

Related

How to get SQL with parameter values on an exception

Hard to believe, but I can't seem to find a straight answer for this: how can I get the SQL statement, including the parameter values, when the statement generates an exception, and only when it generates an exception? I know how to log the statement plus parameters for every SQL statement generated, but that's way too much. When there's a System.Data.SqlClient.SqlException, though, it only provides the SQL, not the parameter values. How can I catch that at a point where I have access to that data, so that I can log it?
Based on a number of responses to various questions (not just mine), I've cobbled something together that does the trick. I think it could be useful to others as well, so I'm including a good deal of it here:
The basic idea is to:
1. Have NH log all queries, pretty-printed and with the parameter values in situ.
2. Throw all those logs out except the one just prior to the exception.
I use Log4Net, and the setup is like this:
<?xml version="1.0"?>
<log4net>
<appender name="RockAndRoll" type="Util.PrettySqlRollingFileAppender, Util">
<file type="log4net.Util.PatternString" >
<conversionPattern value="%env{Temp}\\%property{LogDir}\\MyApp.log" />
</file>
<DatePattern value="MM-dd-yyyy" />
<appendToFile value="true" />
<immediateFlush value="true" />
<rollingStyle value="Composite" />
<maxSizeRollBackups value="10" />
<maximumFileSize value="100MB" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %-5level %logger - %message%newline" />
</layout>
</appender>
<appender name="ErrorBufferingAppender" type="log4net.Appender.BufferingForwardingAppender">
<bufferSize value="2" />
<lossy value="true" />
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR" />
</evaluator>
<appender-ref ref="RockAndRoll" />
<Fix value="0" />
</appender>
<logger name="NHibernate.SQL">
<additivity>false</additivity>
<appender-ref ref="ErrorBufferingAppender" />
<level value="debug" />
</logger>
<logger name="error-buffer">
<additivity>false</additivity>
<appender-ref ref="ErrorBufferingAppender" />
<level value="debug" />
</logger>
<root>
<level value="info" />
<appender-ref ref="RockAndRoll" />
</root>
</log4net>
The NHibernate.SQL logger logs all queries to the ErrorBufferingAppender, which keeps throwing them out and saves only the last one in its buffer. When I catch an exception I log one line at ERROR level to logger error-buffer, which passes it to ErrorBufferingAppender which -- because it's at ERROR level -- pushes it, along with the last query, out to RockAndRoll, my RollingFileAppender.
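The exception-side code is roughly this (a sketch; the class, method, and delegate names are arbitrary, but the logger name must match the "error-buffer" logger configured above):
using System;
using log4net;

public static class ErrorBufferLogging
{
    // The name must match the "error-buffer" logger declared in the
    // log4net configuration above.
    private static readonly ILog ErrorBufferLog = LogManager.GetLogger("error-buffer");

    public static void Run(Action nhibernateWork)
    {
        try
        {
            nhibernateWork();
        }
        catch (Exception ex)
        {
            // Logging at ERROR level trips the LevelEvaluator on the
            // BufferingForwardingAppender, which flushes its buffer (the last
            // SQL statement NHibernate logged) together with this entry to
            // the RockAndRoll appender.
            ErrorBufferLog.Error("Query failed", ex);
            throw;
        }
    }
}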
I implemented a subclass of RollingFileAppender called PrettySqlRollingFileAppender (which I'm happy to provide if anyone's interested) that takes the parameters from the end of the query and substitutes them inside the query itself, making it much more readable.
If you are using NHibernate for querying the DB (as the tag on your question suggests), and your SQL dialect/driver relies on ADO, you should get a GenericADOException from the failing query.
Its Message property normally already includes the parameter values.
For example, executing the following failing query (provided you have at least one row in the DB):
var result = session.Query<Entity>()
.Where(e => e.Name.Length / 0 == 1);
Yields a GenericADOException with message:
could not execute query
[ select entity0_.Id as Id1_0_, entity0_.Name as Name2_0_ from Entity entity0_ where len(entity0_.Name)/#p0=#p1 ]
Name:p1 - Value:0 Name:p2 - Value:1
The two literals, 0 and 1, in the query have been parameterized, and their values are included in the message (with an index-base mismatch: in the Hibernate query they are 1-based, while in the SQL query with my setup they end up 0-based).
So there is nothing special to do to get them: just log the exception message.
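For instance, something along these lines is enough (a sketch, assuming an open ISession named session, an entity class Entity, and whatever logger you already use as log; GenericADOException lives in NHibernate.Exceptions):
// using System.Linq; using NHibernate.Linq; using NHibernate.Exceptions;
try
{
    var result = session.Query<Entity>()
        .Where(e => e.Name.Length / 0 == 1)
        .ToList();
}
catch (GenericADOException ex)
{
    // ex.Message already contains the generated SQL and the parameter values.
    log.Error(ex.Message, ex);
    throw;
}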
Have you just missed it, or were you asking something else?
Your question was not explicit enough, in my opinion. You should include an MVCE; it would have shown me more precisely in which cases you were not able to get those parameter values.

Execution of Fetch XML using Web API Dynamics 365

I am using this approach for paging using Fetch XML within Web API.
The Web API works perfectly when we use a simple fetch: it returns a paging cookie along with the results if there are more records. But when we use a complex FetchXML (one with link entities), the paging cookie it returns is empty, e.g.:
<cookie pagenumber="2" pagingcookie="" istracking="False" />
Here you can see there is nothing in pagingcookie.
Fetch XML used for query:
<fetch mapping="logical" version="1.0" output-format="xml-platform" count="10" page="1" >
<entity name="invoicedetail" >
<attribute name="invoicedetailid" />
<attribute name="uomid" />
<attribute name="quantity" />
<attribute name="manualdiscountamount" />
<attribute name="priceperunit" />
<attribute name="extendedamount" />
<filter>
<condition entityname="invoice" attribute="customerid" operator="eq" value="{5A8F8B46-2443-E511-80E3-3863BB351E10}" />
</filter>
<link-entity name="invoice" from="invoiceid" to="invoiceid" >
<attribute name="invoiceid" />
<attribute name="invoicenumber" />
<attribute name="description" />
<attribute name="totalamount" />
<order attribute="ss_postingdate" descending="true" />
</link-entity>
</entity>
</fetch>
If we remove the link-entity, the response starts returning the exact paging cookie, which can be read from the @Microsoft.Dynamics.CRM.fetchxmlpagingcookie annotation.
Is there any way to get the exact paging cookie in complex scenarios?
I've seen this happen before in the standard organization service. I would suggest just paging without the paging cookie - this resolved the issue for me in the organization service.
E.g. <fetch mapping="logical" page="1" count="50">
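For example, to request the next page of the query in the question you would just increment the page attribute and resend the same FetchXML, with no paging-cookie attribute at all (a sketch; the entity body stays exactly as in the original query):
<fetch mapping="logical" version="1.0" output-format="xml-platform" count="10" page="2" >
  <!-- same <entity>, <filter> and <link-entity> content as in the question -->
</fetch>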

Dynamics CRM 2015 - Retrieve all Contact that are in any way connected to Opportunity

I need to present, in a dialog, all Contacts that are connected to an Opportunity in any way: for example when an opportunity has a Sales Team, Stakeholders, Owners, etc. Something like the Active Connections Associated View. I need FetchXml or another way to find all the Contacts that are in any way connected to the currently opened Opportunity.
Your FetchXML query will look something like this:
<fetch mapping='logical' version='1.0'>
<entity name='myentity'>
<filter>
<condition attribute='myentityid' operator='eq' value='myvalue' />
</filter>
<link-entity name='secondentity' from='stsr_materialid' to='stsr_material_casinglookup' alias='c' link-type='outer'>
<attribute name='stsr_name' />
<attribute name='stsr_code_aisi' />
<attribute name='stsr_code_astm' />
<attribute name='stsr_code_din' />
<attribute name='stsr_code_e' />
<attribute name='stsr_code_en' />
</link-entity>
</entity>
</fetch>
If you need more help, please provide more context.
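For the concrete case in the question (Contacts connected to an Opportunity through Connection records), a sketch against the connection entity might be a starting point; the opportunity GUID below is a placeholder, and the record1/record2 direction may need adjusting since CRM stores connections in both directions:
<fetch mapping="logical" version="1.0" distinct="true">
  <entity name="contact">
    <attribute name="fullname" />
    <link-entity name="connection" from="record2id" to="contactid" alias="conn">
      <filter>
        <condition attribute="record1id" operator="eq" value="{OPPORTUNITY-GUID}" />
      </filter>
    </link-entity>
  </entity>
</fetch>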

How to get a cross join in FetchXML queries in CRM 2011?

1. This is the first time I have written queries like this, and I don't know how to get a cross join.
2. I understand this example is an inner join, isn't it? I take it from the line from="accountid" to="customerid" that it is treated as an inner join. So how would I make it a cross join?
3. I copied this example from http://mscrmshop.blogspot.mx/2012/09/outer-joins-in-fetch-xml-queries-in.html
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
<entity name="opportunity">
<attribute name="name" />
<attribute name="customerid" />
<attribute name="estimatedvalue_base" />
<order attribute="name" descending="false" />
<link-entity name="account" from="accountid" to="customerid" visible="false" link-type="outer" alias="accountid">
<attribute name="telephone1" />
</link-entity>
</entity>
</fetch>
FetchXML does not support a cross join.
An alternative solution is suggested in the link below; please see whether it can be useful for your case.
http://social.microsoft.com/Forums/en-US/e3ee734c-81d3-4277-b54f-c2e46bb20e0d/crm-2011-sql-cross-join-equivalent-fetchxml-in-report?forum=crmdevelopment

MDX query fails in Pentaho BA Server

My database connection is MySQL through JDBC, standard InnoDB.
From this, I created a very simple data source with two dimensions and one measure.
The two dimensions are:
location (string)
ddate (date/time, at least in Mysql)
The measure is the same "location" element, set to aggregate on count
I am trying to get a fairly simple count of items by location, by day of the week (either mon-sun or 1-7 is fine)
I have tried dozens of variations on the following, including "with member...as..." for extracting the day of week, all met with failure.
SELECT
NON EMPTY Hierarchize({ datepart("d", cdate([diDate.hDate].[mDate])) }) ON COLUMNS,
NON EMPTY {Hierarchize({[diLocation.hLocation].[mLocation].Members})} ON ROWS
FROM [k_olap]
Any help would be greatly appreciated. I've been banging my head against this for hours, and it seems like it should be simple and straightforward to extract just a single portion of the date without needing to build a full time dimension in the schema, which the Pentaho BA web server doesn't facilitate.
Here's the XML schema generated by Pentaho BA from the MySQL data source:
<Schema name="k_olap">
<Dimension name="diDate">
<Hierarchy name="hDate" hasAll="true" primaryKey="ID">
<Table name="post" schema="kn"/>
<Level name="mDate" uniqueMembers="false" column="ddate">
</Level>
</Hierarchy>
</Dimension>
<Dimension name="diLocation">
<Hierarchy name="hLocation" hasAll="true" primaryKey="ID">
<Table name="post" schema="kn"/>
<Level name="mLocation" uniqueMembers="false" column="location" type="String">
</Level>
</Hierarchy>
</Dimension>
<Cube name="k_olap">
<Table name="post" schema="kn"/>
<DimensionUsage name="diDate" source="diDate" foreignKey="ID"/>
<DimensionUsage name="diLocation" source="diLocation" foreignKey="ID"/>
<Measure name="mesLocation" column="location" aggregator="count" formatString="Standard"/>
</Cube>
</Schema>
Create a proper date dimension and hierarchy which has the day of week as a column in the table, then use that.
See here:
http://type-exit.org/adventures-with-open-source-bi/2010/07/a-simple-date-dimension-for-mondrian-cubes/
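Applied to the schema in the question, such a date dimension might look roughly like this; dim_date and its columns (date_id, day_of_week, ddate) are assumed names for a date table you would create and populate yourself:
<Dimension name="diDate">
  <Hierarchy name="hDate" hasAll="true" primaryKey="date_id">
    <Table name="dim_date" schema="kn"/>
    <Level name="Weekday" column="day_of_week" type="Numeric" uniqueMembers="true"/>
    <Level name="mDate" column="ddate" type="Date" uniqueMembers="false"/>
  </Hierarchy>
</Dimension>
The cube would then reference it through a DimensionUsage whose foreignKey points at the post table's date key, and the MDX could simply put [diDate.hDate].[Weekday].Members on columns instead of the datepart()/cdate() call.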