Mondrian/Pivot4j error java.lang.IndexOutOfBoundsException with closure table - pentaho

This is the scenario:
Pentaho 5.4.0 CE
Pivot4j Plugin
If, in my Mondrian XML schema, I insert a dimension with a parent-child hierarchy defined using a closure table and the following options, it works well (that is, I can see the result table and drill down through the parent-child hierarchy elements):
attribute "nameColumn" defined (references a fact table field)
attribute "captionColumn" empty (no field assigned)
If I try to change the two attribute definitions to:
attribute "nameColumn" defined (references a fact table field)
attribute "captionColumn" defined (references another fact table field)
I get the error from Pivot4j: java.lang.IndexOutOfBoundsException: Index: 2, Size: 2. It happens only with dimensions defined using a closure table; with standard dimensions I can set both attributes at the same time with no error.
Any idea how I can solve this? It's a problem because I need to use the captionColumn attribute, which contains the label value for the end user, while nameColumn contains a nickname. I have the same problem in Pentaho 6.0.
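For context, the level definition in question looks roughly like the fragment below. This is only an illustrative sketch modeled on Mondrian's documented closure-table pattern; the table and column names (employee, employee_closure, employee_nick, employee_label) are stand-ins, not taken from the actual schema:

```xml
<Dimension name="Employees" foreignKey="employee_id">
  <Hierarchy hasAll="true" allMemberName="All Employees" primaryKey="employee_id">
    <Table name="employee"/>
    <!-- nameColumn alone works; adding captionColumn as well triggers the exception -->
    <Level name="Employee" uniqueMembers="true"
           column="employee_id" parentColumn="supervisor_id"
           nameColumn="employee_nick" captionColumn="employee_label">
      <Closure parentColumn="supervisor_id" childColumn="employee_id">
        <Table name="employee_closure"/>
      </Closure>
    </Level>
  </Hierarchy>
</Dimension>
```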
This is what I have in the pentaho log:
... Caused by: java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at mondrian.rolap.SqlMemberSource.makeMember(SqlMemberSource.java:1072)
at mondrian.rolap.SqlMemberSource.getMemberChildren2(SqlMemberSource.java:1004)
at mondrian.rolap.SqlMemberSource.getMemberChildren(SqlMemberSource.java:881)
at mondrian.rolap.SqlMemberSource.getMemberChildren(SqlMemberSource.java:854)
at mondrian.rolap.SmartMemberReader.readMemberChildren(SmartMemberReader.java:249)
at mondrian.rolap.SmartMemberReader.getMemberChildren(SmartMemberReader.java:211)
at mondrian.rolap.RolapCubeHierarchy$CacheRolapCubeHierarchyMemberReader.readMemberChildren(RolapCubeHierarchy.java:600)
at mondrian.rolap.RolapCubeHierarchy$CacheRolapCubeHierarchyMemberReader.getMemberChildren(RolapCubeHierarchy.java:696)
at mondrian.rolap.SmartMemberReader.getMemberChildren(SmartMemberReader.java:177)
at mondrian.rolap.RestrictedMemberReader.getMemberChildren(RestrictedMemberReader.java:101)
at mondrian.rolap.SmartRestrictedMemberReader.getMemberChildren(SmartRestrictedMemberReader.java:85)
at mondrian.rolap.RolapSchemaReader.internalGetMemberChildren(RolapSchemaReader.java:186)
at mondrian.rolap.RolapSchemaReader.getMemberChildren(RolapSchemaReader.java:168)
at mondrian.rolap.RolapSchemaReader.getMemberChildren(RolapSchemaReader.java:162)
at mondrian.olap4j.MondrianOlap4jMember$3.execute(MondrianOlap4jMember.java:111)
at mondrian.olap4j.MondrianOlap4jMember$3.execute(MondrianOlap4jMember.java:110)
at mondrian.server.Locus.execute(Locus.java:86)
at mondrian.server.Locus.execute(Locus.java:71)
at mondrian.olap4j.MondrianOlap4jMember.getChildMemberCount(MondrianOlap4jMember.java:105)
at org.pivot4j.impl.QueryAdapter.canExpand(QueryAdapter.java:838)
at org.pivot4j.transform.impl.DrillExpandPositionImpl.canExpand(DrillExpandPositionImpl.java:44)
at org.pivot4j.ui.command.DrillExpandPositionCommand.canExecute(DrillExpandPositionCommand.java:69)
at org.pivot4j.ui.AbstractPivotRenderer.getCommands(AbstractPivotRenderer.java:146)
at org.pivot4j.ui.table.TableRenderer.access$100(TableRenderer.java:60)
at org.pivot4j.ui.table.TableRenderer$3.handleTreeNode(TableRenderer.java:649)
at org.pivot4j.ui.table.TableHeaderNode.walkChildrenAtColIndex(TableHeaderNode.java:915)
at org.pivot4j.ui.table.TableHeaderNode.walkChildrenAtColIndex(TableHeaderNode.java:931)
at org.pivot4j.ui.table.TableRenderer.renderBody(TableRenderer.java:611)
at org.pivot4j.ui.table.TableRenderer.render(TableRenderer.java:483)
at org.pivot4j.analytics.ui.ViewHandler.render(ViewHandler.java:597)
at org.pivot4j.analytics.ui.ViewHandler.structureChanged(ViewHandler.java:963)
at org.pivot4j.impl.PivotModelImpl.fireStructureChanged(PivotModelImpl.java:833)
at org.pivot4j.impl.PivotModelImpl$1.queryChanged(PivotModelImpl.java:111)
at org.pivot4j.impl.QueryAdapter.fireQueryChanged(QueryAdapter.java:197)
at org.pivot4j.impl.QueryAdapter.fireQueryChanged(QueryAdapter.java:182)
at org.pivot4j.impl.QueryAdapter.onQuaxChanged(QueryAdapter.java:1109)
at org.pivot4j.impl.QueryAdapter$1.quaxChanged(QueryAdapter.java:79)
at org.pivot4j.impl.Quax.fireQuaxChanged(Quax.java:163)
at org.pivot4j.impl.Quax.regeneratePosTree(Quax.java:648)
at org.pivot4j.transform.impl.PlaceHierarchiesOnAxesImpl.placeHierarchies(PlaceHierarchiesOnAxesImpl.java:88)
at org.pivot4j.transform.impl.PlaceHierarchiesOnAxesImpl.addHierarchy(PlaceHierarchiesOnAxesImpl.java:119)
at org.pivot4j.analytics.ui.NavigatorHandler.addHierarhy(NavigatorHandler.java:548)
at org.pivot4j.analytics.ui.NavigatorHandler.addHierarhy(NavigatorHandler.java:516)
at org.pivot4j.analytics.ui.NavigatorHandler.onDropOnAxis(NavigatorHandler.java:392)
at org.pivot4j.analytics.ui.NavigatorHandler.onDropOnAxis(NavigatorHandler.java:365)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.el.parser.AstValue.invoke(AstValue.java:191)
... 77 more

Related

Google Cloud Dataflow: getting the below error at runtime

I am writing data into a nested-array BigQuery table (the array inside the table is named merchant_array) using my Dataflow template.
Sometimes it runs fine and loads the data, but sometimes it gives me this error at runtime:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: com.fasterxml.jackson.databind.JsonMappingException: Null key for a Map not allowed in JSON (use a converting NullKeySerializer?) (through reference chain: com.google.api.services.bigquery.model.TableRow["null"])
"message" : "Error while reading data, error message: JSON parsing error in row starting at position 223615: Only optional fields can be set to NULL. Field: merchant_array; Value: NULL",
Does anyone have any idea why I am getting this error?
Thanks in advance.
I found the issue that was causing the error, so I am posting the answer to my own question; it might be helpful for someone.
The error was:
Only optional fields can be set to NULL. Field: merchant_array; Value: NULL
Here merchant_array is defined as an array that contains record (repeated) data.
As per the Google documentation:
ARRAYs cannot be NULL.
NULL ARRAY elements cannot persist to a table.
At the same time, I was using an ArrayList in my code, which allows null values. So before building the record-type data or setting it into the ArrayList, just remove any null TableRows that exist.
Hope this helps.
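A minimal sketch of that cleanup step, using plain java.util maps as stand-ins for com.google.api.services.bigquery.model.TableRow (field names here are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class StripNullRows {
    public static void main(String[] args) {
        // Stand-ins for the TableRow objects going into the repeated "merchant_array" field
        List<Map<String, Object>> merchantArray = new ArrayList<>();
        merchantArray.add(Map.of("merchant_id", "m1"));
        merchantArray.add(null); // a null element that BigQuery would reject
        merchantArray.add(Map.of("merchant_id", "m2"));

        // The fix: drop null elements before attaching the repeated field to the row
        merchantArray.removeIf(Objects::isNull);

        Map<String, Object> row = new HashMap<>();
        row.put("merchant_array", merchantArray);

        System.out.println(merchantArray.size()); // prints 2
    }
}
```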

Google Dataflow: how to insert RECORD non-repeated type field to Big Query?

I'm new to Dataflow. I've got a predefined schema containing a non-repeated RECORD field called "device":
device.configId: STRING
device.version: STRING
Using a ParDo transform, I tried inserting a TableRow with this kind of field, as follows:
TableRow row = new TableRow();
row.put("field1", "val1");
// nested TableRow for the non-repeated RECORD field "device"
TableRow device = new TableRow();
device.put("configId", "conf1");
device.put("version", "1.2.3");
row.put("device", device);
out.output(row);
I logged the table row, it looks like this:
{field1=val1, device={configId=conf1, version=1.2.3}}
I output it to a standard transform: BigQueryIO.write()
But the latter issues an error:
java.lang.RuntimeException: java.io.IOException:
Insert failed: [{"errors":[{
"debugInfo":"",
"location":"device.configid",
"message":"This field is not a record.",
"reason":"invalid"
}],"index":0}]
Not sure why, but note that the location spells "configid" in lowercase, not in camel case as in the original log.
Any ideas on how to insert such an object to BigQuery?
Found out the problem. Apparently, this error occurred only when the "configId" field was set to null rather than "conf1". To be exact, it was implicitly set to JSONObject.NULL, coming from some input object.
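A sketch of the resulting guard, again with java.util maps standing in for TableRow; the null check on configId is the point, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class NullSafeRecord {
    public static void main(String[] args) {
        String configId = null;   // e.g. a JSONObject.NULL leaking in from the input
        String version = "1.2.3";

        Map<String, Object> device = new HashMap<>();
        // Only set RECORD sub-fields that actually have values; per the finding
        // above, a null here makes the insert reject "device.configid"
        if (configId != null) {
            device.put("configId", configId);
        }
        if (version != null) {
            device.put("version", version);
        }

        Map<String, Object> row = new HashMap<>();
        row.put("field1", "val1");
        row.put("device", device);

        System.out.println(device.containsKey("configId")); // prints false
    }
}
```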

BigQuery Java API to read an Array of Record : "Retrieving field value by name is not supported" exception

My current table in BigQuery has a column that uses complex types. The "family" column is actually a list (the "repeated" feature) of records (with two fields: id and name).
When I try to get the first "id" value of a row with the following syntax:
FieldValueList c = qr.getValues().iterator().next();
c.get("family").getRepeatedValue().get(0).getRecordValue().get("id");
I get the exception:
Method threw 'java.lang.UnsupportedOperationException' exception.
Retrieving field value by name is not supported when there is no fields schema provided
This is a bit annoying because my table has a clearly defined schema, and when I run the "read" query with the same Java call, I can see that this schema is correctly found:
qr.getSchema().getFields().get("family").getSubFields().toString();
-->
[Field{name=id, type=INTEGER, mode=NULLABLE, description=null}, Field{name=name, type=STRING, mode=NULLABLE, description=null}]
Due to this exception, the workaround I have found is to pass the "index" of the record field instead of its name:
c.get("family").getRepeatedValue().get(0).getRecordValue().get(0).getLongValue();
However, it seems awkward to pass an index instead of a name.
Is there a better way to get the value of a field in a record inside an array? (If my column is just a record, without an array, I don't get the exception.)
Is this exception normal?
You can wrap the unnamed FieldValueList with a named one using the "of" static method:
FieldList subSchema = qr.getSchema().getFields().get("family").getSubFields();
FieldValueList c = qr.getValues().iterator().next();
FieldValueList.of(
        c.get("family").getRepeatedValue().get(0).getRecordValue(),
        subSchema).get("id");
The "of" method takes a FieldValueList (returned by getRecordValue() in this case) and a FieldList (subSchema here), and returns the same FieldValueList but with named access.

Write to a dynamic BigQuery table through Apache Beam

I get the BigQuery table name at runtime, and I pass that name to the BigQueryIO.write operation at the end of my pipeline to write to that table.
The code that I've written for it is:
rows.apply("write to BigQuery", BigQueryIO
        .writeTableRows()
        .withSchema(schema)
        .to("projectID:DatasetID." + tablename)
        .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED));
With this syntax I always get an error,
Exception in thread "main" java.lang.IllegalArgumentException: Table reference is not in [project_id]:[dataset_id].[table_id] format
How can I pass the table name in the correct format when I don't know beforehand which table the data should go to? Any suggestions?
Thank you.
Very late to the party on this, but I suspect the issue is that you were passing in a string, not a table reference.
If you create a TableReference, I suspect you'll have no issues with the above code:
com.google.api.services.bigquery.model.TableReference table = new TableReference()
        .setProjectId(projectID)
        .setDatasetId(DatasetID)
        .setTableId(tablename);

rows.apply("write to BigQuery", BigQueryIO
        .writeTableRows()
        .withSchema(schema)
        .to(table)
        .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED));

QueryDsl - issue with mapping query with case builder inside aggregates

I have a problem constructing a query that uses CASE expressions inside aggregates.
My setup: QueryDsl 3.3.2, JPA + Hibernate 3.6.10.Final
I have two entities (pseudocode):
Car {
    private String ownerNumber;
    private String color;
}
Client {
    private String number;
}
There is no direct relationship between the two; just for some reporting simplicity, Car holds the Client number as a reference.
The first problem:
As I found in various SO posts, I cannot join these entities on custom properties; I need to use, for example, a cross join: from client cl, car c where cl.number = c.owner_number. The drawback is that I'm not able to get all clients (including those without a car), only those for which the two properties match. This is not great for my reports, but let's say I can live with that.
The second problem:
In the same query I also need to aggregate some data: I need to count all of a user's cars and, say, all the red ones. So I need something like this:
SELECT cl.number, count(c.id) AS total, SUM(CASE WHEN c.color='RED' THEN 1 ELSE 0 END) AS allRed FROM client cl, car c WHERE cl.number=c.ownerNumber GROUP BY cl.number;
This query works as expected.
For QueryDsl I have:
QClient client = QClient.client;
QCar car = QCar.car;

NumberExpression<Integer> redCarExp = new CaseBuilder()
        .when(car.color.eq("RED"))
        .then(1)
        .otherwise(0);

List<Tuple> list = new JPAQuery(em)
        .from(client, car)
        .where(client.number.eq(car.ownerNumber))
        .groupBy(client.number)
        .list(client.number, car.id.count(), redCarExp.sum());
Running this throws the exception below. Without the final sum() part everything works fine. I'm not sure where the issue is.
java.lang.ClassCastException: org.hibernate.hql.ast.tree.ParameterNode cannot be cast to org.hibernate.hql.ast.tree.SelectExpression
org.hibernate.hql.ast.tree.CaseNode.getFirstThenNode(CaseNode.java:44)
org.hibernate.hql.ast.tree.CaseNode.getDataType(CaseNode.java:40)
org.hibernate.hql.ast.util.SessionFactoryHelper.findFunctionReturnType(SessionFactoryHelper.java:402)
org.hibernate.hql.ast.tree.AggregateNode.getDataType(AggregateNode.java:82)
org.hibernate.hql.ast.tree.SelectClause.initializeExplicitSelectClause(SelectClause.java:154)
org.hibernate.hql.ast.HqlSqlWalker.useSelectClause(HqlSqlWalker.java:857)
org.hibernate.hql.ast.HqlSqlWalker.processQuery(HqlSqlWalker.java:645)
org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:685)
org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301)
org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244)
org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:256)
org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:187)
org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:138)
org.hibernate.engine.query.HQLQueryPlan.&lt;init&gt;(HQLQueryPlan.java:101)
org.hibernate.engine.query.HQLQueryPlan.&lt;init&gt;(HQLQueryPlan.java:80)
org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124)
org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135)
org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770)
org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:277)
sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler.invoke(ExtendedEntityManagerCreator.java:342)
com.sun.proxy.$Proxy63.createQuery(Unknown Source)
sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:262)
com.sun.proxy.$Proxy63.createQuery(Unknown Source)
com.mysema.query.jpa.impl.AbstractJPAQuery.createQuery(AbstractJPAQuery.java:129)
com.mysema.query.jpa.impl.AbstractJPAQuery.createQuery(AbstractJPAQuery.java:97)
com.mysema.query.jpa.impl.AbstractJPAQuery.list(AbstractJPAQuery.java:242)
com.mysema.query.jpa.impl.AbstractJPAQuery.list(AbstractJPAQuery.java:236)
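No answer is recorded above. One workaround sometimes suggested for this kind of Hibernate parse failure (the literal THEN/ELSE values of the CaseBuilder are sent as JDBC parameters, which CaseNode cannot type inside an aggregate) is to inline the whole aggregate as a template, so Hibernate sees literals rather than parameters. This is only a sketch under that assumption, untested against this exact setup, using QueryDsl 3.x's com.mysema.query.support.Expressions:

```java
// Sketch only: inline the CASE so the then/else values appear as literals
// in the generated HQL instead of bind parameters
NumberExpression<Long> redCars = Expressions.numberTemplate(Long.class,
        "sum(case when {0} = 'RED' then 1 else 0 end)", car.color);

List<Tuple> list = new JPAQuery(em)
        .from(client, car)
        .where(client.number.eq(car.ownerNumber))
        .groupBy(client.number)
        .list(client.number, car.id.count(), redCars);
```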