I'm getting this error when I try to process a data mining structure with a nested table in it.
Error 5 Errors in the metadata manager. The 'XYZZZZZ' dimension in the 'Dim XYZ' measure group has either zero or multiple granularity attributes.
Must have exactly one attribute. 0 0
Any idea why this is happening?
Can you post your mining structure's code?
I think you have to create it with the MISSING_VALUE_SUBSTITUTION parameter to get rid of the zero granularities. It always solves my problem when I have a time series with a gap in it; a rough sketch follows.
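For illustration, a hedged DMX sketch of where that parameter goes, assuming a Microsoft Time Series model; the structure, model, and column names are placeholders:

// Placeholder names throughout; MISSING_VALUE_SUBSTITUTION is the point.
ALTER MINING STRUCTURE [My Structure]
ADD MINING MODEL [My Time Series Model]
(
    [Reporting Date],    // the structure's KEY TIME column
    [Amount] PREDICT
)
USING Microsoft_Time_Series (MISSING_VALUE_SUBSTITUTION = 'Previous')
// 'Previous' fills each gap with the prior value; 'Mean' or a numeric
// constant are the other documented options.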
I have Superset using Impala as the main data source. Most of the time, every query runs smoothly and I can build charts and dashboards with ease. Now I need to generate a Table Chart containing around 100k records and 30+ columns, but I am having some issues. It is basically a SELECT *; no aggregations, filtering, or ordering are being used.
When the data is relatively big, Superset just throws a bunch of errors (they appear to be coming from Impala), but I cannot find any information about those errors. I have tried paginating the results, but it did not work. Also, when I run the query on the Superset chart page, it doesn't take long; it just displays the error. The only way any information gets displayed in the Table Chart is when I set the "Row limit" option to 10 records, but that will not work out for me.
These are the errors that keep occurring:
impala error: Invalid session id: f344bf1aa2a42e2b:ad1df0047d7f909c
impala error: No protocol version header
When I use the Oracle connection that I also have, I can generate a table chart from a large number of records with no problem.
My setup is the following:
Impala v3.2.0-cdh6.3.3
Superset v0.36.0
So, is this a problem with Superset or with Impala? Could it have something to do with Superset's configuration?
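One way to narrow that down (a sketch, not from the thread; the host and table names are placeholders) is to run the same query directly against Impala with impyla, the Python driver that Superset's Impala connection typically uses, and see whether the session errors reproduce outside Superset:

# Sketch: run the problem query straight against Impala to see whether
# the errors come from Impala itself or from Superset's handling.
# 'impala-host' and 'my_table' are placeholders.
from impala.dbapi import connect

conn = connect(host='impala-host', port=21050)  # 21050 = default HiveServer2 port
cur = conn.cursor()
cur.execute('SELECT * FROM my_table LIMIT 100000')
rows = cur.fetchall()
print(len(rows))  # if this succeeds, the failure is likely on the Superset side
cur.close()
conn.close()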
I am reading a table via a Snowflake reader node. When the table has a smaller number of columns/attributes (around 50-80), it is read fine onto the Mosaic Decisions canvas. But when the number of columns increases (approx. 385), the Mosaic reader node fails. As a workaround I tried using a WHERE clause with 1=2; in that case it pulls in the structure of the table. But when I try to read the records, even with a limit (only 10 records) applied to the query, it throws a connection timeout error.
I faced a similar issue while reading a table with approx. 300 columns and managed it with the help of the input parameters available in Mosaic. In your case you will have to change the copy-field variable used in the query to 1=1 at run time.
The following steps can be used to achieve this (a sketch of the resulting reader query follows the list):
1. Create a parameter (e.g. copy_variable) with a default value of 2 for the copy-field variable.
2. In the reader node, write the SQL with 1 = $(copy_variable). While validating, this is the same as the 1=2 condition, so it should validate fine.
3. Once it has validated and the schema is generated, update the default value of $(copy_variable) to 1 so that, at run time, you still get all the records.
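For illustration, a minimal sketch of the reader node SQL under these assumptions: BIG_TABLE is a placeholder table name, and copy_variable is the Mosaic parameter created in step 1.

SELECT *
FROM BIG_TABLE
WHERE 1 = $(copy_variable)
-- With the default copy_variable = 2, the predicate is 1 = 2 (always false),
-- so validation only has to fetch the schema. Changing the default to 1
-- makes it 1 = 1 (always true), so the run returns all the records.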
I want to create an alert that triggers whenever one of the following counter statistics is not zero:
a.b.c.failed
a.b.e.failed
I already use these statistics separately on a dashboard page, but as they occur rarely, I'd like an alert.
It appears I have to make a sum composite so that I can trigger the alert when the sum is above zero. I think the composite would look something like:
sum(series("a.b.*.failed",{}))
However, every attempt I make gives the error:
Unable to execute composite: ["error": "Requested MD data from SD endpoint"]
There is another thread that suggested replacing the {} with "*" (including the quotes). This no longer gives an error, but it gives a bizarre result: it's above zero all the time, even though the 'failed' statistics are only very rarely above zero.
The correct expression for my case is:
sum(derive(series("a.b.*.failed","*")))
Using "*" works to select the source.
Derive gives the change of each statistic instead of the cumulative total (but I'm not sure why the cumulative total was showing up - it is not shown normally for these statistics).
Sum adds the change of the different statistics.
I don't understand why {} doesn't work. I think that is related to the mystery of the meaning of the error message, which uses undocumented terminology ("MD" and "SD" endpoints). Librato's documentation of its composite metrics language is very minimal, providing few examples and little explanation of its terms and technical foundations.
I have a Pentaho transformation that reads a text file and checks some conditions (which can produce errors, such as "the number should be a positive number"). From these errors I'm creating an Excel file, and for my job I need the number of lines in this error file, plus a log of which lines had problems.
The problem is that sometimes I get the error "the return values id can't be found in the input row".
This error does not happen every time. The job runs every night; sometimes it works without any problems for a month, and then one fine day I just get this error.
I don't think it comes from the file, because if I execute the job again with the same file it works. I can't understand the reason for the failure: the message mentions the value "id", but I don't have such a value/column. Why is it searching for a value that doesn't exist?
Another strange thing is that the step that fails shouldn't normally be executed at all (as far as I know), because no errors were found, so no rows reach that step at all.
Maybe the problem is connected with the "Prioritize Stream" step? That is where I collect all the errors (which use exactly the same columns). I tried putting a sort before the grouping steps, but it didn't help. Now I'm thinking of trying a "Blocking step".
The problem is that I don't know why this happens or how to fix it. Any suggestions?
Check if all your aggregates in the Group By step have a name.
However, sometimes the error comes from a previous step: the Group By (count...) requests data from the Prioritize Stream step, and if that step has an error, the error is mistakenly reported as coming from the Group By rather than from the Prioritize Stream.
Also, you mention a step which should not be executed because there is no data: I do not see any Filter that would prevent rows with a missing id from flowing from the Prioritize Stream to the count.
This is a bug. It happens randomly in one of my transformations that often ends up with an empty stream (no rows). It mostly works, but once in a while it gives this error. It seems to only fail when the stream is empty, though.
I am using Reporting Services 2012 and have a chart that uses a dataset whose data changes based on parameters.
The data is just a bunch of periods formatted as YYYYMM (an int), a machine number (an int), and values of type decimal(12,2). We select based on machine number and period, pull back all those decimal(12,2) values, and show them in the chart.
It works for most machines, but for a few machines we get the following error:
An error occurred during local report processing. An error occurred during report processing. The processing of Parent for the chart 'chart1' cannot be performed. Cannot compare data of types System.Int32 and System.String. Please check the data type returned by the Parent.
A machine number that works is 516; one that doesn't is 517. Nothing is different in the returned SQL results between 516 and 517 besides the numbers themselves (5.23 instead of 5.17, for example). There are no NULLs in the data, no zeros, and definitely no strings.
Any help as to where to look next would be appreciated.
I don't know if this will be helpful or not, but the fix that eliminated the error was to change the SQL query to use
cast(machno as varchar)
everywhere machno appeared in the query. This doesn't explain why the chart needed a string instead of an int.
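For illustration, a minimal sketch of that change; everything except machno (the table name, the other columns, and the parameter names) is a placeholder:

-- Sketch only: per the fix above, machno is cast to varchar
-- wherever it appears in the query.
SELECT CAST(machno AS varchar(10)) AS machno,
       period,   -- YYYYMM, int
       amount    -- decimal(12,2)
FROM machine_stats
WHERE CAST(machno AS varchar(10)) = @MachNo
  AND period = @Period
-- The chart's Parent grouping then always receives machno as a string,
-- which avoids the System.Int32 vs System.String comparison.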