Is it possible to generate a "Print When Expression" that detects the last element in an XML datasource file?
Basically, I have a report with a column break inserted after a sub-report in a detail band so I can clearly define new pages for the beginning of a new record. But it always leaves me with a blank last page. So I am hoping I can prevent this with a print-when condition that suppresses the column break if it is the last record element in the XML datasource.
Is this even possible?
The problem is that you don't know it's the last element until after you look for the next element. I don't think there is a simple way.
In principle it should be fine to do something like this:
Create a super-report around the entire report. Run the same query in the super-report. Count the rows. Then pass the number of rows to the original report (which is now a subreport) and re-run the query again. Clearly, running the query twice is another drawback.
If the data source were SQL, then I would suggest modifying the SQL to return the number of rows as part of the result set. But for non-SQL data sources, you need some way of knowing the number of rows (well... some way of identifying the last row) before you reach the last row.
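For example (just a sketch, assuming the super-report passes down a hypothetical TOTAL_ROWS parameter holding that count), the Print When Expression on the column break in the subreport could be something like:
$V{REPORT_COUNT}.intValue() < $P{TOTAL_ROWS}.intValue()
Because REPORT_COUNT only equals the total on the last record, the break is suppressed exactly there.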
Many years late...
If you are sure your datasource is a JRBeanCollectionDataSource, you could use:
$V{REPORT_COUNT} == ((net.sf.jasperreports.engine.data.JRBeanCollectionDataSource)ORIGINAL_DATA_SOURCE()).getData().size()
I'm still relatively new to SQL and Pentaho.
I've pulled a table with two different IDs and need to run a query for each specific instance.
For example,
SELECT *
FROM Table
WHERE RecordA = 'value in column A'
AND RecordB = 'value in column B'
I need the results back, either appended to new columns in the original table or part of their own text file output.
I was initially looking at using a formula for this inside of Pentaho, but couldn't quite figure it out. Since I have the query written, I threw it into Excel and built the concatenated results (so a string of 350 or so queries that I need to run). I'm just not sure how to accomplish this - I tried the Execute SQL Script step inside of Pentaho, but it doesn't seem to produce any output.
Any direction would be useful. I've searched a little but have come up short so far, possibly because I am still pretty new to this platform.
You can accomplish this in a lot of ways, with a "Database Lookup" step for example, but I usually do it in a quite simple way, and here is an example for your tests; I hope it helps.
The idea here is to have two Table Input steps. The first one will fetch the IDs we want to look at; for example, you might use a SQL query similar to the one sketched below. The result will be a 1-column stream of rows.
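That query is not reproduced here; as a rough sketch (the column name is borrowed from the question, so treat it as an assumption about your schema), it could be as simple as:
SELECT DISTINCT RecordA
FROM Table
For two lookup values you would return both ID columns, which the two '?' placeholders described below take care of.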
Next we have a Table Input that reads the rows received and executes its query for each row. I'll add a screenshot with the options that I selected.
What it does is replace a placeholder '?' with the data that is received. If you need two columns, use two '?', but remember that it will replace the first one with the first column and the second one with the second column.
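As a rough sketch, again borrowing the table and column names from the question, the query inside this second Table Input could look like:
SELECT *
FROM Table
WHERE RecordA = ?
AND RecordB = ?
where the first '?' is filled from the first incoming column and the second '?' from the second.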
And you are good to go. Test it a couple of times and good luck.
And the config for the second table input.
I have a query that will be populating a form, and then the form will allow the data set to be edited. The issue I am having is that the query is pulling the last row that is normally used to add a new record. This results in a row that shows all fields as blank, but leaves one field with "null".
I played around with the query and was able to find a workaround by selecting "distinct" records; the problem is that when you select with distinct, you cannot edit the data set. Is there any other way around this?
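For illustration only (the real table and field names are not shown in the question, so these are placeholders), the DISTINCT workaround mentioned above amounts to something like:
SELECT DISTINCT Field1, Field2
FROM MyTable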
I can upload an example of the database if needed.
Thanks!
edit: picture to show the issue: https://imgbomb.com/i/?rO1sp
I'm trying to process some data and store it in a data warehouse. To do this, I wanted to store the dimensions in one transformation and the fact (I only have one) in another transformation, so I can use a job to execute the first one, copy the rows to the result and get them into the second transformation.
In the first transformation, I read an Excel file and separate the data into several streams. It is data from a baptism, so I have one stream for the person, another one for the parents, another one for the sponsors, and so on... At the end of each stream, I insert the data into the database and return the autogenerated PK (it is an auto-increment id).
In the second one, I only have Get rows from result and want to write the rows to a txt file (just to check that it is being done correctly). The problem is that the file is created but it is empty. I assume that if I leave the fields in Get rows from result empty, it gets all fields.
What am I doing wrong?
In the end, what I want is to have one Copy rows to result at the end of each stream in the first transformation and get all of this data in the second one.
In "Insert Pare Padrina" I return id_pare_padrina which is autogenerated, and the same with "Insert Mare Padrina" (I have more streams which I also have to include them into result). This transformation is not executed per row because I need values of other rows.
Thank you!
In order to pass the data from the first transformation to the second transformation, you need to set certain parameters like:
1. First of all, in the transformation settings of the second transformation (at the job level), check the items as in the image below:
Copy Previous results to parameters will ensure that all the results/data in the "Copy Rows to Result" step is getting properly passed to the next level.
Execute for every input row: will execute the second transformation for every row coming out of the first transformation. This is optional, based on your requirement.
2. In the same transformation settings, define the "Parameters" in the Parameters tab. Check the image below:
Here, NAME is the parameter I have defined. So when you are using the "Get rows from result", you can define these parameter names.
3. Instead of using "Get rows from result", you can alternatively use the "Get Variables" step to fetch all the variables coming from the previous step. All you need to do is define the parameter names inside the ktr file (CTRL + T); see the sketch after this list. (Actually, I have implemented it that way in practice and it worked for me.)
4. Since "copy rows to result" step uses heap memory, defining multiple instances of this step might exhaust the memory space quickly and your code might fall in trouble. Ideally use a single instance of this step.
But if your data interation is only one row, best option would be to use "set variables" step.
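As a rough sketch of points 2 and 3: if the job-level parameter is called NAME, as in the example, then inside the second transformation a Table Input step with "Replace variables in script" enabled could reference it directly (the table and column names here are only placeholders):
SELECT *
FROM fact_table
WHERE person_name = '${NAME}'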
I assume you might have missed some of these sections in the job.
You can read more on Copy rows to result here.
Hope it helps :)
I have a problem creating a Crystal Report which will show a part of one manufacturing process.
So, I need your help....
I have four different components that form one bigger component (or product). Each of these small components passes through different production operations, but the same component doesn't pass through all production operations.
And I need a Crystal Report which shows every article (component) with the number of finished components in each operation.
Here is an example of the SQL result (ordered by operations):
So, you can see that the article with articleID = '29183' goes through the first and last operations... also, articleID = '17275' goes through the second and last operations... I think that is all clear from the picture...
And all I need is a report that will show this in columns, like this:
In the report, I made a group by ArticleID, so the article (component) appears in only one row... And after that, I need values in the columns (one column per operation) which correspond to every article...
Many thanks... I have been trying to solve this for a few days, but I don't know how... I tried crosstabs, dictionaries and lists, but nothing helped.
You've already grouped by ArticleID so that each article has its own single line; that's a good start. Now you just need to separate the 3 operations with 3 formulas and aggregate them to the group level.
For example, the formula for the first operation would look like:
//If row is a "first operation" then display the finished data element
if {table.Operation}="first operation" then {table.Finished}
Then to display in the Group Footer you could just use a max() summary function on the formula you just created.
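For instance, if the formula above is saved as {@FirstOperation}, a summary formula like the following, placed in the Group Footer, would show the finished quantity of the first operation for each article (field names follow the formula above):
Maximum({@FirstOperation}, {table.ArticleID})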
I have a few reports built using Report Builder 3 for MSSQL 2008 Reporting Services.
Some fields in my report are showing "#Error"; instead of this, I want to show just a simple "-". Is there any built-in function or custom code to overcome this?
I'd still really like to see your formula but you seem determined not to show it, so I'll take a wild stab at answering without it. I imagine that you are doing something like dividing the field on the current row by the field on the previous row. However, this would give you Infinity on the first line rather than #Error so there is something else going on. But let's run with this anyway since we don't have your formula.
The most common way to solve this is to check for Nothing being returned for the Previous function, usually indicating that you are on the first row (assuming your field always has data). This has the advantage of also working on fields that are not guaranteed to have a value.
=IIF(IsNothing(Previous(Fields!MyField.Value)), "-", Fields!MyField.Value / Previous(Fields!MyField.Value))
Here is another way you could do it using the row number, which will always check for the first row regardless:
=IIF(RowNumber(Nothing) = 1, "-", Fields!MyField.Value / Previous(Fields!MyField.Value))
This assumes that the error is being caused by the Value expression and not by some other mechanism, such as an expression applied to other properties like Format or Color that is invalid when there is no previous row.