I'm using DynamicReports to build huge PDF files (around 80,000 pages) and, for now, the solution I found was to create intermediate files and merge them after processing. The last challenge is adding page numbers, but the default counting obviously gets messed up after merging. So I need some way to set the starting page number when creating the temporary PDF files. The three methods available don't allow setting it. Is it possible? How do I do it?
Thanks in advance.
Yes, it is possible. Although it's hard to find in the documentation, the report builder returned by report() has exactly the method that's needed:
report.setStartPageNumber(int)
As stated by Ricardo in the DynamicReports forum.
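For example, here is a minimal sketch of how the intermediate chunks might be built; the column, the toy data source, and the fixed pagesPerChunk value are placeholder assumptions, and only setStartPageNumber itself comes from the answer above:

    import static net.sf.dynamicreports.report.builder.DynamicReports.*;

    import java.io.FileOutputStream;
    import net.sf.dynamicreports.report.datasource.DRDataSource;

    public class ChunkedReport {
        public static void main(String[] args) throws Exception {
            int pagesPerChunk = 1000; // placeholder: assumes each temp file's page count is known
            for (int chunk = 0; chunk < 3; chunk++) {
                DRDataSource data = new DRDataSource("name"); // toy single-column data source
                for (int row = 0; row < 100; row++) {
                    data.add("row " + (chunk * 100 + row));
                }
                report()
                    .columns(col.column("Name", "name", type.stringType()))
                    .pageFooter(cmp.pageNumber())                  // prints the page number
                    .setStartPageNumber(chunk * pagesPerChunk + 1) // offsets each temp file's counter
                    .setDataSource(data)
                    .toPdf(new FileOutputStream("chunk-" + chunk + ".pdf"));
            }
        }
    }

In practice a chunk's page count isn't fixed, so you would probably render each chunk first and use the running page total (e.g. via report.toJasperPrint().getPages().size()) as the next chunk's starting number.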
I am trying to create an Age Analysis for Creditors using a dynamic date slicer.
I followed each step specified on David Churchward's blog, but I'm not able to replicate what he suggested there.
Here is the result of what I tried:
I'm expecting to see these values each in their own Ageing bucket based on what is outstanding.
Please download my PBIX file to see for yourself, and advise what I did wrong.
The Excel source for PBIX is also in the folder.
Thank you.
The blog you're referring to is quite old, and DAX has changed a lot since then.
Additionally, Power BI now has a built-in feature called binning which can do something similar to what you're looking for.
I was able to generate the output below using that feature, which automatically groups the data based on the bin size.
There is also a related feature called "Grouping", where you can manually choose the groups and their ranges. If you're up for it, you can use this too. Below is the output for that:
I uploaded the file with these changes in the same folder.
Another resource that might be helpful for you is Radacad's article on dynamic banding.
I need to design a pipeline using NiFi, but I have some questions, as I am torn between two approaches and unsure which processors to use, so maybe you can help me.
The scenario is the following: I need to ingest some .csv files into my HDFS. They do not contain the date I want to use to partition the Hive tables I will use later, so I thought of two options:
1. At some point during the .csv processing, have NiFi launch some kind of code snippet that modifies the .csv file, adding a column with the date.
2. Create a temporary (internal?) table in Hive, alter it to add the column, and finally insert into the table that is partitioned by date.
I am unsure which option is better (memory-wise, simplicity, resource management), whether it's even possible, or whether there is a better way to do it. I am also unsure which NiFi processors to use.
So any help is appreciated, thanks.
You should be able to do #1 easily in NiFi without writing any code :)
The steps would be something like this:
Source processor to get your CSV from somewhere, probably GetFile
UpdateAttribute to add an attribute for the current date
UpdateRecord with a CSVReader and CSVRecordSetWriter that adds a new date field with the value from #2 (see the property sketch below)
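For illustration, the key processor properties might look something like this; the attribute name csv.date and the record path /date are assumptions rather than required names, and the CSVRecordSetWriter's schema must include the new date field:

    UpdateAttribute (one dynamic property):
        csv.date = ${now():format('yyyy-MM-dd')}

    UpdateRecord:
        Record Reader              = CSVReader
        Record Writer              = CSVRecordSetWriter
        Replacement Value Strategy = Literal Value
        /date                      = ${csv.date}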
I've created an example of how to do this and posted the template here:
https://gist.githubusercontent.com/bbende/113f8fa44250c09a5282d04ee600cd09/raw/c6fe8b1b9f31bb106f9c816e4fd5ea90ebe19f80/CsvAddDate.xml
Save that XML file and use the palette on the left of the NiFi canvas to upload it as a template. Then instantiate it by dragging the template icon from the top toolbar onto the canvas.
I have a simple query regarding the Wikimedia/Wikipedia API.
I have to fetch the changes made in a list of "revids". I am able to fetch the XML content for a batch of "revids", but I am unable to extract only the changed text.
Does the API provide any way to extract only the changed sentences? If not, is there an external script/module that can do this job?
Query to fetch the revision details: https://en.wikipedia.org/w/api.php?action=query&prop=info|revisions&rvprop=user|userid|ids|tags|comment|content&format=jsonfm&revids=1228415
I would appreciate any suggestions/solutions that could solve this issue!
(Currently, I am using the wikitools Python module to make the queries.)
You can get the diff between the old and new text with action=compare, but it segments text by wikitext lines, not sentences, isn't meant to be machine-readable, and is generally not that helpful. Since you are using Python, the client-side library deltas will probably work better for you.
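To show what that looks like, here is a minimal sketch of the action=compare call in Java; the revision ID and User-Agent string are placeholders, and torelative=prev should compare a revision with its neighboring revision:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CompareRevisions {
        public static void main(String[] args) throws Exception {
            // Compare revision 1228415 (from the question) with the revision before it.
            String url = "https://en.wikipedia.org/w/api.php"
                    + "?action=compare&format=json"
                    + "&fromrev=1228415&torelative=prev";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    // Wikimedia's API etiquette asks for a descriptive User-Agent.
                    .header("User-Agent", "revision-diff-example/0.1 (you@example.org)")
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // The "compare" -> "*" field contains HTML diff table rows segmented
            // by wikitext line, which you would still have to parse yourself.
            System.out.println(response.body());
        }
    }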
This is going to be vague, hopefully not annoyingly so. I know very little about SharePoint, but I'm asking on behalf of someone more knowledgeable who is under a lot of crippling pressure. Unfortunately, I'm going to be held responsible for the project (it's due before Christmas!!), so I need to see what I can figure out on my own to help out. Please allow my desperation and helplessness to excuse any problems with this question.
We've created an InfoPath form that generates xml files that will be uploaded to SharePoint. The data from these files will be aggregated and used to generate reports. The biggest issue is that the users will be spread out over three locations, and the info generated from each location needs to be firewalled from the others. But we need the xml files from all three locations to go to the same place in order to make the aggregation feasible with minimal manual work.
I've read something about SharePoint groups (http://technet.microsoft.com/en-us/library/cc262778%28v=office.14%29.aspx) and figured that might be the way of doing it, so long as 1) the xml documents could all go to the same library/repository and 2) that shared repository would only show each group their own documents. For at least two users we also need a master view that shows all of the documents regardless of the group that created them.
That's the main question. Ultimately we'll also need a similar way of making the generated reports (tables and charts) available to the creators of the xml files AND to a set of users at each location who won't be able to view or create those xml files. But first things first, I guess.
Is this possible and feasible? Any hints/links that could get us started down this path?
I think in your case the best option is to create a folder for each group and set permissions on them so that only the specific group of users can access that folder. Do the same with a separate library for the reports. Then you'd just set up a list view that flattens the folder hierarchy to show all items at once, for the users who need the master view.
You could also set per-document permissions programmatically in an event receiver; however, there is a limit on the number of unique access control lists (unique permission scopes) per library, which is 50,000. So depending on the number of XML files you are going to manage, you may reach this limit.
SSRS newbie question here...
I have a table where one column is varbinary(max) data. I would like to build a report that makes this data available for download as a hyperlink, so the user can just click on the item and get a file download dialog for the binary data. In this particular case, the binary data happens to be the content of old PDF files, but that shouldn't matter.
I tried searching around, but I can't find any pointers on how to do this. It seems to me that it should be possible. There are ways to display images in a report using varbinary data, so it makes sense that one should be able to make arbitrary binary data downloadable from a report, right?
No, it is not possible as far as I can tell. I don't see any way to do this.
The workaround I used was to create a simple ASP.NET page that serves the binary content through a URL. I then hyperlinked to that page from the SSRS report, passing the right variables in the URL. Works fine for me; YMMV if you have to worry about URL hacking or security issues.
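For illustration, here is the same idea sketched as a Java servlet rather than an ASP.NET page; the table name "documents", the column "content", and the "id" parameter are all hypothetical:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Streams the varbinary content of one row as a PDF download.
    public class FileDownloadServlet extends HttpServlet {
        private DataSource dataSource; // assume wired up via JNDI or the container

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            int id = Integer.parseInt(req.getParameter("id")); // validate properly in real code
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT content FROM documents WHERE id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                        return;
                    }
                    byte[] data = rs.getBytes("content");
                    resp.setContentType("application/pdf");
                    resp.setHeader("Content-Disposition",
                            "attachment; filename=\"document-" + id + ".pdf\"");
                    resp.setContentLength(data.length);
                    try (OutputStream out = resp.getOutputStream()) {
                        out.write(data);
                    }
                }
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }
    }

The report's hyperlink action then just points at that page, e.g. with an expression along the lines of ="http://server/download?id=" & Fields!Id.Value (field name hypothetical).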