Pentaho report.setQuery - PDF

I'm able to generate a report by pre-configuring the DB connection and queries inside the PRPT, passing parameters, and calling it from Java. That works fine.
Problem: I am trying to use report.setQuery("dummyQuery","SELECT NAME,ID FROM test.person"); (i.e. some arbitrary query) to generate the report dynamically. Is this possible?
When I use report.setQuery with the same query, my report generates a blank PDF.
I think I may need to configure an HQL data source to achieve this.
In the pre-configured PRPT I'm displaying two values, NAME and ID, and I can generate that PDF dynamically by passing parameters.
Please help me generate this PDF dynamically through setQuery, or explain how setQuery is meant to be used.
I really want to learn how to generate Pentaho PDF reports.
package com.report;
import java.io.File;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.pentaho.reporting.engine.classic.core.ClassicEngineBoot;
import org.pentaho.reporting.engine.classic.core.MasterReport;
import org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.PdfReportUtil;
import org.pentaho.reporting.libraries.resourceloader.Resource;
import org.pentaho.reporting.libraries.resourceloader.ResourceManager;
/**
* Servlet implementation class Generate
*/
@WebServlet("/Generate")
public class Generate extends HttpServlet {
private static final long serialVersionUID = 1L;
/**
* @see HttpServlet#HttpServlet()
*/
public Generate() {
super();
// TODO Auto-generated constructor stub
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
try{
response.reset();
response.setContentType("application/pdf");
ClassicEngineBoot.getInstance().start();
ResourceManager manager = new ResourceManager();
manager.registerDefaults();
String PrptPath ="C:\\Users\\3692902\\Desktop\\pentahoTest.prpt";
//generate report through pre-configured prpt (db connection + query)
Resource res = manager.createDirectly(new File(PrptPath), MasterReport.class);
MasterReport report = (MasterReport) res.getResource();
report.getParameterValues().put("IDValue",101);
//generate report through query?????
//report.setQuery("SELECT ID FROM test.person");
PdfReportUtil.createPDF(report,response.getOutputStream());
}
catch(Exception e){
e.printStackTrace();
}
}
/**
* @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse
* response)
*/
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
}
}

You are loading the report from a PRPT file, so why don't you predefine your query in Report Designer by adding a SQL data source (menu: Data -> Add Data Source -> SQL)?
Then, assuming you want to filter by IDValue, you would set your query (inside PRD, NOT in your code) to:
SELECT ID FROM test.person WHERE ID = ${IDValue}
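If you really do need to inject the SQL at runtime instead, note that setQuery() on its own is not enough: the value you set is a query *name*, and it has to match a named query registered in the report's data factory, otherwise the report has no rows and you get a blank PDF. A minimal sketch of wiring one up in code (the MySQL driver, URL and credentials here are placeholders, not from the question):

import org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.DriverConnectionProvider;
import org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory;

// Describe the JDBC connection the engine should open (placeholder settings).
DriverConnectionProvider provider = new DriverConnectionProvider();
provider.setDriver("com.mysql.jdbc.Driver");
provider.setUrl("jdbc:mysql://localhost:3306/test");
provider.setProperty("user", "dbuser");
provider.setProperty("password", "dbpassword");

// Register the SQL under a query name, then point the report at that name.
SQLReportDataFactory dataFactory = new SQLReportDataFactory(provider);
dataFactory.setQuery("dummyQuery", "SELECT NAME, ID FROM test.person");
report.setDataFactory(dataFactory);
report.setQuery("dummyQuery");

The PRPT layout must still contain elements bound to the NAME and ID fields; swapping the data factory only changes where the rows come from.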

That's the "hardest" way to generate a report with reporting engine embedded in java.
Download Pentaho Report Designer matching the version of the engine you are using (eg: Pentaho Report Designer 5.4, Pentaho Reporting Engine 5.4).
From Pentaho Report designer, create a new report, and start creating it (1. Creating your first report)create your connections (or any datasource you need), create your queries, and define your parameters.
Finally, use your own code above to call the report from the servlet and display it.
Remember, Pentaho Report Designer is where you define ALL what you need in the report, then just send parameters and generate it from everywhere!.

Related

Netty client-server login: how to have channelRead return a boolean

I'm writing client-server applications on top of Netty.
I'm starting with a simple login server that validates info sent from the client against the database. This all works fine.
On the client side, I want to use if statements once the response is received from the server, depending on whether the login credentials validated, and that also works fine. My problem is that channelRead() does not return anything, and I cannot change that; I need it to yield a boolean that lets the login attempt succeed or fail.
Once channelRead() returns, I lose the content of the data.
I tried adding the msg to a List but, for some reason, the message data is not stored in the List.
Any suggestions are welcome. I'm new to this, and it's the only approach I've figured out so far. I have also tried setting boolean flags inside channelRead(), but since the method is void, the values are gone once it returns.
The following is my last attempt, trying to insert the message data into the list I created:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.ListIterator;
public class LoginClientHandler extends ChannelInboundHandlerAdapter {
Player player = new Player();
String response;
public volatile boolean loginSuccess;
// Object message = new Object();
private Object msg;
public static final List<Object> incomingMessage = new List<Object>() {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
// incomingMessage.clear();
response = (String) msg;
System.out.println("channel read response = " + response);
incomingMessage.add(0, msg);
System.out.println("incoming message = " + incomingMessage.get(0));
}
How can I get the message data out of channelRead(), or use that method to drive a change in my business logic? I want it either to display a message telling the client the login failed and to try again, or to succeed and load the next scene. The business logic itself works fine; I just can't get it to work with Netty, because none of the handler methods return anything I can use.
ChannelInitializer
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.Delimiters;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
public class LoginClientInitializer extends ChannelInitializer <SocketChannel> {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new LoginClientHandler());
}
}
To get the server to write data to the client, call ctx.write(). Here is a basic echo server and client example from the Netty in Action book: https://github.com/normanmaurer/netty-in-action/blob/2.0-SNAPSHOT/chapter2/Server/src/main/java/nia/chapter2/echoserver/EchoServerHandler.java
There are several other good examples in that repo.
I highly recommend reading the "netty in action" book if you're starting out with netty. It will give you a solid foundational understanding of the framework and how it's intended to be used.
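If you'd rather keep your blocking login flow, one common pattern (a sketch, not from the book; the "LOGIN_OK" token is a made-up protocol detail) is to have the handler complete a CompletableFuture when the response arrives, and let your business logic block on, or chain from, that future:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.CompletableFuture;

public class LoginClientHandler extends ChannelInboundHandlerAdapter {

    // Completed exactly once, when the server's login response arrives.
    private final CompletableFuture<Boolean> loginResult = new CompletableFuture<>();

    public CompletableFuture<Boolean> loginResult() {
        return loginResult;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // StringDecoder is already in the pipeline, so msg is a String here.
        String response = (String) msg;
        loginResult.complete("LOGIN_OK".equals(response));
    }
}

The calling code can then do boolean ok = handler.loginResult().get(5, TimeUnit.SECONDS); and branch on the result, instead of trying to make channelRead() itself return something.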

Can I trigger dataimport from Spring Solr integration

I am trying to write a quick class to trigger the data import on Solr. I know I could just use HttpClient, but I've already got Spring Data Solr configured, and it has the server configured, etc.
Is it possible to use the Query interface and the SolrTemplate to just send a request to the dataimport request handler with "command=full-import" as params?
How can I do that?
If you have access to SolrTemplate instance, you could execute a SolrCallback as follows:
solrTemplate.execute(new SolrCallback<Void>() {
@Override
public Void doInSolr(SolrServer solrServer) throws SolrServerException, IOException {
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("qt", "/dataimport");
params.set("command", "full-import");
solrServer.query(params);
return null;
}
});
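If you also want to poll the handler's progress afterwards, the same callback shape works with the status command (a sketch under the same Spring Data Solr / SolrJ versions as the snippet above):

QueryResponse status = solrTemplate.execute(new SolrCallback<QueryResponse>() {
    @Override
    public QueryResponse doInSolr(SolrServer solrServer) throws SolrServerException, IOException {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("qt", "/dataimport");
        params.set("command", "status");
        return solrServer.query(params);
    }
});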

Tess4J doOCR() for *First Page* of pdf / tif

Is there a way to tell Tess4J to OCR only a certain number of pages or characters?
I will potentially be working with 200+ page PDFs, but I really only want to OCR the first page, if that!
As far as I understand, the common sample
package net.sourceforge.tess4j.example;
import java.io.File;
import net.sourceforge.tess4j.*;
public class TesseractExample {
public static void main(String[] args) {
File imageFile = new File("eurotext.tif");
Tesseract instance = Tesseract.getInstance(); // JNA Interface Mapping
// Tesseract1 instance = new Tesseract1(); // JNA Direct Mapping
try {
String result = instance.doOCR(imageFile);
System.out.println(result);
} catch (TesseractException e) {
System.err.println(e.getMessage());
}
}
}
would attempt to OCR the entire 200+ page document into a single String.
For my particular case, that is way more than I need, and I'm worried it could take a very long time if I let it do all 200+ pages and then just substring the first 500 characters or so.
The library has a PdfUtilities class that can extract certain pages of a PDF.
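A minimal sketch of that approach (exact signatures and the package holding PdfUtilities vary between Tess4J releases, so treat this as an outline rather than a drop-in): split page 1 of the big PDF into its own one-page file, then OCR only that file.

import java.io.File;
import net.sourceforge.tess4j.*;
import net.sourceforge.tess4j.util.PdfUtilities;

public class FirstPageOcr {
    public static void main(String[] args) {
        try {
            // Copy only page 1 of the large PDF into a one-page PDF.
            File firstPage = new File("first-page.pdf");
            PdfUtilities.splitPdf(new File("big-document.pdf"), firstPage, 1, 1);
            // OCR just that single page.
            Tesseract instance = new Tesseract();
            String result = instance.doOCR(firstPage);
            System.out.println(result);
        } catch (TesseractException e) {
            System.err.println(e.getMessage());
        }
    }
}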

Bigquery: Extract Job does not create file

I am working on a Java application which uses BigQuery as the analytics engine. I was able to run query jobs (and get results) using the code on "Insert a Query Job", modified to use a service account per this comment on Stack Overflow.
Now I need to run an extract job to export a table to a bucket on Google Storage. Based on "Exporting a Table", I was able to modify the Java code to insert extract jobs (code below). When run, the extract job's status changes from PENDING to RUNNING to DONE. The problem is that no file is actually uploaded to the specified bucket.
Info that might be helpful:
The createAuthorizedClient function returns a Bigquery instance and works for query jobs, so there are probably no issues with the service account, private key, etc.
I also tried creating and running the insert job manually in Google's API explorer, and the file is successfully created in the bucket. I'm using the same values for project, dataset, table and destination URI as in the code, so these should be correct.
Here is the code (pasting the complete file in case somebody else finds this useful):
import java.io.File;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
import java.util.List;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson.JacksonFactory;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.Bigquery.Jobs.Insert;
import com.google.api.services.bigquery.BigqueryScopes;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationExtract;
import com.google.api.services.bigquery.model.JobReference;
import com.google.api.services.bigquery.model.TableReference;
public class BigQueryJavaGettingStarted {
private static final String PROJECT_ID = "123456789012";
private static final String DATASET_ID = "MY_DATASET_NAME";
private static final String TABLE_TO_EXPORT = "MY_TABLE_NAME";
private static final String SERVICE_ACCOUNT_ID = "123456789012-...@developer.gserviceaccount.com";
private static final File PRIVATE_KEY_FILE = new File("/path/to/privatekey.p12");
private static final String DESTINATION_URI = "gs://mybucket/file.csv";
private static final List<String> SCOPES = Arrays.asList(BigqueryScopes.BIGQUERY);
private static final HttpTransport TRANSPORT = new NetHttpTransport();
private static final JsonFactory JSON_FACTORY = new JacksonFactory();
public static void main (String[] args) {
try {
executeExtractJob();
} catch (Exception e) {
e.printStackTrace();
}
}
public static final void executeExtractJob() throws IOException, InterruptedException, GeneralSecurityException {
Bigquery bigquery = createAuthorizedClient();
//Create a new Extract job
Job job = new Job();
JobConfiguration config = new JobConfiguration();
JobConfigurationExtract extractConfig = new JobConfigurationExtract();
TableReference sourceTable = new TableReference();
sourceTable.setProjectId(PROJECT_ID).setDatasetId(DATASET_ID).setTableId(TABLE_TO_EXPORT);
extractConfig.setSourceTable(sourceTable);
extractConfig.setDestinationUri(DESTINATION_URI);
config.setExtract(extractConfig);
job.setConfiguration(config);
//Insert/Execute the created extract job
Insert insert = bigquery.jobs().insert(PROJECT_ID, job);
insert.setProjectId(PROJECT_ID);
JobReference jobId = insert.execute().getJobReference();
//Now check to see if the job has successfully completed (Optional for extract jobs?)
long startTime = System.currentTimeMillis();
long elapsedTime;
while (true) {
Job pollJob = bigquery.jobs().get(PROJECT_ID, jobId.getJobId()).execute();
elapsedTime = System.currentTimeMillis() - startTime;
System.out.format("Job status (%dms) %s: %s\n", elapsedTime, jobId.getJobId(), pollJob.getStatus().getState());
if (pollJob.getStatus().getState().equals("DONE")) {
break;
}
//Wait a second before rechecking job status
Thread.sleep(1000);
}
}
private static Bigquery createAuthorizedClient() throws GeneralSecurityException, IOException {
GoogleCredential credential = new GoogleCredential.Builder()
.setTransport(TRANSPORT)
.setJsonFactory(JSON_FACTORY)
.setServiceAccountScopes(SCOPES)
.setServiceAccountId(SERVICE_ACCOUNT_ID)
.setServiceAccountPrivateKeyFromP12File(PRIVATE_KEY_FILE)
.build();
return Bigquery.builder(TRANSPORT, JSON_FACTORY)
.setApplicationName("My Reports")
.setHttpRequestInitializer(credential)
.build();
}
}
Here is the output:
Job status (337ms) job_dc08f7327e3d48cc9b5ba708efe5b6b5: PENDING
...
Job status (9186ms) job_dc08f7327e3d48cc9b5ba708efe5b6b5: PENDING
Job status (10798ms) job_dc08f7327e3d48cc9b5ba708efe5b6b5: RUNNING
...
Job status (53952ms) job_dc08f7327e3d48cc9b5ba708efe5b6b5: RUNNING
Job status (55531ms) job_dc08f7327e3d48cc9b5ba708efe5b6b5: DONE
It is a small table (about 4 MB), so the job taking about a minute seems OK. I have no idea why no file is created in the bucket, or how to go about debugging this. Any help would be appreciated.
Update: as Craig pointed out, I printed the status.getErrorResult() and status.getErrors() values:
getErrorResults(): {"message":"Backend error. Job aborted.","reason":"internalError"}
getErrors(): null
It looks like there was an access denied error writing to the path: gs://pixalate_test/from_java.csv. Can you make sure that the user that was performing the export job has write access to the bucket (and that the file doesn't already exist)?
I've filed an internal BigQuery bug on this issue ... we should give a better error in this situation.
I believe the problem is with the bucket name you're using -- mybucket above is just an example, you need to replace that with a bucket you actually own in Google Storage. If you've never used GS before, the intro docs will help.
Your second question was how to debug this -- I'd recommend looking at the returned Job object once the status is set to DONE. Jobs that end in an error still make it to DONE state, the difference is that they have an error result attached, so job.getStatus().hasErrorResult() should be true. (I've never used the Java client libraries, so I'm guessing at that method name.) You can find more information in the jobs docs.
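In the generated Java client that check looks roughly like this, dropped into the polling loop from the question (getErrorResult() and getErrors() are the accessors on the JobStatus model class):

if (pollJob.getStatus().getState().equals("DONE")) {
    if (pollJob.getStatus().getErrorResult() != null) {
        // The job reached DONE, but with an error attached.
        System.out.println("Error result: " + pollJob.getStatus().getErrorResult());
        System.out.println("All errors: " + pollJob.getStatus().getErrors());
    }
    break;
}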
One more difference I notice is that you are not passing the job type, as in config.setJobType(JOB_TYPE);
where the constant is private static final String JOB_TYPE = "extract";
Also, for JSON output you need to set the destination format as well.
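For the format part, JobConfigurationExtract exposes a destination format setter; a one-line sketch (per the export docs, "CSV" is the default and "NEWLINE_DELIMITED_JSON" is the value for JSON):

extractConfig.setDestinationFormat("NEWLINE_DELIMITED_JSON");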
I had the same problem, but it turned out that I had typed the name of the table wrong. However, Google did not generate an error message saying "the table does not exist," which would have helped me locate the problem.
Thanks!

How do I create a custom directive for Apache Velocity

I am using Apache's Velocity templating engine, and I would like to create a custom Directive. That is, I want to be able to write "#doMyThing()" and have it invoke some java code I wrote in order to generate the text.
I know that I can register a custom directive by adding a line
userdirective=my.package.here.MyDirectiveName
to my velocity.properties file. And I know that I can write such a class by extending the Directive class. What I don't know is how to write that class; I can't find any documentation aimed at the author of a new directive. For instance, should my getType() method return BLOCK or LINE, and what should my setLocation() method do?
Is there any documentation out there that is better than just "Use the source, Luke"?
On the Velocity wiki, there's a presentation and sample code from a talk I gave called "Hacking Velocity". It includes an example of a custom directive.
I was also trying to come up with a custom directive. I couldn't find any documentation at all, so I looked at some user-created directives (IfNullDirective is a nice, easy one; MergeDirective is another) as well as Velocity's built-in directives.
Here is my simple block directive that returns compressed content (the complete project, with directive installation instructions, is located here):
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.exception.TemplateInitException;
import org.apache.velocity.runtime.RuntimeServices;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.Node;
import org.apache.velocity.runtime.log.Log;
import com.googlecode.htmlcompressor.compressor.HtmlCompressor;
/**
* Velocity directive that compresses an HTML content within #compressHtml ... #end block.
*/
public class HtmlCompressorDirective extends Directive {
private static final HtmlCompressor htmlCompressor = new HtmlCompressor();
private Log log;
public String getName() {
return "compressHtml";
}
public int getType() {
return BLOCK;
}
@Override
public void init(RuntimeServices rs, InternalContextAdapter context, Node node) throws TemplateInitException {
super.init(rs, context, node);
log = rs.getLog();
//set compressor properties
htmlCompressor.setEnabled(rs.getBoolean("userdirective.compressHtml.enabled", true));
htmlCompressor.setRemoveComments(rs.getBoolean("userdirective.compressHtml.removeComments", true));
}
public boolean render(InternalContextAdapter context, Writer writer, Node node)
throws IOException, ResourceNotFoundException, ParseErrorException, MethodInvocationException {
//render content to a variable
StringWriter content = new StringWriter();
node.jjtGetChild(0).render(context, content);
//compress
try {
writer.write(htmlCompressor.compress(content.toString()));
} catch (Exception e) {
writer.write(content.toString());
String msg = "Failed to compress content: "+content.toString();
log.error(msg, e);
throw new RuntimeException(msg, e);
}
return true;
}
}
Block directives always accept a body and must end with #end when used in a template. e.g. #foreach( $i in $foo ) this has a body! #end
Line directives do not have a body or an #end. e.g. #parse( 'foo.vtl' )
You don't need to bother with setLocation() at all. The parser uses that.
Any other specifics I can help with?
Also, have you considered using a "tool" approach? Even if you don't use VelocityTools to automatically make your tool available, you can just create a tool class that does what you want, put it in the context, and either call a method on it to generate content or have its toString() method generate the content, e.g. $tool.doMyThing() or just $myThing.
Directives are best for when you need to mess with Velocity internals (access to the InternalContextAdapter or actual Nodes).
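A minimal sketch of that tool approach (MyThingTool is a made-up name): the tool is plain Java, and you drop an instance into the context before merging the template.

import org.apache.velocity.VelocityContext;

public class MyThingTool {
    // Invoked from the template as $tool.doMyThing()
    public String doMyThing() {
        return "whatever text your Java code generates";
    }
}

// ...wherever you render the template:
VelocityContext context = new VelocityContext();
context.put("tool", new MyThingTool());

No velocity.properties entry, no Directive subclass, and the template stays parseable by any editor.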
Prior to Velocity 1.6 I had a #blockset($v)...#end directive to deal with a multiline #set($v), but that job is now handled by the #define directive.
Custom block directives are a pain with modern IDEs, because the IDE doesn't parse the structure correctly: it assumes the #end associated with #userBlockDirective is an extra one and paints the whole file red. They should be avoided if possible.
I copied something similar from the Velocity source code and created a "blockset" (multiline) directive.
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.RuntimeServices;
import org.apache.velocity.runtime.parser.node.Node;
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.TemplateInitException;
import java.io.Writer;
import java.io.IOException;
import java.io.StringWriter;
public class BlockSetDirective extends Directive {
private String blockKey;
/**
* Return name of this directive.
*/
public String getName() {
return "blockset";
}
/**
* Return type of this directive.
*/
public int getType() {
return BLOCK;
}
/**
* simple init - get the blockKey
*/
public void init( RuntimeServices rs, InternalContextAdapter context,
Node node )
throws TemplateInitException {
super.init( rs, context, node );
/*
* first token is the name of the block. I don't even check the format,
* just assume it looks like this: $block_name. Should check if it has
* a '$' or not like macros.
*/
blockKey = node.jjtGetChild( 0 ).getFirstToken().image.substring( 1 );
}
/**
* Renders node to internal string writer and stores in the context at the
* specified context variable
*/
public boolean render( InternalContextAdapter context, Writer writer,
Node node )
throws IOException, MethodInvocationException,
ResourceNotFoundException, ParseErrorException {
StringWriter sw = new StringWriter(256);
boolean b = node.jjtGetChild( 1 ).render( context, sw );
context.put( blockKey, sw.toString() );
return b;
}
}
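For reference, a template would use the directive above like this (a sketch): the body between #blockset and #end is rendered into the named variable instead of the output, so it can be reused later.

#blockset( $greeting )
Hello,
$name!
#end
The greeting was: $greeting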