First of all, thank you for the time you are taking to read my question.
I have a PDF file with a field that, when you fill it in, populates other parts of the PDF.
Basically, you enter your last name and it propagates to every part of the PDF that needs the last name.
I need to fill the last name from a PHP script that takes it from an HTML form, and I need to keep the auto-fill feature enabled.
The details of the last name field are:
FieldType: Text
FieldName: MCSA-5875[0].Page1[0].driverPersonal[0].nameLast[0]
FieldNameAlt: Enter the driver's last name.
FieldFlags: 0
FieldJustification: Left
The FDF file I created to fill the form is:
%FDF-1.2
%,,oe"
1 0 obj
<<
/FDF << /Fields [<</T(MCSA-5875[0].Page1[0].driverPersonal[0].nameLast[0])/V(Smith)>>"] >> >>
endobj
trailer
<</Root 1 0 R>>
%%EOF;
Where "Smith" is a sample last name.
When I run the following command (to fill the PDF form):
pdftk form.pdf fill_form output.fdf output output.pdf
I get the following error:
Unhandled Java Exception in create_output():
java.lang.ClassCastException: pdftk.com.lowagie.text.pdf.PdfLiteral cannot be cast to pdftk.com.lowagie.text.pdf.PdfDictionary
at 0x0059a84e (Unknown Source)
at 0x0059ad42 (Unknown Source)
at 0x005e9bd4 (Unknown Source)
at 0x005ba4a4 (Unknown Source)
at 0x005b2044 (Unknown Source)
at 0x0059231e (Unknown Source)
at 0x004721bd (Unknown Source)
at 0x00472562 (Unknown Source)
at 0x00472045 (Unknown Source)
at 0x004df3e2 (Unknown Source)
at 0x004df38a (Unknown Source)
at 0x00471e74 (Unknown Source)
Can you help me to find a solution to this problem?
Thanks in advance
Well, I did it! I just used the following open-source solution: https://github.com/Tadelsucht/BulkPDF
We filled the CSV template on demand using an HTML form and a PHP backend.
We then called the application through the command line from PHP.
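A rough sketch of that PHP side is below, for anyone with a similar setup. The form field name, CSV column, file paths, and especially the BulkPDF command line are placeholders/assumptions, not BulkPDF's actual interface; take the real arguments from the BulkPDF documentation and your own configuration.
<?php
// Write the value submitted by the HTML form into the CSV that BulkPDF reads.
// 'last_name', the CSV header, and all paths below are assumptions for illustration.
$lastName = isset($_POST['last_name']) ? $_POST['last_name'] : '';

$csvPath = '/tmp/bulkpdf-input.csv';
$fh = fopen($csvPath, 'w');
fputcsv($fh, array('nameLast'));   // header row expected by the BulkPDF template (assumed)
fputcsv($fh, array($lastName));    // one data row per PDF to fill
fclose($fh);

// Call BulkPDF through the command line. The executable name and arguments
// are placeholders only; use the ones from BulkPDF's own documentation.
$cmd = 'BulkPDF ' . escapeshellarg('/path/to/bulkpdf-config');
shell_exec($cmd);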
Related
I am working on a Spark project. I have a file in Parquet format, and when I try to load this file using Java it gives me the error below. But when I load the same file in Hive from the same path and run select * from table_name, it works fine and the data comes back properly. Please help me with this issue.
java.io.IOException: Could not read footer:
java.lang.RuntimeException: corrupted file: the footer index is not within the file
at org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:247)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$28.apply(ParquetRelation.scala:754)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$28.apply(ParquetRelation.scala:743)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: corrupted file: the footer index is not within the file
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:427)
at org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:237)
at org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:233)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
You can try the options below:
1) sqlContext.read.parquet("path")
2) sqlContext.read.format(fileFormat)
.option("header", header) // Use first line of all files as header
.option("inferSchema", inferSchema) // Automatically infer data types
.load(source)
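For example, option 1 in a minimal, self-contained form. Since the question mentions loading the file from Java, this sketch uses the Spark 1.x Java API; the app name, master, and file path are placeholders.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

// Minimal sketch using the Spark 1.x Java API: read the Parquet file and fail fast
// if its footer/schema cannot be read. App name, master, and path are placeholders.
public class ParquetReadCheck {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("parquet-read-check").setMaster("local[*]"); // local test only
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc.sc());

        DataFrame df = sqlContext.read().parquet("/path/to/file.parquet"); // placeholder path
        df.printSchema(); // reading the schema already requires a valid footer
        df.show(5);       // sample a few rows

        sc.stop();
    }
}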
If your issue still isn't resolved, please post a sample of your code.
I'm trying to index PDF files with Lucene 6.6.0 and PDFBox 2.0.7.
I'm getting the following errors. (EDITED)
run:
Indexing ke folder: 'D:\Kuliah\rancangan document indexing\dir-index\'...
Indexing PDF document: D:\Kuliah\rancangan document indexing\dir-pdf\dua.pdf
Exception in thread "main" java.lang.ExceptionInInitializerError
at tigasepuluh.Playground.indexDocs(Playground.java:110)
at tigasepuluh.Playground.indexDocs(Playground.java:88)
at tigasepuluh.Playground.main(Playground.java:65)
Caused by: java.lang.RuntimeException: Uncompilable source code - Erroneous sym type: org.apache.lucene.document.FieldType.setIndexed
at org.apache.pdfbox.examples.lucene.LucenePDFDocument.<clinit>(LucenePDFDocument.java:123)
... 3 more
C:\Users\abc\AppData\Local\NetBeans\Cache\8.2\executor-snippets\run.xml:53: Java returned: 1
BUILD FAILED (total time: 5 seconds)
And here is the GitHub link to my complete code.
Change this line in your copy of org.apache.pdfbox.examples.lucene.LucenePDFDocument:
TYPE_STORED_NOT_INDEXED.setIndexed(false);
to
TYPE_STORED_NOT_INDEXED.setIndexOptions(IndexOptions.NONE);
The problem you had is because the PDFBox example was written for Lucene 4.
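For context, here is a minimal sketch of how a stored-but-not-indexed FieldType can be set up against Lucene 6. The class name and the extra settings are my own assumptions, not a copy of the PDFBox example's code.
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

// Minimal sketch of a stored-but-not-indexed FieldType for Lucene 6.
// What Lucene 4 did with setIndexed(false) is now done with setIndexOptions(IndexOptions.NONE).
public class LuceneFieldTypes {
    public static final FieldType TYPE_STORED_NOT_INDEXED = new FieldType();
    static {
        TYPE_STORED_NOT_INDEXED.setStored(true);                    // keep the raw value retrievable from the index
        TYPE_STORED_NOT_INDEXED.setIndexOptions(IndexOptions.NONE); // do not index the field at all
        TYPE_STORED_NOT_INDEXED.setTokenized(false);                // assumption: the value is not analyzed
        TYPE_STORED_NOT_INDEXED.freeze();                           // lock the configuration against later changes
    }

    // Example use: new Field("path", pdfPath, TYPE_STORED_NOT_INDEXED)
}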
I have a directory which contains 10 files, and I want to remove the headers from the files in that directory. While executing this with piggybank, I am getting an error. Is there any other way to remove the header from all the files in a directory? My code is:
REGISTER /usr/lib/pig/piggybank.jar;
input = LOAD 'insurance_data' using CSVExcelStorage(
',','default','NOCHANGE','SKIP_INPUT_HEADER')
as (population:int, private:int,public:int,uninsecured:int);
dump input;
The error I am getting is:
2016-09-13 14:01:48,239 [main] ERROR org.apache.pig.PigServer - exception during parsing: Error during parsing. mismatched input 'input' expecting EOF
Failed to parse: mismatched input 'input' expecting EOF
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:241)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:179)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1688)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1421)
at org.apache.pig.PigServer.parseAndBuild(PigServer.java:354)
at org.apache.pig.PigServer.executeBatch(PigServer.java:379)
at org.apache.pig.PigServer.executeBatch(PigServer.java:365)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:769)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:613)
at org.apache.pig.Main.main(Main.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2016-09-13 14:01:48,250 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: mismatched input 'input' expecting EOF
Details at logfile: /home/cloudera/pig_1473800504430.log
'input' is a reserved keyword in Pig (see the reference). Change the name of the relation from 'input' to something else.
REGISTER /usr/lib/pig/piggybank.jar;
A = LOAD 'insurance_data' USING CSVExcelStorage(',','default','NOCHANGE','SKIP_INPUT_HEADER') as (population:int, private:int,public:int,uninsecured:int);
DUMP A;
I have a file stored in HDFS at this path: /user/hdfs/countries
(the file is in comma separated format).
To import this HDFS data into Pig, I ran the command below:
test = load ‘/ user/hdfs/countries’ using PigStorage(',') as (id:int, Name:chararray, Language:chararray);
where:
ID is the primary key column in the HDFS file, and
Name and Language are the other column names in the HDFS file.
I am getting the error below when I run the above Pig command:
Pig Stack Trace
ERROR 1200: <line 1, column 12> Unexpected character ''
Failed to parse: <line 1, column 12> Unexpected character ''
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:243)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:179)
at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1648)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1621)
at org.apache.pig.PigServer.registerQuery(PigServer.java:575)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1093)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:541)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Can someone please help me with this? Is my command incorrect, or is a jar file missing?
Thank you in advance!
The error tells you exactly where the problem is: the ‘ should be replaced by ', which is not the same character.
Also, the space after the / in the path looks suspicious.
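With both of those fixed, the statement from the question (path and schema taken from the question itself) would look like this:
test = LOAD '/user/hdfs/countries' USING PigStorage(',') AS (id:int, Name:chararray, Language:chararray);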
I am trying to run this command in the Pig environment:
grunt> A = LOAD inp;
But I am getting this error in the log files:
Pig Stack Trace:
ERROR 1200: mismatched input 'inp' expecting QUOTEDSTRING
Failed to parse: mismatched input 'inp' expecting QUOTEDSTRING
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:226)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:168)
at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1565)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1538)
at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:490)
at org.apache.pig.Main.main(Main.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
And in the console I am getting this:
grunt> A = LOAD inp;
2012-10-26 12:18:34,627 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: mismatched input 'inp' expecting QUOTEDSTRING
Details at logfile: /usr/local/hadoop/pig_1351232517175.log
Can anybody provide me with an appropriate solution for this?
The LOAD syntax has been used incorrectly. Check out the correct example, taken from the documentation linked below.
http://pig.apache.org/docs/r0.7.0/piglatin_ref2.html#LOAD
Suppose we have a data file called myfile.txt. The fields are tab-delimited. The records are newline-separated.
1 2 3
4 2 1
8 3 4
In this example the default load function, PigStorage, loads data from myfile.txt to form relation A. The two LOAD statements are equivalent. Note that, because no schema is specified, the fields are not named and all fields default to type bytearray.
A = LOAD 'myfile.txt';
A = LOAD 'myfile.txt' USING PigStorage('\t');
DUMP A;
(1,2,3)
(4,2,1)
(8,3,4)
Example from http://pig.apache.org/docs
I believe the error log is self-explanatory; it says: expecting QUOTEDSTRING.
Please put the file name in single quotes to solve this issue.
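For example, applying that to the statement from the question (assuming inp is meant to be the file name):
A = LOAD 'inp';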