What is the replacement for PDDocument.silentPrint() in PDFBox version 2.0.0 and above?

I am switching to PDFBox version 2.0.0 and want to know: what is the replacement for PDDocument.silentPrint() in version 2.0.0 and above?

The method PDDocument.silentPrint() mentioned by the OP effectively did something like
PrinterJob job = PrinterJob.getPrinterJob();
job.setPageable(new PDPageable(this, job));
job.print();
According to the PDFBox 2.0 Migration Guide:
PDF Printing
With PDFBox 2.0.0 PDFPrinter has been removed.
Users of PDFPrinter.silentPrint() should now use this code:
PrinterJob job = PrinterJob.getPrinterJob();
job.setPageable(new PDFPageable(document));
job.print();
While users of PDFPrinter.print() should now use this code:
PrinterJob job = PrinterJob.getPrinterJob();
job.setPageable(new PDFPageable(document));
if (job.printDialog()) {
job.print();
}
Advanced use case examples can be found in the examples package under org/apache/pdfbox/examples/printing/Printing.java
Thus, for a PDDocument document, the replacement for the 1.8.x
document.silentPrint();
should be the 2.0.x
PrinterJob job = PrinterJob.getPrinterJob();
job.setPageable(new PDFPageable(document));
job.print();
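For completeness, here is a minimal end-to-end sketch of the 2.0.x approach (assuming a PDFBox 2.0.x jar on the classpath; "sample.pdf" is a placeholder path):

```java
import java.awt.print.PrinterException;
import java.awt.print.PrinterJob;
import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.printing.PDFPageable;

public class SilentPrintExample {
    public static void main(String[] args) throws IOException, PrinterException {
        // try-with-resources closes the document once printing is done
        try (PDDocument document = PDDocument.load(new File("sample.pdf"))) {
            PrinterJob job = PrinterJob.getPrinterJob();
            job.setPageable(new PDFPageable(document));
            // prints to the default printer without showing a dialog,
            // matching the old silentPrint() behavior
            job.print();
        }
    }
}
```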

Related

How can I parse info from kotlin docs to swagger-ui?

I need to parse Kotlin docs (not Swagger annotations) for swagger-ui.
I tried this, but it doesn't work.
Here are my springdoc dependencies (springdocVersion = "1.6.6"). By the way, I can't use therapi version 0.13.0, if that's important.
runtimeOnly("org.springdoc:springdoc-openapi-kotlin:$springdocVersion")
implementation("org.springdoc:springdoc-openapi-ui:$springdocVersion")
implementation("org.springdoc:springdoc-openapi-webflux-ui:$springdocVersion")
implementation("org.springdoc:springdoc-openapi-javadoc:$springdocVersion")
annotationProcessor("com.github.therapi:therapi-runtime-javadoc-scribe:0.12.0")
implementation("com.github.therapi:therapi-runtime-javadoc:0.12.0")
After I replaced annotationProcessor("com.github.therapi:therapi-runtime-javadoc-scribe:0.12.0") with kapt("com.github.therapi:therapi-runtime-javadoc-scribe:0.12.0"), all worked well!
An example of the build file can be found here
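For reference, a minimal build.gradle.kts sketch of the working setup (the Kotlin/kapt plugin version is an assumption; dependency versions are taken from the question):

```kotlin
plugins {
    kotlin("jvm") version "1.6.10"  // assumed Kotlin version
    kotlin("kapt") version "1.6.10" // kapt is what lets therapi's processor see Kotlin sources
}

val springdocVersion = "1.6.6"

dependencies {
    runtimeOnly("org.springdoc:springdoc-openapi-kotlin:$springdocVersion")
    implementation("org.springdoc:springdoc-openapi-ui:$springdocVersion")
    implementation("org.springdoc:springdoc-openapi-webflux-ui:$springdocVersion")
    implementation("org.springdoc:springdoc-openapi-javadoc:$springdocVersion")
    // kapt(...) instead of annotationProcessor(...) is the change that made it work
    kapt("com.github.therapi:therapi-runtime-javadoc-scribe:0.12.0")
    implementation("com.github.therapi:therapi-runtime-javadoc:0.12.0")
}
```

annotationProcessor only runs processors against Java sources, so on a pure-Kotlin codebase the therapi scribe never ran; kapt wires it into the Kotlin compilation instead.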

How to use a compass lucene generated cfs index?

With (the latest) Lucene 8.7, is it possible to open, with the Lucene utility "Luke", a .cfs compound index file generated around 2009 by Lucene 2.2 in a legacy application that I cannot modify?
Or, alternatively, could it be possible to generate the .idx file for Luke from the .cfs?
The .cfs was generated by Compass on top of Lucene 2.2, not by Lucene directly.
Is it possible to use a Compass-generated index containing:
_b.cfs
segments.gen
segments_d
possibly with Solr?
Are there any examples anywhere of how to open a file-based .cfs index with Compass?
The conversion tool won't work because the index version is too old. From lucene\build\demo:
java -cp ../core/lucene-core-8.7.0-SNAPSHOT.jar;../backward-codecs/lucene-backward-codecs-8.7.0-SNAPSHOT.jar org.apache.lucene.index.IndexUpgrader -verbose path_of_old_index
and the SearchFiles demo:
java -classpath ../core/lucene-core-8.7.0-SNAPSHOT.jar;../queryparser/lucene-queryparser-8.7.0-SNAPSHOT.jar;./lucene-demo-8.7.0-SNAPSHOT.jar org.apache.lucene.demo.SearchFiles -index path_of_old_index
Both fail with:
org.apache.lucene.index.IndexFormatTooOldException: Format version is not supported
This version of Lucene only supports indexes created with release 6.0 and later.
Is it possible to use an old index with Lucene somehow? How can the old "codec" be used?
Also, is it possible from Lucene.Net?
Current Lucene 8.7 yields an index containing these files:
segments_1
write.lock
_0.cfe
_0.cfs
_0.si
==========================================================================
Update: amazingly, it seems that Lucene.Net v3.0.3 from NuGet can open that very old format index!
This seems to work in order to extract all terms from the index:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Globalization;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            var reader = IndexReader.Open(FSDirectory.Open("C:\\Temp\\ftsemib_opzioni\\v210126135604\\index\\search_0"), true);
            Console.WriteLine("number of documents: " + reader.NumDocs() + "\n");
            Console.ReadLine();

            TermEnum terms = reader.Terms();
            while (terms.Next())
            {
                Term term = terms.Term;
                String termField = term.Field;
                String termText = term.Text;
                int frequency = reader.DocFreq(term);
                Console.WriteLine(termField + " " + termText);
            }

            var fieldNames = reader.GetFieldNames(IndexReader.FieldOption.ALL);
            int numFields = fieldNames.Count;
            Console.WriteLine("number of fields: " + numFields + "\n");
            for (IEnumerator<String> iter = fieldNames.GetEnumerator(); iter.MoveNext();)
            {
                String fieldName = iter.Current;
                Console.WriteLine("field: " + fieldName);
            }

            reader.Close();
            Console.ReadLine();
        }
    }
}
Out of curiosity, is it possible to find out which index version it is?
Are there any examples of (old) Compass with a file-system-based index?
Unfortunately you can't use an old Codec to access index files from Lucene 2.2. This is because codecs were introduced in Lucene 4.0. Prior to that the code for reading and writing files of the index was not grouped together into a codec but rather was just inherently part of the overall Lucene Library.
So in versions of Lucene prior to 4.0 there is no codec, just file reading and writing code baked into the library. It would be very difficult to track down all that code and create a codec that could be plugged into a modern version of Lucene. It's not an impossible task, but it would require an expert Lucene developer and a large amount of effort (i.e. an extremely expensive endeavor).
In light of all that, the answer to this SO question may be of some use: How to upgrade lucene files from 2.2 to 4.3.1
Update
Your best bet would be to use an old 3.x copy of Java Lucene or Lucene.Net v3.0.3 to open the index, then add and commit one document (which will create a second segment) and run an Optimize, which will cause the two segments to be merged into one new segment. The new segment will be a version 3 segment. Then you can use Lucene.Net 4.8 Beta or a Java Lucene 4.x to do the same thing again (note that Optimize was renamed ForceMerge starting in version 4) to convert the index to a 4.x index.
Then you can use the current Java version of Lucene 8.x to do this once more to move the index all the way up to 8, since the current version of Java Lucene ships codecs reaching all the way back to 5.0; see: https://github.com/apache/lucene-solr/tree/master/lucene/core/src/java/org/apache/lucene/codecs
However if you do receive the error again that you reported:
This version of Lucene only supports indexes created with release 6.0 and later.
then you will have to play this game one more cycle with a version 6.x Java Lucene to get from a 5.x index to a 6.x index. :-)
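A single hop of that upgrade chain could be sketched roughly like this (untested; written against a Lucene 6.x-style API and assuming the index is already readable by that version; the path and field name are placeholders):

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class UpgradeHop {
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("path_of_old_index"))) {
            IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
            try (IndexWriter writer = new IndexWriter(dir, cfg)) {
                // add a dummy document so a segment in the new format is created
                Document doc = new Document();
                doc.add(new StringField("upgrade_marker", "dummy", Field.Store.NO));
                writer.addDocument(doc);
                // merge everything into one segment, rewriting it in this version's format
                writer.forceMerge(1);
            }
        }
    }
}
```

Repeating this with each newer major version walks the index forward one format at a time. (For hops that are already within a release's supported back-compat window, org.apache.lucene.index.IndexUpgrader does the same job in one command.)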

Will itext 2.1.7 version pdf support XMLWorker for converting the html to pdf?

I am using iText (lowagie) version 2.1.7 for generating a PDF from an HTML file. I tried xmlworker:5.5.3, but it is not compatible with lowagie 2.1.7. The error message shown is
No signature of method: com.itextpdf.tool.xml.XMLWorkerHelper.parseXHtml() is applicable for argument types: (com.lowagie.text.pdf.PdfWriter, com.lowagie.text.Document, java.io.InputStreamReader) values: [com.lowagie.text.pdf.PdfWriter#331801, com.lowagie.text.Document#1a6ce9c1, ...] Possible solutions: parseXHtml(com.itextpdf.tool.xml.ElementHandler, java.io.InputStream), parseXHtml(com.itextpdf.tool.xml.ElementHandler, java.io.Reader), parseXHtml(com.itextpdf.text.pdf.PdfWriter, com.itextpdf.text.Document, java.io.InputStream), parseXHtml(com.itextpdf.text.pdf.PdfWriter, com.itextpdf.text.Document, java.io.Reader)
What is the solution for this, or is there an alternative way to convert HTML to PDF using iText 2.1.7?
You can use HTMLWorker in iText 2.1.7, as an alternative:
import com.lowagie.text.Document;
import com.lowagie.text.DocumentException;
import com.lowagie.text.PageSize;
import com.lowagie.text.html.simpleparser.HTMLWorker;
import com.lowagie.text.pdf.PdfWriter;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.io.StringReader;
...
Document document = new Document();
OutputStream file = new FileOutputStream("path/to/generatedfile.pdf");
PdfWriter writer = PdfWriter.getInstance(document, file);
document.open();
HTMLWorker htmlWorker = new HTMLWorker(document);
htmlWorker.parse(new StringReader("<html>...</html>"));
document.close();
I hope this works for you.
XML Worker 5.5.4 (and previous versions) will only work with the corresponding iText version. There is no way to use XML Worker with iText 2.1.7, nor will there ever be a way to do this. It wouldn't be a good investment to create an XML Worker add-on for iText 2.1.7 as iText 2.1.7 should no longer be used in business. This is explained in the answer to this question: https://stackoverflow.com/questions/25696851/can-itext-2-1-7-or-earlier-can-be-used-commercially
That's a question that made it into the book "The Best iText Questions on StackOverflow" of which the first 17 pages were published a moment ago.

itextpdf-5.4.3 throws com.itextpdf.text.pdf.parser.InlineImageUtils$InlineImageParseException: EI not found after end of image data

I am receiving the EI not found error in this specific pdf found under https://bfs.ever-team.com/files/6fce4cef9769e40d1994e684a881d4bf/facture3_1.pdf.
I am using itextpdf-5.4.3 jar and below is the code:
com.itextpdf.awt.geom.Rectangle rec = new com.itextpdf.awt.geom.Rectangle(307, 728, 742, 400);
RenderFilter filter = new RegionTextRenderFilter(rec);
TextExtractionStrategy strategy = new FilteredTextRenderListener(new LocationTextExtractionStrategy(), filter);
String currentText = PdfTextExtractor.getTextFromPage(reader, i , strategy);
Method getTextFromPage is returning the error,
I checked other threads where it was mentioned that this error should be fixed in the latest jar, but it seems it is not working for my file (facture3_1.pdf).
Can anyone advise, please?
A crosspost of this question has been answered on the iText mailing list. To close the question here, too, that answer is copied here:
The issue can be reproduced with iText 5.4.3 but not with the current development snapshot. The OP, therefore, should update his iText version.
InlineImageParseException: EI not found after end of image data
EI denotes the end of an inline image. The handling of inline images is tricky and not strictly well-defined. iText recently improved its handling of inline images to correctly parse more PDFs with such inline images.

Using Apache Sling's Scala 2.8 Script Engine

I've been trying to use Apache Sling's Scala 2.8 Script Engine recently updated last month. I came from using Scala 2.7 along with Sling's Scala 2.7 Script Engine and that worked great. I run into a problem when I try to use the new implementation. When calling ScalaScriptEngine's eval function I always receive an "Error executing script" due to a NullPointerException. Has anyone else worked with the new build of the script engine and run into this as well?
Thanks!
Steven
There is a bug which prevents the Scala scripting engine from being used standalone. See https://issues.apache.org/jira/browse/SLING-1877 for details and a patch.
Also note that, with the patch applied, you still need to set the class path when using the scripting engine. This is a change from 2.7.7, where the default Java class path (i.e. java.class.path) was used automatically. In 2.8 you have to set it explicitly through the '-usejavacp' argument.
Here is some sample code demonstrating the standalone usage of the Scala scripting engine:
def testScalaScriptEngine() {
  val scriptEngineFactory = new ScalaScriptEngineFactory
  val settings = new ScalaSettings()
  settings.parse("-usejavacp")
  scriptEngineFactory.getSettingsProvider.setScalaSettings(settings)
  val scriptEngine = scriptEngineFactory.getScriptEngine

  val script = """
    package script {
      class Demo(args: DemoArgs) {
        println("Hello")
      }
    }
  """

  scriptEngine.getContext.setAttribute("scala.script.class", "script.Demo", ScriptContext.ENGINE_SCOPE)
  scriptEngine.eval(script)
}