Adjust figure size in Scilab with xs2pdf function - pdf

I have written some Scilab code that generates a matrix. It is a function whose argument is a vector of two positive integers and which returns a matrix whose dimensions are given by those integers, filled according to some algorithm. The function also exports the matrix to a figure in LaTeX style, thanks to the prettyprint function.
I would like that figure to be exported to a PDF file, for which I used the function xs2pdf. It almost works. The problem is that, in its intended use, the function generates a matrix of around 40x40, which never fits on the page. It seems to me that the PDF document created is not even A4.
I didn't include the entire code; all you need to know is that the code generates a matrix named z, and then I have the lines:
//just for this post
z=rand(40,40)
//export to figure
A=prettyprint(z) ;
clf ;
xstring(0,0,A) ;
//export to PDF
xs2pdf(0, '_path_to_pdf_file') ;
The matrix z is created here in order to simulate the matrix that my programme actually generates. If you run this code, having filled in the '_path_to_pdf_file' bit, do you get a decent PDF output?

I could reproduce the same problem. Sometimes the PDF output is not even generated, and Scilab returns an error.
One workaround is to make Scilab create a new TeX file and compile it with pdflatex outside Scilab. The good part is that you can run everything from the same Scilab script. Of course, you'll need a LaTeX distribution installed.
r = 40; c = 40;
z = rand(r,c);
A = prettyprint(z) ;
texfile = "\documentclass{standalone}" + ...
"\usepackage{graphics}" + ...
"\usepackage{amsmath}" + ...
"\setcounter{MaxMatrixCols}{"+ string(c) +"}" + ...
"\begin{document}" + ...
A + ...
"\end{document}"
filename = "matrix.tex";
write(filename,texfile) //write() cannot overwrite a file
dos("pdflatex " + filename) //use unix() instead of dos() in case you're not on Windows
I don't know if you have any knowledge of LaTeX, so I should make a few notes:
The output goes to current Scilab directory. All auxiliary files produced by LaTeX will also be created there.
It uses the standalone class, which crops the PDF output exactly to whatever is described in the .tex file. In this case, only the matrix is printed, with no margins. To use this class, you need the standalone package for LaTeX.
prettyprint() outputs the matrix using the pmatrix environment, which requires the amsmath package, so you need that one installed too.
The line \setcounter{MaxMatrixCols}{c} is needed in case you have a matrix with more than 10 columns.
Here is the output:

Related

Crop PDF Content

I have a pdf that I would like to impose. It has 8.5x11" pages (media box and crop box alike). I want the pdf to have 17x11" pages, made by merging adjacent pages. Unfortunately, most pages have content that lies completely outside, or straddles, the crop box. Because each page can only have a single content stream and crop box, the overlapping content becomes visible once the pages are imposed. This is bad.
I don't want to rasterize my pdf because that would fix the DPI ahead-of-time. So I won't consider exporting pages as images, appending the images (imagemagick), then embedding these paired images into a new pdf.
I've also had problems imposing in postscript - issues with transparency, font rasterization, and other visual glitches during the pdf->ps->pdf conversions.
The answer should be scriptable.
So far I've tried:
podofo imposition scripts (lua)
PyPDF2 (python)
ghostscript
latex
The question "Ghostscript removes content outside the crop box?" suggests that ghostscript's pdfwrite module, when generating an output pdf file, will rasterize and crop content according to the crop box. So I'd only have to pipe my pdf through ghostscript's pdfwrite module. Unfortunately, this doesn't work.
I was about to give up when I tried printing the pdf to another pdf through evince. It works perfectly - text & vector elements within the crop box are not rasterized, and elements outside the crop box are removed (I haven't tested straddling elements yet). The quality is high - resolution (page size) and appearance are identical. In fact, everything seems to be the same except for the metadata.
So:
the question is possible
the answer already exists
How can I access it?
I think this functionality might be provided by CUPS's pdftopdf binary. I don't have any problem calling an external binary, but I can't figure out how to use pdftopdf.
Edit: Link to test pdf. It contains raster, vector, and text items - some partially occluded by partially transparent items - that span as well as abut adjacent pages. Once again, printing this PDF through cups appears to crop all content outside the crop box. However, opening the filtered pdf in inkscape shows that the off-page items are individually masked, not cropped - except text, which is trimmed.
The trick is to use Form XObjects to impose multiple pages within a single page. Form XObjects can reference entire PDF pages and maintain independent clips. PyPDF2 doesn't support Form XObjects, so merging with it unifies the streams of all input pages such that they share the clip/media box of the output page. I've been successful using both pdflatex and pdfrw (Python); test programs are inlined below. Since Form XObjects are derived from a similar PostScript level 2 feature, as suggested by KenS it should be possible to achieve the same goal in Ghostscript using "page clips". In fact he shared a Ghostscript 2x1 imposition script in another answer, but it appears horrendously complicated. Combined with the font rasterization issues of poppler's pdftops (even with compatibility level > 1.4), I've abandoned the Ghostscript approach.
LaTeX script derived from How to stitch two PDF pages together as one big page?. Requires pdflatex:
\documentclass{article}
\usepackage{pdfpages}
\usepackage[paperwidth=8.5in, paperheight=11in]{geometry}
\usepackage[multidot]{grffile}
\pagestyle{plain}
\begin{document}
\setlength\voffset{+0.0in}
\setlength\hoffset{+0.0in}
\includepdf[ noautoscale=true
, frame=false
, pages={1}
]
{<file.pdf>}
\eject \paperwidth=17in \pdfpagewidth=17in \paperheight=11in \pdfpageheight=11in
\includepdf[ nup=2x1
, noautoscale=true
, frame=false
, pages={2-,}
]
{<file.pdf>}
\end{document}
pdfrw (python script) derived from pdfrw:examples:booklet. Requires pdfrw >= 0.2:
#!/usr/bin/env python3
# Copyright:
#   Yclept Nemo
#   2016
# License:
#   GPLv3

import itertools
import argparse
import pdfrw


# from the itertools recipes in the python documentation
def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)


def pagemerge(page, *pages):
    merged = pdfrw.PageMerge() + page
    for page in reversed(list(itertools.takewhile(lambda i: i is not None, reversed(pages)))):
        merged = merged + page
        merged[-1].x = merged[-2].x + merged[-2].w
    return merged.render()


parser = argparse.ArgumentParser(description='Impose PDF files using Form XObjects')
parser.add_argument\
    ( "source"
    , help="PDF, source path"
    , type=pdfrw.PdfReader
    )
parser.add_argument\
    ( "-s", "--spacer"
    , help="PDF, spacer path"
    , type=lambda fp: next(iter(pdfrw.PdfReader(fp).pages), None)
    )
parser.add_argument\
    ( "target"
    , help="PDF, target path"
    )
args = parser.parse_args()

pages = args.source.pages[:1]
for pair in grouper(args.source.pages[1:], 2):
    assert pair[0] is not None
    pages.append(pagemerge(pair[0], args.spacer, pair[1]))

# include metadata in target
target = pdfrw.PdfWriter()
target.addpages(pages)
target.trailer.Info = args.source.Info
target.write(args.target)
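Called as, for instance, ./impose.py -s spacer.pdf input.pdf output.pdf, the script keeps page 1 of the source as-is and merges the remaining pages two at a time side by side, inserting the optional spacer page between the two pages of each pair when one is given.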
Some idiosyncrasies as of pdfrw 0.2:
Note that the operations +=, append and extend are not defined for pdfrw.PageMerge, even though it behaves like a list. Furthermore + acts like += in that it modifies the left-hand-side object.
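A small illustration of that behaviour (a sketch only; "input.pdf" is a placeholder for any PDF with at least two pages):
# Sketch of the pdfrw 0.2 behaviour described above: PageMerge offers no
# +=/append/extend, and "+" mutates its left operand, so the reassignments
# below accumulate pages in place.
import pdfrw

reader = pdfrw.PdfReader("input.pdf")           # placeholder file name
merged = pdfrw.PageMerge() + reader.pages[0]    # "+" returns the (mutated) PageMerge
merged = merged + reader.pages[1]               # again modifies "merged" in place
merged[-1].x = merged[-2].x + merged[-2].w      # put the second page to the right
page = merged.render()                          # a single imposed page object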
Ghostscript and the pdfwrite device do not, in general, rasterise the content of input PDF files (the caveat is for cases involving transparent input and the output being < PDF 1.4).
Objects which are entirely clipped out are not preserved in the output.
So the short answer is that this should be entirely feasible using Ghostscript and the pdfwrite device, with the advantage that it's possible to impose the pages as well in a single operation. I do have an open bug report about clipping in a similar situation (reverse imposition) but have not yet had time to address it.
Note that Ghostscript normally uses the MediaBox for the clip region, if you want to use the CropBox then you need to add -dUseCropBox to the command line.
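For reference, a minimal sketch of the kind of invocation this implies (file names are placeholders, and gs is assumed to be on the PATH), written as a Python call for consistency with the other scripts in this thread:
# Re-write a PDF through Ghostscript's pdfwrite device, clipping to the CropBox
# instead of the default MediaBox. File names are placeholders.
import subprocess

subprocess.run([
    "gs", "-o", "cropped.pdf",      # -o sets the output file and implies -dBATCH -dNOPAUSE
    "-sDEVICE=pdfwrite",            # produce PDF output rather than a raster format
    "-dUseCropBox",                 # use the CropBox as the clip region
    "input.pdf",
], check=True)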

Convert PDF text into outlines?

Does anybody know a way to vectorize the text in a PDF document? That is, I want each letter to be a shape/outline, without any textual content. I'm using a Linux system, and open source or a non-Windows solution would be preferred.
The context: I'm trying to edit some old PDFs, for which I no longer have the fonts. I'd like to do it in Inkscape, but that will replace all the fonts with generic ones, and that's barely readable. I've also been converting back and forth using pdf2ps and ps2pdf, but the font info stays there. So when I load it into Inkscape, it still looks awful.
Any ideas? Thanks.
To achieve this, you will have to:
Split your PDF into individual pages;
Convert your PDF pages into SVG;
Edit the pages you want;
Reassemble the pages.
This answer will omit step 3, since that's not programmable.
Splitting the PDF
If you don't need a programmatic way to split documents, the modern approach would be to use stapler. In your favorite shell:
stapler burst file.pdf
would generate {file_1.pdf,...,file_N.pdf}, where 1...N are the PDF pages. Stapler itself uses PyPDF2, and the code for splitting a PDF file is not that complex. The following function splits a file and saves the individual pages in the current directory (shamelessly copied from stapler's commands.py file):
import math
import os

from PyPDF2 import PdfFileWriter, PdfFileReader


def split(filename):
    # PDF files must be opened in binary mode
    with open(filename, 'rb') as inputfp:
        inputpdf = PdfFileReader(inputfp)

        base, ext = os.path.splitext(os.path.basename(filename))
        # Prefix the output template with zeros so that ordering is preserved
        # (page 10 after page 09)
        output_template = ''.join([
            base,
            '_',
            '%0',
            str(math.ceil(math.log10(inputpdf.getNumPages()))),
            'd',
            ext
        ])

        for page in range(inputpdf.getNumPages()):
            outputpdf = PdfFileWriter()
            outputpdf.addPage(inputpdf.getPage(page))

            outputname = output_template % (page + 1)
            with open(outputname, 'wb') as fp:
                outputpdf.write(fp)
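For example, split('book.pdf') on a 12-page document writes book_01.pdf through book_12.pdf to the current directory.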
Converting the individual pages to SVG
Now to convert the PDFs to editable files, I'd probably use pdf2svg.
pdf2svg input.pdf output.svg
If we take a look at the pdf2svg.c file, we can see that the code in principle is not that complex (assuming the input filename is in the filename variable and the output file name is in the outputname variable). A minimal working example in python follows. It requires the pycairo and pypoppler libraries:
import os

import cairo
import poppler


def convert(inputname, outputname):
    # Convert the input file name to an URI to please poppler
    uri = 'file://' + os.path.abspath(inputname)
    pdffile = poppler.document_new_from_file(uri, None)

    # We only have one page, since we split prior to converting. Get the page
    page = pdffile.get_page(0)

    # Get the page dimensions
    width, height = page.get_size()

    # Open the SVG file to write on
    surface = cairo.SVGSurface(outputname, width, height)
    context = cairo.Context(surface)

    # Now we finally can render the PDF to SVG
    page.render_for_printing(context)
    context.show_page()

    # Flush and close the SVG output
    surface.finish()
At this point you should have an SVG in which all text has been converted to paths, and will be able to edit with Inkscape without rendering issues.
Combining steps 1 and 2
You can call pdf2svg in a for loop to do that. But then you would need to know the number of pages beforehand. The code below figures out the number of pages and does the conversion in a single step. It requires only pycairo and pypoppler:
import os, math

import cairo
import poppler


def convert(inputname, base=None):
    '''Converts a multi-page PDF to multiple SVG files.

    :param inputname: Name of the PDF to be converted
    :param base: Base name for the SVG files (optional)
    '''
    if base is None:
        base, ext = os.path.splitext(os.path.basename(inputname))

    # Convert the input file name to an URI to please poppler
    uri = 'file://' + os.path.abspath(inputname)
    pdffile = poppler.document_new_from_file(uri, None)
    pages = pdffile.get_n_pages()

    # Prefix the output template with zeros so that ordering is preserved
    # (page 10 after page 09)
    output_template = ''.join([
        base,
        '_',
        '%0',
        str(math.ceil(math.log10(pages))),
        'd',
        '.svg'
    ])

    # Iterate over all pages
    for nthpage in range(pages):
        page = pdffile.get_page(nthpage)

        # Output file name based on template
        outputname = output_template % (nthpage + 1)

        # Get the page dimensions
        width, height = page.get_size()

        # Open the SVG file to write on
        surface = cairo.SVGSurface(outputname, width, height)
        context = cairo.Context(surface)

        # Now we finally can render the PDF to SVG
        page.render_for_printing(context)
        context.show_page()

        # Free some memory
        surface.finish()
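Calling, say, convert('book.pdf') on a 12-page file then produces book_01.svg through book_12.svg in the current directory.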
Assembling the SVGs into a single PDF
To reassemble, you can use the pair inkscape / stapler to convert the files manually. But it is not hard to write code that does this. The code below uses rsvg and cairo to convert the SVGs and merge everything into a single PDF:
import rsvg
import cairo


def convert_merge(inputfiles, outputname):
    # We have to create a PDF surface and inform a size. The size is
    # irrelevant, though, as we will define the sizes of each page
    # individually.
    outputsurface = cairo.PDFSurface(outputname, 1, 1)
    outputcontext = cairo.Context(outputsurface)

    for inputfile in inputfiles:
        # Open the SVG
        svg = rsvg.Handle(file=inputfile)

        # Set the size of the page itself
        outputsurface.set_size(svg.props.width, svg.props.height)

        # Draw on the PDF
        svg.render_cairo(outputcontext)

        # Finish the page and start a new one
        outputcontext.show_page()

    # Free some memory
    outputsurface.finish()
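Assuming the pages were converted with the zero-padded naming scheme above, one possible way to drive this (the glob pattern and output name are placeholders) is:
import glob

# The zero-padded names sort correctly, so sorted() restores the page order
convert_merge(sorted(glob.glob('book_*.svg')), 'book_edited.pdf')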
PS: It should be possible to use the command pdftocairo, but it doesn't seem to call render_for_printing(), which makes the output SVG maintain the font information.
I'm afraid that to vectorize the PDFs you would still need the original fonts (or a lot of work).
Some possibilities that come to mind:
dump the uncompressed PDF with pdftk and discover what the font names are, then look for them on FontMonster or other font service.
use some online font recognition service to get a close match with your font, in order to preserve kerning (I guess kerning and alignment are what's making your text unreadable)
try replacing the fonts manually (again using pdftk, converting the PDF to one that is editable with sed; this editing will break the PDF, but pdftk will then be able to recompress the damaged PDF into a usable one). A rough sketch of that round-trip follows below.
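The sketch (pdftk assumed installed; file names are placeholders), driven from Python like the other scripts here:
# Decompress the PDF so its content streams become plain text, edit them with
# external tools (e.g. sed), then rebuild a usable, compressed file with pdftk.
import subprocess

subprocess.run(["pdftk", "input.pdf", "output", "uncompressed.pdf", "uncompress"], check=True)
# ... edit uncompressed.pdf here, e.g. replacing font names with sed ...
subprocess.run(["pdftk", "uncompressed.pdf", "output", "repaired.pdf", "compress"], check=True)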
Here's what you really want - font substitution. You want some code/app to be able to go through the file and make appropriate changes to the embedded fonts.
This task is doable and is anywhere from easy to non-trivial. It's easy when you have a font that matches the metrics of the font in the file and the encoding used for the font is sane. You could probably do this with iText or DotPdf (the latter is not free beyond the evaluation, and is my company's product). If you modified pdf2ps, you could probably manage changing the fonts on the way through too.
If the fonts used in the file are font subsets that have creative reencoding, then you are in hell and will likely have all manner of pain doing the change. Here's why:
PostScript was designed at a point when there was no Unicode. Adobe used a single byte for characters and whenever you rendered any string, the glyph to draw was taken from a 256 entry table called the encoding vector. If a standard encoding didn't have what you wanted, you were encouraged to make fonts on the fly based on the standard font that differed only in encoding.
When Adobe created Acrobat, they wanted to make the transition from PostScript as easy as possible, so that font mechanism was modeled in PDF. When the ability to embed fonts into PDFs was added, it was clear that this would bloat the files, so PDF also included the ability to have font subsets. Font subsets are made by taking an existing font, removing all the glyphs that won't be used, and re-encoding the result into the PDF. There may be no standard relationship between the encoding vector and the code points in the file; all of those may be changed. Instead, there may be an embedded PostScript function /ToUnicode which will translate encoded characters to a Unicode representation.
So yeah, non-trivial.
For the folks who come after me:
The best solutions I found were to use Evince to print as SVG, or to use the pdf2svg program that's accessible via Synaptic on Mint. However, Inkscape wasn't able to cope with the resulting SVGs; it entered an infinite loop with the error message:
File display/nr-arena-item.cpp line 323 (?): Assertion item->state & NR_ARENA_ITEM_STATE_BBOX failed
I'm giving up this quest for now, but maybe I'll try again in a year or two. In the meantime, maybe one of these solutions will work for you.

Ansys multiphysics: blank output file

I have a model of a heating process on Ansys Multiphysics, V11.
After running the simulation, I have a script to plot a temperature profile:
!---------------- POST PROCESSING -----------------------
/post1 ! database postprocessor
!---define profile temperature
path,s_temp1,2,,100 ! define a path
ppath,1,,dop/2,0,0 ! create a path point
ppath,2,,dop/2,1.5,0 ! create a path point
PDEF,surf_t1,TEMP, ,noav ! interpolate temperature onto the path (no averaging)
plpath,surf_t1 ! plot a path
What I now need, is to save the resulting path in a text file. I have already looked online for a solution, and found the following code to do it, which I appended after the lines above:
/OUTPUT,filename,extension
PRPATH,surf_t1
/OUTPUT
Ansys generates the file filename.extension but it is empty. I tried to place the OUTPUT command in a few locations in the script, but without any success.
I suspect I need to define something else, but I have no idea where to look, as the Ansys documentation online is terribly chaotic, and all the internet pages I opened before writing this question are no better.
A final note: Ansys V11 is an old version of the software, but I don't want to upgrade it and fit the old model to the new software.
For the output of the simulation (which includes the description of all calculation steps and sub-steps, and the node-by-node results), the output file must be declared at the beginning of the script, not in the post-processing phase.
Declaring
/OUTPUT,filename,extension
in the preamble of the main script ensures that the output is stored in the right location, with the desired extension. At the end of the script, you must then declare
/OUTPUT
to reset the output file location for ANSYS.
The output of the path commands issued in the post-processing script is, however, not written to that file.
It is convenient to use
*CFOPEN,file,ext
*VWRITE,Vector(1,1),Vector(1,2)
(2F12.6)
*CFCLOSE
where Vector is a two-column array created with *DIM that holds the data you want to write to the file.
As *VWRITE is a special command, it must be run from a file (e.g. a macro such as macro_output.mac) rather than typed interactively.

Display variables using CBC MPS input in NEOS

I am trying to use NEOS to solve a linear program using MPS input.
The MPS file is fine, but apparently you need a "parameters file" as well to tell the solver what to do (min/max, etc.). However, I can't find any information on this anywhere online.
So far I have got NEOS to solve a maximization problem and display the objective function. However I cannot get it to display the variables.
Does anyone know what I should add to the parameters file to tell NEOS/CBC to display the resulting variables?
The parameter file consists of a list of Cbc (standalone) commands, one per line. The format of the commands is (quoting the documentation):
One command per line (and no -)
abcd? gives list of possibilities, if only one + explanation
abcd?? adds explanation, if only one fuller help(LATER)
abcd without value (where expected) gives current value
abcd value or abcd = value sets value
The commands are the following:
? dualT(olerance) primalT(olerance) inf(easibilityWeight)
integerT(olerance) inc(rement) allow(ableGap) ratio(Gap)
fix(OnDj) tighten(Factor) log(Level) slog(Level)
maxN(odes) strong(Branching) direction error(sAllowed)
gomory(Cuts) probing(Cuts) knapsack(Cuts) oddhole(Cuts)
clique(Cuts) round(ingHeuristic) cost(Strategy) keepN(ames)
scaling directory solver import
export save(Model) restore(Model) presolve
initialS(olve) branch(AndBound) sol(ution) max(imize)
min(imize) time(Limit) exit stop
quit - stdin unitTest
miplib ver(sion)
To see the solution values, you should include the line sol - after the min or max line of your parameter file.
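For example, assuming (as in the question) that your parameter file already has a line that sets the objective sense and gets the problem solved on NEOS, the file would then look something like:
maximize
sol -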
If this doesn't work you can submit the problem to NEOS in AMPL format via this page. In addition to model and data files, it accepts a commands file where you can use statements to solve the problem and display the solution, for example:
solve;
display _varname, _var;
This post describes how to convert MPS to AMPL.

Can Mathematica create multi-page PDF files?

When one imports a multi-page pdf file (the file I have in mind contains images of artwork, one per page) into Mathematica 8.0.1 by
book = Import["simple.pdf"]
Mathematica returns a list of graphics objects, one for each page. I have some manipulations I perform on each page, and then want to save the changed pages back into a single PDF file
Export["DistortedSimple.pdf", distortedbook]
but the resulting file has all of the images on a single page. Is there a convenient way to export a list of images to PDF, one per page?
It doesn't seem to be possible with Export, no matter how much I play with the Pages element (apart from the notebook-based solutions given by others).
An alternative is to install pdftk (a relatively small command line tool that we'll use to assemble the pages), and use the following Mathematica function:
exportMultipagePDF[name_String, g_List, options___] :=
 Module[
  {fileNames, quote},
  quote[s_] := "\"" <> s <> "\"";
  fileNames =
   Table[
    FileNameJoin[{$TemporaryDirectory, "mmapage" <> IntegerString[i] <> ".pdf"}],
    {i, Length[g]}
   ];
  Check[
   Export[#1, #2, "PDF", options] & @@@ Thread[{fileNames, g}],
   Return[$Failed]
  ];
  If[
   Run["pdftk", Sequence @@ (quote /@ fileNames), "cat output", name] =!= 0,
   Return[$Failed]
  ];
  DeleteFile /@ fileNames;
 ]
On Windows I needed to quote the file names before passing them to PDFtk. I don't know about other platforms; hopefully it won't cause any trouble.
Try it with
exportMultipagePDF["test.pdf", Table[Graphics[{Hue[x], Disk[]}], {x, 0, 1, .2}]]
(Hi Kevin!)
I just evaluated:
Print[ExampleData[#]] & /@ Take[ExampleData["TestImage"], 6]
Export["Desktop/Kevin.pdf", EvaluationNotebook[]]
using V8.0.1 for OS X, and the resulting PDF was split into four pages. So I think your best approach is to (programmatically) create a notebook of your modified images, and export that notebook.
Try saving the notebook as PDF rather than Exporting the set of cells as a PDF.
EDIT:
To ensure you have your page breaks where you want them, set the Screen Environment to Printing (you can do this via a menu command or programmatically), and insert page breaks using the relevant menu command. This guide page might be helpful.
From your comment, it sounds like you need to set the ImageSize option for the transformed image to ensure it is the size you want when displaying onscreen.