I am looking for a script that would open a given number of images with different aspect ratios and lay them all out in a single document like a Flickr gallery. Something as seen on this page: http://martin-oehm.de/data/layout.html
Is there any script/plugin out there that can do this? The purpose is just to create a reference file with all the images instead of having several images floating around.
Thank you
The fact that you have had no answers in 10 weeks should tell you that Photoshop is maybe not the best/easiest place to do this... so I made the following script, which does it pretty well outside of Photoshop.
It assumes you have OSX or Linux to run a bash script and it uses ImageMagick to actually do the image processing.
#!/bin/bash
################################################################################
# layout
# Mark Setchell
#
# Layout all images in current directory onto contact sheet. Algorithm is crude
# but fairly effective as follows:
#
# Create temporary workspace
# Resize all images to standard HEIGHT into workspace saving names & new widths
# row=0
# Repeat till there are no images
# Repeat till row is full
# Append widest image that will fit to this row
# End repeat
# Append this row to output contact sheet
# row++
# End repeat
################################################################################
WORKSPACE=layout-$$.d
HEIGHT=300
WIDTH=2000
PAD=50
shopt -s nullglob
shopt -s nocaseglob
# Declare our arrays
declare -a names
declare -a width
declare -a indices
################################################################################
# Returns the number of images remaining still to be laid out
################################################################################
NumRemaining(){
local result=0
local z
for ((z=0;z<${#width[@]};z++)) do
if [ ${width[z]} -gt 0 ]; then
((result++))
fi
done
echo $result
}
################################################################################
# Returns index of widest image that fits in specified width
################################################################################
BestFitting(){
local limit=$1
local index=-1
local widest=0
local z
local t
for ((z=0;z<${#width[@]};z++)) do
t=${width[z]}
if [[ $t -gt 0 && $t -le $limit && $t -gt $widest ]]; then
widest=$t
index=$z
fi
done
echo $index
}
mkdir $WORKSPACE 2> /dev/null
n=0
for f in *.jpg *.png *.gif *.psd; do
# Save name
names[n]=$f
# Extract file extension and basic name, because we want to add "[0]" to end of PSD files
ext="${f##*.}"
base="${f%.*}"
echo $ext | tr "[A-Z]" "[a-z]" | grep -q "psd$"
if [ $? -eq 0 ]; then
convert "$f[0]" -resize x${HEIGHT} $WORKSPACE/${n}.jpg
else
convert "$f" -resize x${HEIGHT} $WORKSPACE/${n}.jpg
fi
# Get width of the resized file and save
width[n]=$(identify -format "%w" $WORKSPACE/${n}.jpg)
echo DEBUG: Index: $n is file: $f thumbnailed to width: ${width[n]}
((n++))
done
echo DEBUG: All images added
echo
cd "$WORKSPACE"
row=0
while [ $(NumRemaining) -gt 0 ]; do
echo DEBUG: Processing row $row, images left $(NumRemaining)
remaining=$WIDTH # new row, we have the full width to play with
cumwidth=0 # cumulative width of images in this row
i=0 # clear array of image indices in this row
while [ $remaining -gt 0 ]; do
best=$(BestFitting $remaining)
if [ $best -lt 0 ]; then break; fi
indices[i]=$best # add this image's index to the indices of images in this row
w=${width[best]}
((cumwidth=cumwidth+w))
((remaining=WIDTH-cumwidth-i*PAD)) # decrease remaining space on this line
width[best]=-1 # effectively remove image from list
((i++))
echo DEBUG: Adding index: $best, width=$w, cumwidth=$cumwidth, remaining=$remaining
done
if [ $i -lt 1 ]; then break; fi # break if no images in this row
PADWIDTH=$PAD
if [ $i -gt 1 ]; then ((PADWIDTH=(WIDTH-cumwidth)/(i-1))); fi
echo $(NumRemaining)
echo $i
if [ $(NumRemaining) -eq 0 ]; then PADWIDTH=$PAD; fi # don't stretch last row
echo DEBUG: Padding between images: $PADWIDTH
ROWFILE="row-${row}.png"
THIS="${indices[0]}.jpg"
# Start row with left pad
convert -size ${PAD}x${HEIGHT}! xc:white $ROWFILE
for ((z=0;z<$i;z++)); do
if [ $z -gt 0 ]; then
# Add pad to right before appending image
convert $ROWFILE -size ${PADWIDTH}x${HEIGHT}! xc:white +append $ROWFILE
fi
THIS="${indices[z]}.jpg"
convert $ROWFILE $THIS +append $ROWFILE
done
# End row with right pad
convert $ROWFILE -size ${PAD}x${HEIGHT}! xc:white +append $ROWFILE
((row++))
echo DEBUG: End of row
echo
done
# Having generated all rows, append them all together one below the next
((tmp=WIDTH+2*PAD))
convert -size ${tmp}x${PAD} xc:white result.png
for r in row*.png; do
convert result.png $r -size ${tmp}x${PAD}! xc:white -append result.png
done
open result.png
Depending on the files in your input directory (obviously), it produces output like this:
with debug information in your Terminal window as it goes, like this:
DEBUG: Index: 0 is file: 1.png thumbnailed to width: 800
DEBUG: Index: 1 is file: 10.png thumbnailed to width: 236
DEBUG: Index: 2 is file: 11.png thumbnailed to width: 236
DEBUG: Index: 3 is file: 12.png thumbnailed to width: 360
DEBUG: Index: 4 is file: 2.png thumbnailed to width: 480
DEBUG: Index: 5 is file: 3.png thumbnailed to width: 240
DEBUG: Index: 6 is file: 4.png thumbnailed to width: 218
DEBUG: Index: 7 is file: 5.png thumbnailed to width: 375
DEBUG: Index: 8 is file: 6.png thumbnailed to width: 1125
DEBUG: Index: 9 is file: 7.png thumbnailed to width: 1226
DEBUG: Index: 10 is file: 8.png thumbnailed to width: 450
DEBUG: Index: 11 is file: 9.png thumbnailed to width: 300
DEBUG: Index: 12 is file: a.png thumbnailed to width: 400
DEBUG: All images added
DEBUG: Processing row 0, images left 13
DEBUG: Adding index: 9, width=1226, cumwidth=1226, remaining=774
DEBUG: Adding index: 4, width=480, cumwidth=1706, remaining=244
DEBUG: Adding index: 5, width=240, cumwidth=1946, remaining=-46
DEBUG: Padding between images: 27
DEBUG: End of row
DEBUG: Processing row 1, images left 10
DEBUG: Adding index: 8, width=1125, cumwidth=1125, remaining=875
DEBUG: Adding index: 0, width=800, cumwidth=1925, remaining=25
DEBUG: Padding between images: 75
DEBUG: End of row
DEBUG: Processing row 2, images left 8
DEBUG: Adding index: 10, width=450, cumwidth=450, remaining=1550
DEBUG: Adding index: 12, width=400, cumwidth=850, remaining=1100
DEBUG: Adding index: 7, width=375, cumwidth=1225, remaining=675
DEBUG: Adding index: 3, width=360, cumwidth=1585, remaining=265
DEBUG: Adding index: 1, width=236, cumwidth=1821, remaining=-21
DEBUG: Padding between images: 44
DEBUG: End of row
DEBUG: Processing row 3, images left 3
DEBUG: Adding index: 11, width=300, cumwidth=300, remaining=1700
DEBUG: Adding index: 2, width=236, cumwidth=536, remaining=1414
DEBUG: Adding index: 6, width=218, cumwidth=754, remaining=1146
DEBUG: Padding between images: 623
DEBUG: End of row
Upon reflection, the output could maybe be improved by reversing odd-numbered rows so that the largest image on each line is on the left one time and on the right the next time - it is a simple change but I don't want to over-complicate the code. Basically, you would reverse the order of the array indices[] every second time you fall out of the row-assembling loop.
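The row-assembling loop is a greedy best-fit packing. A minimal Python sketch of the same idea (the widths are hypothetical thumbnail widths after resizing to a common height, and the pad arithmetic is slightly simplified relative to the script's):

```python
# Greedy best-fit row packing, mirroring the script's row-assembly loop.
def pack_rows(widths, row_width=2000, pad=50):
    remaining = dict(enumerate(widths))   # index -> width of unplaced images
    rows = []
    while remaining:
        row, used = [], 0
        while True:
            # space left on this row, reserving pad between images
            space = row_width - used - len(row) * pad
            fits = [i for i, w in remaining.items() if w <= space]
            if not fits:
                break
            best = max(fits, key=remaining.get)   # widest image that fits
            row.append(best)
            used += remaining.pop(best)
        if not row:   # an image wider than the row itself: stop, as the script does
            break
        rows.append(row)
    return rows

# Indices grouped into rows, widest-first within each row:
print(pack_rows([1226, 480, 240, 1125, 800]))   # [[0, 1], [3, 4], [2]]
```

Reversing every second row, as suggested above, would then just be `rows[1::2] = [r[::-1] for r in rows[1::2]]`.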
Here are some possibly useful links:
Google+ algorithm
flickr algorithm in jQuery
I presume Photoshop's built-in File->Automate->Contact Sheet II is inadequate for your purposes...
I'm trying to learn Nextflow, but it's not going very well.
I used NGS-based paired-end sequencing data to build an analysis flow from FASTQ files to VCF files using Nextflow. However, I got stuck right at the beginning, as shown in the code. The first process, soapnuke, works fine, but when passing the files from the channels (clean_fq1 / clean_fq2) to the next process I get: ERROR: No such variable: from. What should I do? Thanks for any help.
params.fq1 = "/data/mPCR/220213_I7_V350055104_L3_SZPVL22000812-81/*1.fq.gz"
params.fq2 = "/data/mPCR/220213_I7_V350055104_L3_SZPVL22000812-81/*2.fq.gz"
params.index = "/home/duxu/project/data/index.list"
params.primer = "/home/duxu/project/data/primer_*.fasta"
params.output='results'
fq1 = Channel.fromPath(params.fq1)
fq2 = Channel.fromPath(params.fq2)
index = Channel.fromPath(params.index)
primer = Channel.fromPath(params.primer)
process soapnuke{
conda'soapnuke'
tag{"soapnuk ${fq1} ${fq2}"}
publishDir "${params.outdir}/SOAPnuke", mode: 'copy'
input:
file rawfq1 from fq1
file rawfq2 from fq2
output:
file 'clean1.fastq.gz' into clean_fq1
file 'clean2.fastq.gz' into clean_fq2
script:
"""
SOAPnuke filter -1 $rawfq1 -2 $rawfq2 -l 12 -q 0.5 -Q 2 -o . \
-C clean1.fastq.gz -D clean2.fastq.gz
"""
}
I get stuck on this:
process barcode_splitter{
conda'barcode_splitter'
tag{"barcode_splitter ${fq1} ${fq2}"}
publishDir "${params.outdir}/barcode_splitter", mode: 'copy'
input:
file split1 from clean_fq1
file split2 from clean_fq2
index from params.index
output:
file '*-read-1.fastq.gz' into trimmed_index1
file '*-read-2.fastq.gz' into trimmed_index2
script:
"""
barcode_splitter --bcfile $index $split1 $split2 --idxread 1 2 --mismatches 1 --suffix .fastq --gzipout
"""
}
The code below will produce the error you see:
index = Channel.fromPath( params.index )
process barcode_splitter {
...
input:
index from params.index
...
}
What you want is:
index = file( params.index )
process barcode_splitter {
...
input:
path index
...
}
Note that when the file input name is the same as the channel name, the from channel declaration can be omitted. I also used the path qualifier above, as it should be preferred over the file qualifier when using Nextflow 19.10.0 or later.
You may also want to consider refactoring to use the fromFilePairs factory method. Here's one way, untested of course:
params.reads = "/data/mPCR/220213_I7_V350055104_L3_SZPVL22000812-81/*_{1,2}.fq.gz"
params.index = "/home/duxu/project/data/index.list"
params.output = 'results'
reads_ch = Channel.fromFilePairs( params.reads )
index = file( params.index )
process soapnuke {
tag { sample }
publishDir "${params.outdir}/SOAPnuke", mode: 'copy'
conda 'soapnuke'
input:
tuple val(sample), path(reads) from reads_ch
output:
tuple val(sample), path('clean{1,2}.fastq.gz') into clean_reads_ch
script:
def (rawfq1, rawfq2) = reads
"""
SOAPnuke filter \\
-1 "${rawfq1}" \\
-2 "${rawfq2}" \\
-l 12 \\
-q 0.5 \\
-Q 2 \\
-o . \\
-C "clean1.fastq.gz" \\
-D "clean2.fastq.gz"
"""
}
process barcode_splitter {
tag { sample }
publishDir "${params.outdir}/barcode_splitter", mode: 'copy'
conda 'barcode_splitter'
input:
tuple val(sample), path(reads) from clean_reads_ch
path index
output:
tuple val(sample), path('*-read-{1,2}.fastq.gz') into trimmed_index
script:
def (splitfq1, splitfq2) = reads
"""
barcode_splitter \\
--bcfile \\
"${index}" \\
"${splitfq1}" \\
"${splitfq2}" \\
--idxread 1 2 \\
--mismatches 1 \\
--suffix ".fastq" \\
--gzipout
"""
}
Can anyone suggest how to extend the page size on one side using PostScript? I have to put a mark on a heavy bunch of documents. I use PostScript for this, as the technology most native to PDFs, and because speed is critical for the task.
I am able to put the mark on the document itself, but I also have to add a blank field to the right of each page, and that is the problem.
This is what is going on (the mark [COPY] is added on top of the PDF):
This is what has to happen (page size extended with a blank field on the right):
This is the content of my mark.ps file:
<<
/EndPage
{
2 eq { pop false }
{
gsave
7.0 setlinewidth
70 780 newpath moveto
70 900 lineto
120 900 lineto
120 780 lineto
66.5 780 lineto
stroke
closepath
/Helvetica findfont 18 scalefont setfont
newpath
100 792.5 moveto
90 rotate
(COPY) true charpath fill
1 setlinewidth stroke
0 setgray
grestore
true
} ifelse
} bind
>> setpagedevice
This is how it is applied to pdf
gs \
-q \
-dBATCH \
-dNOPAUSE \
-sDEVICE=pdfwrite \
-sOutputFile=output.pdf \
-f mark.ps \
input.pdf
What I tried:
Changing the page size with << /PageSize [595 1642] >> setpagedevice - didn't work
Using the -g option in gs, like this: -g595x1642 - didn't work either
If someone has relevant suggestions please share!
This question is possibly related to storing and retrieving a numpy array in the form of an image. So, I am saving an array of binary values to an image (using scipy.misc.toimage feature):
import numpy, random, scipy.misc
data = numpy.array([random.randint(0, 1) for i in range(100)]).reshape(100, 1).astype("b")
image = scipy.misc.toimage(data, cmin=0, cmax=1, mode='1')
image.save("arrayimage.png")
Notice that I am saving the data with mode 1 (1-bit pixels, black and white, stored with one pixel per byte). Now, when I try to read it back like:
data = scipy.misc.imread("arrayimage.png")
the resulting data array comes back as all zeroes.
The question is: is there any other way to retrieve data from the image, with the strict requirement that the image should be created with the mode 1. Thanks.
I think you want this:
from PIL import Image
import numpy
# Generate boolean data
data = numpy.random.randint(0, 2, size=(100, 1), dtype="bool")
# Convert to PIL image and save as PNG
Image.fromarray(data).convert("1").save("arrayimage.png")
Checking what you get with ImageMagick
identify -verbose arrayimage.png
Sample Output
Image: arrayimage.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: PseudoClass
Geometry: 1x100+0+0
Units: Undefined
Colorspace: Gray
Type: Bilevel <--- Bilevel means boolean
Base type: Undefined
Endianess: Undefined
Depth: 8/1-bit
Channel depth:
Gray: 1-bit
Channel statistics:
Pixels: 100
Gray:
min: 0 (0)
max: 255 (1)
mean: 130.05 (0.51)
standard deviation: 128.117 (0.502418)
kurtosis: -2.01833
skewness: -0.0394094
entropy: 0.999711
Colors: 2
Histogram:
49: ( 0, 0, 0) #000000 gray(0) <--- half the pixels are black
51: (255,255,255) #FFFFFF gray(255) <--- half are white
Colormap entries: 2
Colormap:
0: ( 0, 0, 0,255) #000000FF graya(0,1)
1: (255,255,255,255) #FFFFFFFF graya(255,1)
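For the read-back half of the question: rather than scipy.misc.imread (deprecated and later removed from SciPy), opening the mode "1" PNG with PIL and converting it to a NumPy array recovers the booleans directly. A sketch of the full round trip:

```python
from PIL import Image
import numpy

# Write boolean data as a 1-bit PNG, as above
data = numpy.random.randint(0, 2, size=(100, 1)).astype(bool)
Image.fromarray(data).convert("1").save("arrayimage.png")

# Read it back: numpy.array() on a mode "1" image yields a boolean array
recovered = numpy.array(Image.open("arrayimage.png"))
print(recovered.dtype, recovered.shape)     # bool (100, 1)
print(numpy.array_equal(recovered, data))   # True
```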
I am creating a PDF file from a TIFF image using ImageMagick and Ghostscript.
My source TIFF is 16 bits per channel with no alpha (a 48-bit image), with an attached ICC profile (AdobeRGB), and I want to maintain this in the final PDF.
convert input.tif[0] -density 600 -alpha Off -size 5809x9408 -depth 16 intermediate.ps
This takes my input TIFF (just the main image, not the thumbnail, by using [0]) and creates a .ps file from the bitmap.
When I look at the size of the PostScript file, it's roughly the same size (3-4 MB larger than the 328MB tiff) as the source TIFF, but I can't tell if the image data in the .ps is 8 or 16 bit per channel.
Then, when I use GhostScript to convert this to a PDF, I'm getting 8 bits per channel in the PDF.
gs -dPDFA=1 -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sDefaultRGBProfile=AdobeRGB1998.icc -dOverrideICC -sOutputFile=output.pdf -r600 -P PDFA_def.ps -f custom.joboptions intermediate.ps
If I use pdfimages to inspect the PDF, it shows me 8 bit per channel.
pdfimages -list output.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 5809 9408 rgb 3 8 image no 10 0 600 600 74.1M 47%
The contents of my PDFA_def.ps has been modified from the default Ghostscript install to specify AdobeRGB (1998) as the colour profile:
%!
% This is a sample prefix file for creating a PDF/A document.
% Feel free to modify entries marked with "Customize".
% This assumes an ICC profile to reside in the file (ISO Coated sb.icc),
% unless the user modifies the corresponding line below.
% Define entries in the document Info dictionary :
/ICCProfile (AdobeRGB1998.icc) % Customise
def
[ /Title (Title) % Customise
/DOCINFO pdfmark
% Define an ICC profile :
[/_objdef {icc_PDFA} /type /stream /OBJ pdfmark
[{icc_PDFA}
<<
/N currentpagedevice /ProcessColorModel known {
currentpagedevice /ProcessColorModel get dup /DeviceGray eq
{pop 1} {
/DeviceRGB eq
{3}{4} ifelse
} ifelse
} {
(ERROR, unable to determine ProcessColorModel) == flush
} ifelse
>> /PUT pdfmark
[{icc_PDFA} ICCProfile (r) file /PUT pdfmark
% Define the output intent dictionary :
[/_objdef {OutputIntent_PDFA} /type /dict /OBJ pdfmark
[{OutputIntent_PDFA} <<
/Type /OutputIntent % Must be so (the standard requires).
/S /GTS_PDFA1 % Must be so (the standard requires).
/DestOutputProfile {icc_PDFA} % Must be so (see above).
/OutputConditionIdentifier (sRGB) % Customize
>> /PUT pdfmark
[{Catalog} <</OutputIntents [ {OutputIntent_PDFA} ]>> /PUT pdfmark
I've also got a custom.joboptions file that I created in Acrobat Distiller and then have modified for PDF/A compliance - I have tried to force 16-bit images in this file too, but I'm still getting 8-bit images in the PDF.
I don't know how many of these options Ghostscript respects and how many it ignores however. If I don't use this custom.joboptions file when making the PDF, the images are downsampled to a very low resolution.
<<
/ASCII85EncodePages false
/AllowTransparency false
/AutoPositionEPSFiles true
/AutoRotatePages /All
/Binding /Left
/CalGrayProfile (Dot Gain 20%)
/CalRGBProfile (sRGB IEC61966-2.1)
/CalCMYKProfile (U.S. Web Coated \050SWOP\051 v2)
/sRGBProfile (sRGB IEC61966-2.1)
/CannotEmbedFontPolicy /Error
/CompatibilityLevel 1.4
/CompressObjects /Off
/CompressPages true
/ConvertImagesToIndexed true
/PassThroughJPEGImages true
/CreateJobTicket false
/DefaultRenderingIntent /Default
/DetectBlends true
/DetectCurves 0.0000
/ColorConversionStrategy /LeaveColorUnchanged
/DoThumbnails false
/EmbedAllFonts true
/EmbedOpenType false
/ParseICCProfilesInComments true
/EmbedJobOptions false
/DSCReportingLevel 0
/EmitDSCWarnings false
/EndPage -1
/ImageMemory 1048576
/LockDistillerParams true
/MaxSubsetPct 100
/Optimize false
/OPM 1
/ParseDSCComments true
/ParseDSCCommentsForDocInfo true
/PreserveCopyPage true
/PreserveDICMYKValues true
/PreserveEPSInfo true
/PreserveFlatness true
/PreserveHalftoneInfo false
/PreserveOPIComments false
/PreserveOverprintSettings false
/StartPage 1
/SubsetFonts false
/TransferFunctionInfo /Apply
/UCRandBGInfo /Remove
/UsePrologue false
/ColorSettingsFile (None)
/AlwaysEmbed [ true
]
/NeverEmbed [ true
]
/AntiAliasColorImages false
/CropColorImages true
/ColorImageMinResolution 600
/ColorImageMinResolutionPolicy /OK
/DownsampleColorImages false
/ColorImageDownsampleType /Average
/ColorImageResolution 600
/ColorImageDepth -1
/ColorImageMinDownsampleDepth 16
/ColorImageDownsampleThreshold 1.50000
/EncodeColorImages true
/ColorImageFilter /FlateEncode
/AutoFilterColorImages false
/ColorImageAutoFilterStrategy /JPEG
/ColorACSImageDict <<
/QFactor 0.15
/HSamples [1 1 1 1] /VSamples [1 1 1 1]
>>
/ColorImageDict <<
/QFactor 0.15
/HSamples [1 1 1 1] /VSamples [1 1 1 1]
>>
/JPEG2000ColorACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 30
>>
/JPEG2000ColorImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 30
>>
/AntiAliasGrayImages false
/CropGrayImages true
/GrayImageMinResolution 300
/GrayImageMinResolutionPolicy /OK
/DownsampleGrayImages false
/GrayImageDownsampleType /Average
/GrayImageResolution 600
/GrayImageDepth -1
/GrayImageMinDownsampleDepth 2
/GrayImageDownsampleThreshold 1.50000
/EncodeGrayImages true
/GrayImageFilter /FlateEncode
/AutoFilterGrayImages false
/GrayImageAutoFilterStrategy /JPEG
/GrayACSImageDict <<
/QFactor 0.15
/HSamples [1 1 1 1] /VSamples [1 1 1 1]
>>
/GrayImageDict <<
/QFactor 0.15
/HSamples [1 1 1 1] /VSamples [1 1 1 1]
>>
/JPEG2000GrayACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 30
>>
/JPEG2000GrayImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 30
>>
/AntiAliasMonoImages false
/CropMonoImages true
/MonoImageMinResolution 1200
/MonoImageMinResolutionPolicy /OK
/DownsampleMonoImages false
/MonoImageDownsampleType /Average
/MonoImageResolution 2400
/MonoImageDepth -1
/MonoImageDownsampleThreshold 1.50000
/EncodeMonoImages true
/MonoImageFilter /CCITTFaxEncode
/MonoImageDict <<
/K -1
>>
/AllowPSXObjects false
/CheckCompliance [
/PDFA1B:2005
]
/PDFX1aCheck false
/PDFX3Check false
/PDFXCompliantPDFOnly true
/PDFXNoTrimBoxError false
/PDFXTrimBoxToMediaBoxOffset [
0.00000
0.00000
0.00000
0.00000
]
/PDFXSetBleedBoxToMediaBox true
/PDFXBleedBoxToTrimBoxOffset [
0.00000
0.00000
0.00000
0.00000
]
/PDFXOutputIntentProfile (Adobe RGB \0501998\051)
/PDFXOutputConditionIdentifier ()
/PDFXOutputCondition ()
/PDFXRegistryName ()
/PDFXTrapped /False
/CreateJDFFile false
>> setdistillerparams
<<
/HWResolution [600 600]
/PageSize [697.080 1128.960]
>> setpagedevice
PostScript can't handle 16 bits per component; it only handles 1, 2, 4, 8 and 12.
PDF doesn't support 12 BPC, only 1, 2, 4, 8 and 16.
So there isn't any way to get a PDF file with more than 12 BPC if you use PostScript as an intermediate format. Even if the PDF file says it's 16 BPC, the actual data will be limited to 12 (16 BPC original -> 12 BPC PostScript -> 16 BPC PDF).
Further to that, you say that you are creating a PDF/A file, and it's PDF/A-1. If you read the PDF/A-1 specification you will see that it's limited to PDF 1.4, and checking the PDF Reference Manual, we find that 16 BPC images were introduced in PDF 1.5.
So even if pdfwrite were able to upscale the 12 BPC image to a 16 BPC image (with padding), it's not allowed to do so if you want to create a PDF/A-1 file, because the specification forbids it. So I'm afraid you can't do what you want: you can't create a legal PDF/A-1 file with 16 BPC images using any tool.
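To put numbers on that loss: a 12 BPC intermediate can only carry 4096 of the original 65536 levels, so the low four bits of every 16-bit sample are gone before pdfwrite ever sees the data. A small arithmetic illustration (the shift-based requantisation is illustrative, not pdfwrite's exact code path):

```python
# 16-bit sample -> 12-bit PostScript intermediate -> zero-padded back to 16 bits
def round_trip(v16):
    v12 = v16 >> 4      # 16 BPC -> 12 BPC: the low 4 bits are discarded
    return v12 << 4     # 12 BPC -> "16 BPC": padding cannot restore them

print(round_trip(0xFFFF))   # 65520, not 65535
print(2 ** 16, "levels in,", 2 ** 12, "distinct levels out")
```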
Regarding downsampling, the default for colour image downsampling is 'false', so if you don't enable it (DownsampleColorImages=true) then the pdfwrite device won't downsample the images.
Good day,
I have a big problem cropping a PDF into PNGs.
The PDF is about 1.6MB (2500x2500), and one process takes about 7-10 minutes and generates 700MB of temporary files, e.g.
exec("convert -density 400 'file.pdf' -resize 150% -crop 48x24# png32:'file_%d.png'");
One PDF must generate PNGs at sizes from 25% to 200%.
Here I generate the attributes: density, resize percentage, and the grid's row and column counts.
$x = 0; $y = 0;
for ($i = 25; $i <= 200; $i += 25) {
$x += 8; $y += 4;
$convert[$i] = ['density' => (($i < 75) ? 200 : ($i < 150) ? 300 : ($i < 200) ? 400 : 500), 'tiles' => implode("x", [$x, $y])];
}
Then I launch the conversions one after another, and it's extremely expensive in time.
$file_cropper = function($filename, $additional = '') use ($density, $size, $tiles) {
$pid = exec("convert -density $density ".escapeshellarg($filename)." -resize $size% -crop $tiles# ".$additional." png32:".escapeshellarg(str_replace(".pdf", "_%d.png", $filename))." >/dev/null & echo $!");
do {
/* some really fast code */
} while (file_exists("/proc/{$pid}"));
};
If I launch it simultaneously (8 processes), then ImageMagick eats all the disk space I have (40GB) => ~35GB of temporary files.
Where is my problem? What am I doing wrong?
I tried passing the params below in the function's $additional var:
"-page 0x0+0+0"
"+repage"
"-page 0x0+0+0 +repage"
"+repage -page 0x0+0+0"
Nothing changes.
Version: ImageMagick 6.7.7-10 2016-06-01 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC
Features: OpenMP
Ubuntu 14.04.4 LTS
2GB / 2CPU
EDITED
After a while I managed to replace ImageMagick with Ghostscript:
gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r240 -sOutputFile="file.png" file.pdf
but I can't understand how to scale the image and crop it.
Cropping with ImageMagick still generates ~35GB of temporary files and takes more time than before.
I managed to resolve my problem this way:
$info = exec("identify -ping %w {$original_pdf_file}"); preg_match('/(\d+x\d+)/', $info, $matches);
"gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r{$r} -g{$dim} -dPDFFitPage -sOutputFile=\"{$png}\" {$filename}"
"convert ".escapeshellarg($png)." -gravity center -background none -extent {$ex}x{$ex} ".escapeshellarg($png)
"convert ".escapeshellarg($png)." -crop {$tiles}x{$tiles}! +repage ".escapeshellarg(str_replace(".png", "_%d.png", $png))
where:
$filename = file.pdf
$png = file.png
$r = 120
$ex = 4000
$dim = $matches[1]
Steps:
1. gives me the dimensions of the original file, so I can play with the size of the PNG later
2. converts the PDF to a PNG at the size I need, keeping the aspect ratio
3. pads the PNG to the size I want with a 1:1 aspect ratio
4. crops everything
This process takes 27.59s on my machine, with an image resolution of 4000x4000 and a file size of only 1.4MB, and 0-30MB of temporary files.
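The final cropping step can also be done without ImageMagick. A hedged sketch with Pillow (the function name and file naming scheme are hypothetical), which decodes the PNG once in memory and so avoids convert's large on-disk temporary files:

```python
from PIL import Image

def cut_tiles(png_path, tiles):
    """Cut a PNG into a tiles x tiles grid, like the grid crop in the last
    convert command, writing file_0.png .. file_{tiles*tiles-1}.png."""
    img = Image.open(png_path)
    w, h = img.size
    tw, th = w // tiles, h // tiles
    n = 0
    for row in range(tiles):
        for col in range(tiles):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            img.crop(box).save(png_path.replace(".png", f"_{n}.png"))
            n += 1
```

For the 4000x4000 case above, `cut_tiles("file.png", 8)` would produce 64 tiles of 500x500 pixels each.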