I am getting the following error while running cap rubber:config. I hope someone can give me directions.
Here is the command I am running: bundle exec rubber "config"
As you can see, it is complaining about a missing operand for the 'mkdir -p' command at the end.
The source :rubygems is deprecated because HTTP requests are insecure.
Please change your source to 'https://rubygems.org' if possible, or 'http://rubygems.org' if not.
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/crontab
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/gemrc
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/ntp-sysctl.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/ntp.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/rsyslog.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/rubber.profile
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/common/ruby.profile
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/collectd-ping.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/collectd.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/filters.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/graphite-collectd.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/thresholds.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/collectd/types.db
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/haproxy-base.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/haproxy-default.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/haproxy-passenger.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/monit-haproxy.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/syslog-haproxy.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/haproxy/syslogd-default.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/monit/monit-default.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/monit/monit-postfix.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/monit/monit.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/application.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/crontab
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/monit-nginx.conf
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/nginx
Rubber[INFO]: Transforming /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/nginx.conf
Rubber[INFO]: Transformation executing post config command: function error_exit { exit 99; }; trap error_exit ERR
mkdir -p
mkdir: missing operand
Try `mkdir --help' for more information.
Rubber[INFO]:
Rubber[ERROR]: Transformation failed for /mnt/publify-production/releases/20130814042309/config/rubber/role/passenger_nginx/nginx.conf
Rubber[ERROR]: Post command failed execution: function error_exit { exit 99; }; trap error_exit ERR
mkdir -p
You probably need the nginx_log_dir setting added to your rubber-passenger_nginx.yml file like I did. Pop that file open and add it:
nginx_log_dir: /mnt/nginx/logs
I hope that helps you.
EDIT:
See the accepted pull request that fixes this issue: GitHub Pull Request
I am using gdal_rasterize and ogr2ogr with the goal of getting a partial raster from a .gpkg file.
With the first command I want to clip out a smaller area of a large map:
ogr2ogr -spat xmin ymin xmax ymax out.gpkg in.gpkg
This results in a file for which ogrinfo out.gpkg gives the expected output, listing the layer numbers and names.
Then trying to rasterize this new file with:
gdal_rasterize out.gpkg -burn 255 -ot Byte -ts 250 250 -l anylayer out.tif
results in ERROR 1: Cannot get layer extent, whichever of the layer names given by ogrinfo I try.
Using the same command on the original in.gpkg gives no errors and produces the expected .tif raster.
ogr2ogr --version GDAL 2.4.2, released 2019/06/28
gdal_rasterize --version GDAL 2.4.2, released 2019/06/28
In the end, this process should be implemented with the GDAL C++ API.
Are the commands as given somehow invalid? If so, how?
Should the whole process be done differently? If so, how?
What does ERROR 1: Cannot get layer extent mean?
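One workaround I have been considering (an assumption on my part, not yet verified against this data) is to pass the target extent explicitly with gdal_rasterize's -te option, so it does not have to query the layer extent at all:
gdal_rasterize -l anylayer -burn 255 -ot Byte -ts 250 250 \
    -te xmin ymin xmax ymax \
    out.gpkg out.tif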
I am working with thousands of PDF files for a sheet-music publisher.
All of these PDF files need a preview PDF. A watermark on a PDF can easily be removed, so I am asking for a robust way to watermark our PDFs in a batch operation:
PDF -> Apply Watermark -> JPG -> Back to PDF
How can I do this? Is there a good tool for this operation?
The free route
ImageMagick can do the complete process for you, especially with the composite command's -watermark operator.
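As a minimal sketch of just the overlay step (the file names are placeholders and the 30% brightness is an arbitrary value):
composite -watermark 30% -gravity center watermark.png page.png page-marked.png
The full script below does the whole round trip from PDF to image and back: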
#!/bin/sh
# ImageMagick picks the conversion formats based on filename suffixes (or maybe the actual binary content?)
InputPDF=$1
WatermarkImg=$2
OutputPDF=$3
pdfToImage=pdfToImage.png
imageWithWatermark=imageWithWatermark.png
# Convert PDF to image
convert \
-density 300 \
-trim \
"$InputPDF" \
-quality 100 \
-flatten \
-sharpen 0x1.0 \
$pdfToImage
# Add watermark to intermediate image
composite \
-dissolve 15 \
-tile \
"$WatermarkImg" \
$pdfToImage \
$imageWithWatermark
# Convert intermediate image back to PDF
convert \
$imageWithWatermark \
"$OutputPDF"
# Clean up
rm $pdfToImage $imageWithWatermark
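Assuming the script is saved as watermark.sh (my name for it, not anything standard), usage would be:
sh watermark.sh input.pdf watermark.png preview.pdf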
I find the PDF-to-image conversion acceptable in terms of quality, though you can see some differences when looking at the before and after side by side, especially in how bolded glyphs seem less bold.
You can check this good post and its answers for a number of options for converting a PDF to an image, Convert PDF to image with high resolution.
I checked out pdftoppm, which was also mentioned frequently in that thread, and I still see some degradation of the bolded fonts when converted.
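For reference, the pdftoppm invocation for that conversion might look roughly like this (the 300 dpi value and the output prefix "page" are illustrative):
pdftoppm -png -r 300 input.pdf page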
Some more tiling Magick
I used this copyright symbol from Wikimedia Commons and this ImageMagick script:
#!/bin/sh
Infile="Copyright.png"
Outfile="Copyright_tiled.png"
h2=$(convert $Infile -format "%[fx:round(h/2)]" info:)
convert $Infile \
\( -clone 0 -roll +0+"$h2" \) \
+append \
-write mpr:sometile \
+delete \
-size 1224x1584 \
tile:mpr:sometile \
$Outfile
to create this staggered tiling (1224x1584 is the page size (8.5in x 11in) multiplied by 72 px/in, times 2, for a good density of tiles):
And here it is unwatermarked again
@ZachYoung I used some different ImageMagick operations, also scriptable; the point is:
Although "What's done cannot be undone" (Macbeth, Act 5, Scene 1, 63-4) is very true, especially within a PDF or image, we also know and expect that it applies to any PDF (de)construction. Thus, depending on the value of a forgery, it will always be worth engineering a partially reversed copy, fit for scrutiny or use; like the watermarked copy, it will still not be the original, but all the same it may look almost as good.
The idiom implies: don't bother yourself about it; it's best not done in the first place.
The nearest to best is to use a watermark exactly the same as the text outlines, like this:
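A rough sketch of producing such an outline-only text watermark with ImageMagick (the font, size, text, and placement here are illustrative assumptions, not what I actually used):
convert -size 1224x1584 xc:none \
    -font Helvetica -pointsize 96 \
    -fill none -stroke gray50 -strokewidth 1 \
    -gravity center -annotate +0+0 'PREVIEW ONLY' \
    outline-watermark.png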
I am trying to merge a number of GeoTIFFs into one large GeoTIFF with overviews; however, the final merged GeoTIFF shows a number of horizontal artifacts around the edges of the original merged GeoTIFFs (see here for an example).
I create the merged file using the following code:
import os

# Produce combined VRT (tmp_vrt and GDal_merge_string are defined elsewhere)
string = 'gdalbuildvrt -srcnodata "0 0 0 0" -hidenodata -r bilinear %s -overwrite %s' % (tmp_vrt, GDal_merge_string)
os.system(string)

# Convert VRT to GeoTIFF (tmp_fname is the output path)
string = 'gdal_translate -b 1 -b 2 -b 3 -mask 4 --config GDAL_TIFF_INTERNAL_MASK YES -of GTiff %s %s' % (tmp_vrt, tmp_fname)
os.system(string)
I have a hunch that this might have to do with using gdal_translate on a VRT, as the errors occur on the edges of the original GeoTIFFs; it might be related or similar to the issue found in this post.
This code uses VRTs to combine the GeoTIFFs for speed, but perhaps it would be better to just merge them with gdalwarp?
Edit: I have reduced the number of flags and left out the overviews in the code above, as suggested in the comment below by Benjamin. The error still seems to be produced by the code above; I think the issue may lie in the masking process. I guess at some point in the process of stacking the bands, the inputs are distorted. Is it generally inadvisable to run gdal_translate on VRTs?
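For comparison, a gdalwarp-based merge I could try instead might look roughly like this (the input filenames and the -dstalpha choice are a sketch, not something I have tested here):
gdalwarp -r bilinear -srcnodata "0 0 0 0" -dstalpha \
    input1.tif input2.tif merged.tif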
Running the model out of the box generates these files in the data dir:
ls
dev-v2.tgz newstest2013.en
giga-fren.release2.fixed.en newstest2013.en.ids40000
giga-fren.release2.fixed.en.gz newstest2013.fr
giga-fren.release2.fixed.en.ids40000 newstest2013.fr.ids40000
giga-fren.release2.fixed.fr training-giga-fren.tar
giga-fren.release2.fixed.fr.gz vocab40000.from
giga-fren.release2.fixed.fr.ids40000 vocab40000.to
Reading the source of translate.py:
https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/translate.py
tf.app.flags.DEFINE_string("from_train_data", None, "Training data.")
tf.app.flags.DEFINE_string("to_train_data", None, "Training data.")
To use my own training data, I created the directories my-from-train-data and my-to-train-data and added my own training data to each of these dirs; the training data is contained in the files mydata.from and mydata.to.
my-to-train-data contains mydata.from
my-from-train-data contains mydata.to
I could not find documentation on using your own training data or what format it should take, so I inferred this from the translate.py source and the contents of the data dir created when executing the translate model out of the box.
Contents of mydata.from:
Is this a question
Contents of mydata.to:
Yes!
I then attempt to train the model using:
python translate.py --from_train_data my-from-train-data --to_train_data my-to-train-data
This returns an error:
tensorflow.python.framework.errors_impl.NotFoundError: my-from-train-data.ids40000
It appears I need to create the file my-from-train-data.ids40000. What should its contents be? Is there an example of how to train this model using custom data?
Great question! Training a model on your own data is way more fun than using the standard data. An example of what you could put in the terminal is:
python translate.py \
  --from_train_data mydatadir/to_translate.in \
  --to_train_data mydatadir/to_translate.out \
  --from_dev_data mydatadir/test_to_translate.in \
  --to_dev_data mydatadir/test_to_translate.out \
  --train_dir train_dir_model \
  --data_dir mydatadir
What goes wrong in your example is that you are not pointing to a file, but to a folder. from_train_data should always point to a plaintext file, whose rows should be aligned with those in the to_train_data file.
Also: as soon as you run this script with sensible data (more than one line ;)), translate.py will generate your ids (40,000 if from_vocab_size and to_vocab_size are not set). It is important to know that this file is created in the folder specified by data_dir... if you do not specify one, they are generated in /tmp (I prefer them in the same place as my data).
Hope this helps!
Quick answer to:
It appears I need to create the file my-from-train-data.ids40000. What should its contents be? Is there an example of how to train this model using custom data?
Yes, that's the missing vocab/word-id file, which is generated when preparing the data.
Here is a tutorial from the TensorFlow documentation.
A quick overview of the files, and why you might be confused by the files output versus which ones to use:
python/ops/seq2seq.py: >> Library for building sequence-to-sequence models.
models/rnn/translate/seq2seq_model.py: >> Neural translation sequence-to-sequence model.
models/rnn/translate/data_utils.py: >> Helper functions for preparing translation data.
models/rnn/translate/translate.py: >> Binary that trains and runs the translation model.
The TensorFlow translate.py file requires several files to be generated when using your own corpus to translate.
The corpus needs to be aligned, meaning: line 1 in the source-language file corresponds to line 1 in the target-language file. This allows the model to do encoding and decoding.
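For illustration, a tiny aligned pair might look like this (the sentences are made up; the file names match the question):
mydata.from (one sentence per line):
Is this a question ?
How are you ?
mydata.to (line-aligned translations):
Est-ce une question ?
Comment allez-vous ?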
You want to make sure the vocabulary has been generated from the dataset using data_utils.py.
Check these steps:
python translate.py \
  --data_dir [your_data_directory] --train_dir [checkpoints_directory] \
  --en_vocab_size=40000 --fr_vocab_size=40000
Note! If your vocab size is lower, change that value accordingly.
There is a longer discussion here: tensorflow/issues/600
If all else fails, check out this ByteNet implementation in TensorFlow, which handles the translation task as well.
I issue the following command:
gs \
-o downsampled.pdf \
-sDEVICE=pdfwrite \
-dDownsampleColorImages=true \
-dColorImageResolution=180 \
-dColorImageDownsampleThreshold=1.0 \
And get the following errors:
Subsample filter does not support non-integer downsample factor (1.994360)
Failed to initialise downsample filter, downsampling aborted
(on some pages)
and:
Subsample filter does not support non-integer downsample factor (2.000029)
Failed to initialise downsample filter, downsampling aborted
Originally I tried to downsample to 150 dpi, which gave the error with factor (2.40????), meaning multiple errors where the last few digits differ for different pages. So I guessed that the images are approximately 150 * 2.4 = 360 dpi and tried downsampling to 180 instead. But it seems the images are all slightly off?
Is there a way to specify the factor instead of the dpi?
Is there a way to "round" the factor?
No, there is no way to specify the factor (this is the Adobe specification for distiller params; we are currently limited to those). You cannot specify an approximation for rounding either, without modifying the source code.
You can use a different downsampling algorithm.
[much later]
In fact I just checked the current code, and you must be using an old version of Ghostscript.
The current default downsampling filter is the Bicubic filter, and if you do force the Subsample filter, the code checks whether the requested downsample factor is an integer.
If the factor is not an integer but is within 0.1 of an integer, it forces the factor to the nearest integer (both factors in your errors, 1.994360 and 2.000029, are within 0.1 of 2 and would be clamped).
If it's outside 0.1 of an integer, it aborts the Subsample filter and switches to Bicubic.
I'd recommend upgrading.
[later edit]
So, avoiding the bogus ColorDownsampleOption, the problem is actually not colour images at all; it's monochrome images, or more precisely in your case, imagemasks.
I set up this command line:
gs \
-sDEVICE=pdfwrite \
-sOutputFile=pdfwrite.pdf \
-dDownsampleColorImages=true \
-dDownsampleGrayImages=true \
-dDownsampleMonoImages=true \
-dColorImageDownsampleThreshold=1 \
-dGrayImageDownsampleThreshold=1 \
-dMonoImageDownsampleThreshold=1 \
-dColorImageDownsampleType=/Bicubic \
-dGrayImageDownsampleType=/Bicubic \
-dMonoImageDownsampleType=/Bicubic \
-dColorImageResolution=72 \
-dGrayImageResolution=72 \
-dMonoImageResolution=100 "gs sample.pdf"
And that produces an error message that the only filter available for monochrome images is Subsample, followed by the error messages you quote about the imprecise factor.
I guess basically this makes my point that an example file is pretty much vital in order to investigate problems.
So there is a problem there, and I will look into it; obviously for monochrome images the resolution should be clamped to the nearest integer factor, since no other filter is possible. However, Gray and Colour images do work as expected.
Reporting a bug, as I suggested in an earlier comment, would probably have got to this point much sooner. I'd still suggest you do that, so that this is not overlooked.
You may be interested to note that, for me, the resulting file when I don't downsample monochrome images, but do downsample the others as per the command line above, is 785KB, the original being 2.5MB.