wget downloading only PDFs from website

I am trying to download all PDFs from http://www.fayette-pva.com/.
I believe the problem is that when I hover over a link to download a PDF, Chrome shows the URL in the bottom-left corner without a .pdf file extension. I found and used another forum answer for a similar case, but there the URL did end in .pdf when hovering over the PDF link with my cursor. I have tried the same code from the answer linked below, but it doesn't pick up the PDF files.
Here is the code I have been testing with:
wget --no-directories -e robots=off -A.pdf -r -l1 \
http://www.fayette-pva.com/sales-reports/salesreport03-feb-09feb2015/
I am testing this on a single page that I know has a PDF on it.
The complete command should be something like
wget --no-directories -e robots=off -A.pdf -r http://www.fayette-pva.com/
Related answer: WGET problem downloading pdfs from website
I am not sure whether downloading the entire website would even work, or whether it would take forever. How do I get around this and download only the PDFs?

Yes, the problem is precisely what you stated: The URLs do not contain regular or absolute filenames, but are calls to a script/servlet/... which hands out the actual files.
The solution is to use the --content-disposition option, which tells wget to honor the Content-Disposition field in the HTTP response, which carries the actual filename:
HTTP/1.1 200 OK
(...)
Content-Disposition: attachment; filename="SalesIndexThru09Feb2015.pdf"
(...)
Connection: close
This option is supported in wget at least since version 1.11.4, which is already 7 years old.
So you would do the following:
wget --no-directories --content-disposition -e robots=off -A.pdf -r \
http://www.fayette-pva.com/
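If you want to check whether a particular download link really sends this header before mirroring the whole site, you can fetch just the response headers (the link URL below is a placeholder):
wget --server-response --spider 'http://www.fayette-pva.com/<one-of-the-download-links>' 2>&1 | grep -i content-disposition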


Can I download files with Selenium?

I've managed to locate the correct element on the web page and I can click it. I see the download prompt come up, asking me whether I should open the PDF with the registered app, open it in Firefox, or save it.
My code looks like this:
my $profile = Selenium::Firefox::Profile->new;
$profile->set_preference(
    "browser.download.dir"                   => "/Users/john/Downloads",
    "browser.download.folderList"            => "1",
    "browser.helperapps.neverAsk.SaveToDisk" => "application/pdf"
);
my $driver = Selenium::Firefox->new(
    'binary'          => "/usr/local/bin/geckodriver",
    'firefox_profile' => $profile
);
[...]
$driver->find_child_element($driver->find_element_by_id('secHead_1'), "./a[\@class='phoDirFont']")->click();
My understanding is that if I've set up the correct preferences, then this file would save without the prompt. That's not happening though. I've dug down into it with dev tools, and it does seem to be serving up the pdf file with "application/pdf" as the mime-type. Firefox certainly recognizes it as one (offering to open it in Firefox, not just with the registered app).
If there is another method (perhaps by sending keystrokes to the prompts), that would be acceptable too. Though I've been using Firefox (trying to get away from Google products in my personal life), if Chrome would make a difference I could switch to that instead.
Looking at about:config in the Selenium-opened window, the preference settings seem to have taken. However, it still prompts for the file.
According to https://www.selenium.dev/documentation/en/worst_practices/file_downloads/, file downloads are not supported in Selenium.
You could try using curl instead, e.g.:
system("curl -s -o output.pdf -L $URL");
where -s stands for silent, -o specifies where to save the file, and -L tells curl to follow redirects.
If you need cookies you can obtain them with:
my @cookies = $driver->get_all_cookies();
extract whichever cookies you need and then pass them to curl with a --cookie parameter, like so:
system("curl -s -o output.pdf -L --cookie $cookie_one --cookie $another_cookie $URL");

Converting docx to PDF/A with LibreOffice Writer

I am happily converting docx files to PDF via the command line (controlled via C# process calls) from my service.
Unfortunately, I could not find any internet search results on how to set the options for the output PDF that the GUI offers me. I am specifically looking to generate PDF/A and tagged PDF via the command line.
Anyone ever done this and knows how to do that?
EDIT:
Obviously getting a PDF/A can be done by using unoconv instead.
On Windows one would use the following command line in a checked-out unoconv repository:
python.exe .\unoconv -f pdf -eSelectPdfVersion=1 C:\temp\libre\renderingtest.docx
I did not find further information on how to select other things (tagged PDF etc.) and where to get a complete list of the options that are available.
EDIT: It seems one can simply try the different options in the GUI. The settings get saved to C:\Users\<userName>\AppData\Roaming\LibreOffice\4\user\registrymodifications.xcu. One can then look up the changed setting and provide it to unoconv like this:
python.exe .\unoconv -f pdf -eUseTaggedPDF=1 -eSelectPdfVersion=1 C:\temp\libre\renderingtest.docx
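To find which key changed after toggling a setting in the GUI, you can search that file from a Windows shell; for example (the key name here is just an illustration):
findstr /i "SelectPdfVersion" "%APPDATA%\LibreOffice\4\user\registrymodifications.xcu"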
Still not sure if I am doing this correctly though.
The gotenberg project shows how this can be done using unoconv.
$ curl --request POST 'http://localhost:3000/forms/libreoffice/convert' --form 'files=@"doc.docx"' --form 'nativePdfFormat="PDF/A-1a"' -o pdfA.pdf
Example PDF

Specify default substitution font when converting pdf to image using imagemagick and font is missing

I am using Spatie/pdfToImage, which builds on Ghostscript and ImageMagick, on my server to:
Take a multiple-page PDF from an email using Mailgun routing.
Save the PDF in the folder /docs_pdf as file.pdf.
Use a foreach loop to go through each page and save each page as a PNG to /docs as file_#.png.
Locally, where I use Laravel Valet, everything works fine.
On my server (DigitalOcean, through Laravel Forge), the text in a multi-page PDF that is in Swedish is transformed from normal Swedish into a bunch of random letters and signs.
The left is correct (yes, it's true, it's Swedish) and the right is wrong:
Someone suggested to me that this is probably a matter of a font missing on the server. The fonts used in the PDF:
<</StemV 68/FontName/PSQHMO+FoundrySans-Normal/FontFile2 216 0 R/FontStretch/Normal/FontWeight 400/Flags 32/Descent -240/FontBBox[-40 -240 960 916]/Ascent 916/FontFamily(FoundrySans-Normal)/CapHeight 667/XHeight 465/Type/FontDescriptor/ItalicAngle 0>>
<</StemV 100/FontName/MLHPWU+FoundrySans-Medium/FontFile2 217 0 R/FontStretch/Normal/FontWeight 400/Flags 32/Descent -241/FontBBox[-42 -241 1008 916]/Ascent 916/FontFamily(FoundrySans-Medium)/CapHeight 667/XHeight 470/Type/FontDescriptor/ItalicAngle 0>>
<</StemV 68/FontName/SUEECI+FoundrySans-Normal/FontFile2 218 0 R/FontStretch/Normal/FontWeight 400/Flags 4/Descent -240/FontBBox[-40 -240 960 916]/Ascent 916/FontFamily(FoundrySans-Normal)/CapHeight 667/XHeight 465/Type/FontDescriptor/ItalicAngle 0>>
<</StemV 48/FontName/KIDDUY+FoundrySans-Light/FontFile2 9 0 R/FontStretch/Normal/FontWeight 400/Flags 32/Descent -248/FontBBox[-28 -248 978 924]/Ascent 924/FontFamily(FoundrySans-Light)/CapHeight 667/XHeight 458/Type/FontDescriptor/ItalicAngle 0>>
Here is the documentation on the configuration of fonts in ImageMagick and Ghostscript:
https://www.imagemagick.org/script/resources.php
How can this be solved?
Update:
I have now made a clean install on a new server.
Installed Imagick and spatie/pdfToImage
As suggested by KenS I ran
gs -sDEVICE=png16m -o out%d.png file.pdf
terminal output
forge@Server:~/app/storage/app/public/files$ gs -sDEVICE=png16m -o test_out%d.png file.pdf
GPL Ghostscript 9.22 (2017-10-04)
Copyright (C) 2017 Artifex Software, Inc. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Processing pages 1 through 2.
Page 1
Page 2
The document rendered the same, i.e. wrong.
I am at a complete loss. I don't know what the next step might be.
Update2:
I also ran the ImageMagick convert command, and the image rendered the same way.
So whether I do it with Ghostscript alone, ImageMagick, or spatie/pdfToImage, I get the same output.
Well, the current version of Ghostscript (9.25) renders this acceptably for me; that is, the text appears to be correct. All the fonts are embedded, so there shouldn't be any problems.
And this means that even if you did replace the default font substitution, it wouldn't help, because Ghostscript shouldn't be using the default font; it will be using the fonts embedded in the PDF file.
Without knowing what version of Ghostscript you are using (I see from a later comment that it's 9.25), or the command line that is used to start it, I can't really do a like-for-like comparison. It's hard for me to see how you could be getting such a different result, though. That looks like Ghostscript has failed to find the embedded fonts.
It's possible that whatever package you are using has done something 'unfortunate'. The various package maintainers on Linux add their own patches, and sometimes modify the way that Ghostscript is built. Possibly that has broken something.
If you are able to build Ghostscript yourself, you could try cloning our Git repository and doing that. You could also try downloading the Linux binaries from our website. They won't work with every Linux distribution (different ABI), but you can try; you might be lucky.
You could also try running Ghostscript directly on the PDF file. Something like:
gs -sDEVICE=png16m -o out%d.png file.pdf
should produce 2 PNG files, out1.png and out2.png. It will also produce a bunch of stuff on the terminal. That back-channel output is valuable information for me, so if you can reproduce the problem, I'd like to see that too.
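If you want to capture that terminal output to share it, you can log both streams while rendering, e.g.:
gs -sDEVICE=png16m -o out%d.png file.pdf 2>&1 | tee gs-output.log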
One last thought: it's possible to have more than one version of Ghostscript installed simultaneously, and it's possible that your current setup is using an old version of Ghostscript.
I can't help you with ImageMagick or Spatie, but if you can debug those to the point where you can reproduce the problem with a plain Ghostscript command line then I can look further at it.
Finally got it to work. I first want to give kudos to KenS, who really helped me; without him it would not have worked.
This is what I did:
1 - I removed Ghostscript:
sudo apt-get purge --auto-remove ghostscript
then
wget https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs925/ghostscript-9.25.tar.gz
tar xvf ghostscript-9.25.tar.gz
Enter the unpacked folder and do
./configure
make
make install
then
sudo ln -s /usr/local/bin/gs /usr/bin/gs
On top of the above I did:
sudo add-apt-repository ppa:glasen/freetype2
and then:
sudo apt update && sudo apt install freetype2-demos
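To verify that the newly built binary is the one actually being picked up:
gs --version
# should now print 9.25
type -a gs
# lists every gs on the PATH, in case an older install still shadows the new one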

How to download CSV file with poltergeist using Capybara on phantomjs?

For an integration test, I need to download a CSV file using the Poltergeist driver with Capybara. In Selenium (for example with the Firefox/Chrome webdriver), I can specify the download directory and it works fine. But in Poltergeist, is there a way to specify the download directory or any special configuration? Basically, I need to know how downloading works with Poltergeist, Capybara, and PhantomJS.
I can read the server response headers as a Hash using Ruby, but cannot read the response body to get the file content. Any clue or help, please?
Finally I solved the download part by simply using curl inside the Ruby code, without using any webdriver. The idea is simple: first I submitted the login form via curl and saved the cookie, and then submitted (via curl) the CSV export form using the saved cookie, like this:
post_data = "p1=d1&p2=d2&p3=d3"
# Log in and save the session cookie to cookie.txt
`curl -c cookie.txt -d "userName=USERNAME&password=PASSWORD" LOGIN_SUBMIT_URL`
# Re-use the saved cookie to submit the CSV export form
csv_data = `curl -X POST -b cookie.txt -d '#{post_data}' SUBMIT_URL_FOR_DOWNLOAD_CSV`

Batch OCRing PDFs that haven't already been OCR'd

If I have 10,000 PDFs, some of which have been OCRed, some of which have 1 page that has been OCRed but the rest of the pages have not, how can I go through all the PDFs and only OCR the pages that haven't already been done?
This is exactly what I was looking for. I have thousands of scanned PDF files, where some were already OCR'ed and some were not.
So, I combined information I found on forums and Stack Overflow, and made my own solution that does EXACTLY that, which I have summarized for you here:
scan through all subdirectories recursively for PDF files;
check if the PDF was already OCR'ed, and if not, process the PDF with high-quality OCR, in the language(s) you can specify;
save the OCR'ed PDF in place, as PDF/A, overwriting the old (not-OCR'ed) one.
I am on Windows 10, and could not find the definitive answer. I tried doing this with Acrobat Pro, but that gave me many errors, and Acrobat's batch processing stops on every error or password-protected file. I also tried many other batch-OCR tools on Windows, but none worked well.
I spent countless hours manually checking which files already had a text-layer "under" the image.
UNTIL! Microsoft announced that it was now very easy to run Linux under Windows, on the same machine, on the same filesystem.
There are many more tools and utilities available on Linux than Windows, so I thought I would give that a try.
So, here it is, step by step:
Enable the Windows subsystem for Linux in the Windows Control Panel; there are many guides. Google it. It's a couple of minutes.
Install Linux from the Windows Store. Open the Windows Store, search for Ubuntu, and install. Takes around 5 minutes.
Now you have the "Ubuntu app". Run it. It shows you the Linux bash, with file access to your Windows files through /mnt/c. It's magic!
You need some Linux "apps", namely pdffonts (part of the poppler-utils package) and ocrmypdf, which you can install with sudo apt install poppler-utils and sudo apt install ocrmypdf. We will use these apps to check whether there is an embedded font in a PDF, and if not, OCR the PDF (see note below).
Install the very small bash script (below) to your home directory ~.
Go to (cd) the directory where all your PDF's are saved. For example: /mnt/c/Users/name/OneDrive/Documents.
Run the command: find . -type f -name "*.pdf" -exec /your/homedir/pdf-ocr.sh '{}' \;
Done!
Running this might, of course, take a long time, depending on how many PDF's you have, and how many of those are not OCR'ed yet.
Here is the sh-script. You should save it somewhere in your home folder so that it is easy to call from anywhere. Like so:
type cd ~. This will bring you to your home folder.
type pico pdf-ocr.sh. This will bring up an editor. Paste the below script code. Then press Ctrl+X, and press Y. Your file is now saved.
type sudo chmod +x pdf-ocr.sh. This will give the script permission to be run.
#!/bin/bash
# List the fonts on the first 5 pages; skip the two header lines of pdffonts output
MYFONTS=$(pdffonts -l 5 "$1" | tail -n +3 | cut -d' ' -f1 | sort | uniq)
if [ "$MYFONTS" = '' ] || [ "$MYFONTS" = '[none]' ]; then
    echo "Not yet OCR'ed: $1 -------- Processing...."
    echo " "
    # OCR in place; -s (--skip-text) leaves pages that already contain text alone
    ocrmypdf -l eng+deu+nld -s "$1" "$1"
    echo " "
else
    echo "Already OCR'ed: $1"
    echo " "
fi
What does this do?
Well, the find command looks up all PDF files in the current directory including subdirectories. It then "sends" these files to the script, in which pdffonts checks if there are embedded fonts. If so, skip the file and try the next one. If no embedded fonts are found, use ocrmypdf to do the OCR-ing.
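To see what that check does on a single file, you can run the same pipeline by hand (sample.pdf is a placeholder):
pdffonts -l 5 sample.pdf | tail -n +3 | cut -d' ' -f1 | sort | uniq
# empty output (or "[none]") means no embedded fonts on the first 5 pages,
# so the script would hand this file to ocrmypdf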
I found the quality of OCR from ocrmypdf VERY good, even better than Acrobat's. You can of course tweak the settings. I can imagine for example that you might want to use other languages for OCR than eng+deu+nld. You can look up all options here: https://ocrmypdf.readthedocs.io/en/latest/
Note: I am making the assumption here that if a PDF file has no embedded fonts (so it is basically an image/scan in a PDF file), it has not been OCR'ed. I know that this might not always be accurate and/or true, but for me it is enough to determine which files to put through OCR, so that it is not necessary to re-do hundreds or thousands of PDF files.
I know that it is a bit more hassle to install Linux under Windows, but it is very easy to do if you have basic Linux skills. For me it was worth the effort, because I have now made a "one click" batch processor that works. I could not find a solution for that with Windows tools.
I hope someone finds this and finds this useful. If anyone has improvements, please post them here.
Thanks.
Jos Jonkeren
Why don't you re-OCR everything? The amount of time you spend agonizing over repeated work probably exceeds the time taken for the work itself.
If by OCRed you mean that they contain the text in machine-readable form, you could use a library like Apache PDFBox to try to extract the text from the second page of the document. If it throws an error or returns garbage, it's most likely not OCRed.
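A rough shell equivalent of that idea, using pdftotext from poppler-utils as a stand-in for PDFBox (input.pdf is a placeholder):
# Extract only page 2 and strip all whitespace; if nothing is left,
# the page very likely has no text layer yet
text=$(pdftotext -f 2 -l 2 input.pdf - 2>/dev/null | tr -d '[:space:]')
if [ -z "$text" ]; then echo "input.pdf: page 2 has no extractable text, probably not OCR'ed"; fi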
Unburying this thread.
You can know which PDF files have already been OCRed by testing them with pdffonts. If there are embedded fonts, it's very probable that the PDF is already OCRed.
As for the batch processing, I wrote a little script that can batch OCR to pdf/word/excel/csv output format.
You may find it at https://github.com/deajan/pmOCR
pmOCR (poor man's OCR) is a wrapper for the ABBYY OCR CLI for Linux or the open-source Tesseract 3 solution.