I'm trying to use wkhtmltopdf in the terminal to turn a webpage into a PDF, follow the links on example.com, and save those pages as well. For instance, any hyperlink on the page, when clicked, should jump to a locally saved page/section matching what the online link points to.
Here is what I have so far. It only saves the first page.
wkhtmltopdf --outline --outline-depth 2 http://example.com g2oogle.pdf
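For what it's worth, wkhtmltopdf does not crawl links on its own, so the following is only a workaround: if you can list the linked pages yourself, you can pass several URLs in one run and they all end up in the same PDF (links between pages rendered in the same run are, as far as I know, turned into internal PDF links). The /about and /contact URLs below are just placeholders for the pages actually linked from example.com.

    # wkhtmltopdf renders every page given on the command line into one PDF;
    # replace the extra URLs with the real pages linked from example.com
    wkhtmltopdf --outline --outline-depth 2 \
        http://example.com \
        http://example.com/about \
        http://example.com/contact \
        g2oogle.pdf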
I have a PDF file that does not show the page numbers when I open it. When I zoom in or zoom out, it starts showing the page numbers.
I have checked the page zoom setting; it is already at 100%.
Other browsers handle this fine; the issue only occurs here.
What should I do?
Thanks
I have tried viewing both the local PDF file and the webpage that contains the PDF file using the legacy version of Microsoft Edge and the latest version of Microsoft Edge, and they all show the page numbers.
How do you view the PDF file in Microsoft Edge: do you open the local PDF file directly in Edge, or view the PDF file inside a web page? If you are viewing the PDF file in a web page, the issue may be related to the code. Can you post the related code so we can reproduce the problem?
The issue might also be related to the browser cache; you could try clearing the browser data (cache) and then recheck whether it works. If it still doesn't work, try resetting or repairing the Edge browser.
I run an Apache web server on Ubuntu 16.04.
I am creating many KML files on my server, and I want the user to click a link that automatically opens the KML in Google Maps (and does not download it).
For example, a "click to open map" link doesn't open the file; it only downloads it.
If I host the file on Google Drive and get a shareable link to it, everything works great when I use that link (I can use it as a simple href in an HTML page).
Unfortunately, going through Google Drive is not a feasible option for me (too many files, high update rate).
Is there any other way I can do this with a local file?
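Not a full answer, but two pieces that are usually involved: serving the KML with its registered MIME type, and handing Google Maps a public URL to the file. A minimal .htaccess sketch for the MIME type (AddType is a standard Apache directive; on its own it only changes how the file is served, it does not make Google Maps open it):

    # .htaccess: serve .kml files with the official KML MIME type
    AddType application/vnd.google-earth.kml+xml .kml

The traditional trick for the link itself was to point the href at Google Maps with the KML's public URL as the q parameter (e.g. https://www.google.com/maps?q=http://yourserver.example/path/file.kml); I can't promise the current version of Google Maps still renders KML passed this way, so treat that as something to test rather than a guarantee.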
I installed an SSL certificate, which required me to switch all HTTP links over to HTTPS links. I did this by downloading all the site files (including the database), doing a find/replace to change every http:// to https://, and then uploading the new site files (and .sql file) to the server. Everything appeared to be working, except that .png images won't load in any browser.
I can't even pull up an image by typing the direct link into the address bar (with or without the https). The images previously worked fine; now they just show the red X.
Any ideas on what's going on and how I can fix it?
The site is built with Joomla 2.5. You can see it here: https://www.detourjournal.com/ (note the two .png images in the footer that are not loading).
It's not your Joomla! setup. For starters, it doesn't appear to be Joomla! redirecting calls to SSL on normal pages, and Joomla! doesn't affect direct file URLs for images this way.
You appear to have configured your server to force HTTPS (possibly via .htaccess).
It's also not specific to PNG files, since your logo, /images/stories/logo2.png, is being served without a problem. The same goes for the PNGs in VirtueMart, e.g. the close label.
Looking at the headers for those images, the sizes don't match what is being returned... so it's most likely that corrupt images are causing Apache to bork.
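If you want to check this yourself, compare the Content-Length header of a failing image with the bytes you actually get back; a mismatch (or an HTML error page in place of image data) points at a damaged file rather than an HTTPS problem. A quick sketch from a shell, with the image path being a placeholder for one of the failing footer images:

    # Look at the response headers only
    curl -sI https://www.detourjournal.com/images/footer.png

    # Download the body and see what it really is
    curl -s -o footer.png https://www.detourjournal.com/images/footer.png
    file footer.png
    ls -l footer.png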
Is there a setting for Apache or .htaccess that stops images from opening in the browser and instead forces the user to download them to their computer to open? E.g. when someone navigates to http://site.com/image.jpg, they should be made to download the file. The only time I want images loaded in the browser is when they're embedded in an HTML page, e.g. http://site.com/mypage.html.
If that is not possible, then can we at least block direct access completely, so that a request for http://site.com/image.jpg (or any file other than HTML and PHP) gets a 403 error or something?
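For reference, a minimal .htaccess sketch of both behaviours asked about, assuming mod_headers and mod_rewrite are enabled; note that the Referer check is only a rough stand-in for "embedded in one of my HTML pages", since that header can be missing or faked:

    # Send image files as downloads when requested directly
    # (most browsers still render them when embedded via <img>, but behaviour varies)
    <FilesMatch "\.(jpe?g|png|gif)$">
        Header set Content-Disposition attachment
    </FilesMatch>

    # Alternative: return 403 for image requests that don't come from this site's pages
    RewriteEngine On
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?site\.com/ [NC]
    RewriteRule \.(jpe?g|png|gif)$ - [F]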
There would be a bit of a performance overhead, but you could make a page (PHP or whatever language) whose only job is to pull up images from a directory that otherwise isn't web accessible. You could then make all image links go to that page, and still make them look like image URLs using rewrites, as sketched below.
Page: /images/25.jpg => /images.php?id=25&type=jpg, or something similar.
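A minimal sketch of that idea, assuming the real images live in a directory outside the web root such as /var/www/private-images and the script is called images.php (both names, and the rewrite rule, are placeholders):

    <?php
    // images.php - serves an image from a directory that is not web accessible
    $dir  = '/var/www/private-images';
    $id   = isset($_GET['id']) ? preg_replace('/\D/', '', $_GET['id']) : '';
    $type = (isset($_GET['type']) && $_GET['type'] === 'png') ? 'png' : 'jpg';
    $path = "$dir/$id.$type";

    if ($id === '' || !is_file($path)) {
        http_response_code(404);
        exit;
    }

    header('Content-Type: ' . ($type === 'png' ? 'image/png' : 'image/jpeg'));
    header('Content-Length: ' . filesize($path));
    readfile($path);

and in .htaccess, a rewrite so the links keep looking like image URLs:

    RewriteEngine On
    RewriteRule ^images/(\d+)\.(jpe?g|png)$ images.php?id=$1&type=$2 [L,QSA]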
Not sure exactly what you are trying to do, but you might want to read this:
http://michael.theirwinfamily.net/articles/csshtml/protecting-images-using-php-and-htaccess
I need to know if there is a crawler/downloader that can crawl and download an entire website with a link depth of at least 4 pages. The site I am trying to download has JavaScript hyperlinks that are rendered only by a browser, so a crawler cannot follow these hyperlinks unless it renders them itself.
I've used Teleport Pro and it works well.
MetaProducts Offline Explorer boasts that it can do what you need.
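If the links were plain HTML rather than generated by JavaScript, wget could already do a fixed-depth mirror from the command line; it does not execute JavaScript, so for this particular site it only covers whatever links appear in the raw HTML, but the options are worth knowing:

    # Mirror example.com down to a link depth of 4, rewriting links for local browsing
    # (wget only follows links present in the HTML; it does not run JavaScript)
    wget --recursive --level=4 --page-requisites --convert-links \
         --adjust-extension --no-parent http://example.com/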