Download images from many pages located on one website - automation

I need to download an image from each of many pages located on one website.
More specifically:
I have a list of URLs from one website.
The image on every page has the same class name.
So I need to download that image from every page.
How can I automate this? Please help me create a script.
Thanks!

Try Wget: http://en.wikipedia.org/wiki/Wget
You would have to create a bash script on Linux. You could also search Google for a script that already does this.
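A minimal bash/wget sketch along those lines, under some assumptions: the page URLs are listed one per line in a urls.txt file, the target <img> tag's class contains a known value (here "photo", a placeholder), the src attribute holds an absolute URL, and each <img> tag sits on a single line. For anything less regular, an HTML-aware parser would be more reliable than grep.

    #!/usr/bin/env bash
    # Hypothetical class name and file name; adjust to your site.
    CLASS="photo"
    mkdir -p images
    while read -r page; do
      # Fetch the page, grab the first <img ... class="...photo..." ...> tag,
      # then pull the URL out of its src attribute.
      src=$(wget -qO- "$page" \
            | grep -oE '<img[^>]*class="[^"]*'"$CLASS"'[^"]*"[^>]*>' \
            | grep -oE 'src="[^"]+"' \
            | head -n 1 | cut -d'"' -f2)
      # Download the image into ./images if a src was found.
      [ -n "$src" ] && wget -q -P images "$src"
    done < urls.txt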

Related

Where and how can I copy Flash-animated HTML content into a Lotus Notes database in order to invoke it from a document or page?

I know Domino as a web server. In the data folder there's an HTML folder which is normally the root of the web content. But I want to put the files and folders with all the Flash content inside a Notes database (NSF).
I know I can do this via an import through the Package Explorer in the Notes Designer client. But where should I copy it to? Into the WebContent folder?
I also know that documents and pages can act as .html files. But there are also JavaScript, .swf, .mp3 and other files.
How can I then invoke the starting-point file, like index.html?
I appreciate any helpful answer :-)
I would not recommend putting it on the Domino server file system; that's a very dirty way to do it.
Instead, create one document per attachment/video. That gives you an overview of all the files you have in the database, plus the ability to add and delete them (it requires a little development work).
You can also upload your resources as 'Files' design elements; however, that would require designer access each time you want to change them.
As for invoking them, have a look here: Domino URL Commands.
Also check this link: URL commands for opening attachments, image files, and OLE objects.
For everything that is not plain HTML you can use file resources. Linking them is as simple as writing a URL that looks like this:
http://yourserver.com/yournsf.nsf/JSLibrary.js?OpenFileResource
You can find all possible URL commands in your Designer help or in this URL cheat sheet.
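As a rough illustration only: an index page stored in the NSF could reference those resources with URLs like the ones below. The view name, document key and file names are made up; the ?OpenFileResource URL follows the example above, and $FILE/...?OpenElement is the standard Domino URL for opening a document attachment.

    <!-- hypothetical index page served from yournsf.nsf -->
    <html>
      <head>
        <!-- file resource stored in the database -->
        <script src="/yournsf.nsf/JSLibrary.js?OpenFileResource"></script>
      </head>
      <body>
        <!-- attachment on a document, addressed as View/DocumentKey/$FILE/filename -->
        <embed src="/yournsf.nsf/MediaView/intro/$FILE/intro.swf?OpenElement"
               type="application/x-shockwave-flash">
      </body>
    </html>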

Media folder suddenly empty

I'm trying to upload some images in my WordPress backend, but it fails every time. It could be a permissions issue, because I have had trouble with that earlier in the project. But the weirdest thing is that on my server the media folder is suddenly empty, while all the media in the WordPress backend is still there and the website is running fine. Does anybody have an idea how this could have happened?
Try editing one of the pictures in the Media Library on the admin dashboard; there will be a text field titled 'File URL' on the right side.
Paste that into the address bar and see if the browser loads it correctly. If it does, the problem is with the folder permissions (or you might have opened the wrong directory); if not, you might want to try clearing the browser cache or opening it from another platform.
Hope it helps!
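If it does turn out to be permissions, the usual WordPress guidance is directories at 755 and files at 644, owned by the web server's user. A rough sketch, assuming a default install layout and that you run it from the WordPress root:

    # Check whether the uploads folder exists and who owns it.
    ls -ld wp-content/uploads
    # Reset standard permissions: directories rwxr-xr-x, files rw-r--r--.
    find wp-content/uploads -type d -exec chmod 755 {} \;
    find wp-content/uploads -type f -exec chmod 644 {} \;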

Don't open images in the browser; force a download when the user navigates directly to the image

Is there a setting for Apache or .htaccess to not open images in the browser, but instead force the user to download them to their computer, e.g. when they navigate to http://site.com/image.jpg they are prompted to download the file? The only time I want images loaded in the browser is when they're embedded in an HTML page, e.g. http://site.com/mypage.html.
If that is not possible, then can we at least block it completely, so that if they go to http://site.com/image.jpg they get a 403 error (or similar) for any file other than .html and .php?
There would be a bit of a performance overhead, but you could make a page (PHP or whatever language) whose only job is to pull up images from a directory that otherwise isn't web-accessible. You could then make all image links go to that page, and make them still look like image URLs using rewrites.
For example: /images/25.jpg => /images.php?id=25&type=jpg, or something similar.
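If you go that route, the rewrite in .htaccess could look roughly like this (mod_rewrite must be enabled; the images.php name and URL pattern just follow the example above and are placeholders):

    RewriteEngine On
    # /images/25.jpg -> /images.php?id=25&type=jpg (the browser still sees the .jpg URL)
    RewriteRule ^images/(\d+)\.(jpe?g|png|gif)$ /images.php?id=$1&type=$2 [L,QSA]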
Not sure exactly what you are trying to do, but you might want to read this:
http://michael.theirwinfamily.net/articles/csshtml/protecting-images-using-php-and-htaccess
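For the two behaviours asked about in the question, a hedged .htaccess sketch (pick one option; it needs mod_setenvif, mod_headers and mod_rewrite, 'site.com' is a placeholder, and the Referer header can be absent or spoofed, so treat this as a deterrent rather than real protection):

    # Mark requests that arrive with a Referer from your own pages.
    SetEnvIfNoCase Referer "^https?://(www\.)?site\.com/" local_ref=1

    # Option 1: force a download prompt for direct image requests only.
    <FilesMatch "\.(jpe?g|png|gif)$">
        Header set Content-Disposition attachment env=!local_ref
    </FilesMatch>

    # Option 2: or answer direct image requests with a 403 instead.
    RewriteEngine On
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?site\.com/ [NC]
    RewriteRule \.(jpe?g|png|gif)$ - [F]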

Is there a webcrawler that can download an entire site?

I need to know if there is a crawler/downloader that can crawl and download an entire website with a link depth of at least 4 pages. The site I am trying to download has JavaScript hyperlinks that are rendered only by a browser, so a crawler is unable to follow these hyperlinks unless it renders them itself.
I've used Teleport Pro and it works well.
MetaProducts Offline Explorer claims to do what you need.
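For the parts of a site that use ordinary links, wget can mirror to a given depth; it does not execute JavaScript, though, so the script-generated links mentioned above will still be missed. A sketch with a placeholder URL:

    # Recurse 4 levels deep, fetch page assets, rewrite links for offline use,
    # and pause between requests to be polite to the server.
    wget --recursive --level=4 --page-requisites --convert-links \
         --wait=1 http://example.com/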

Saving a copy of Netflix's new website

I am trying to dig into Netflix's new website. Firebug and similar tools are helpful, of course, but I'd like to really get in there and play with it. Can anyone suggest a way to get a local copy onto my computer? I tried basic wget, but I only get the download page. I tried using name:pass as part of the URL. I also tried combining a curl command in the terminal with Wget.
I'd appreciate it!
Try saving the page using the keyboard shortcut Ctrl + S, or click 'Save page as...' in your browser's menu.
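If you would rather have a wget copy than a 'Save page as...' snapshot, one common reason wget only returns that initial page is that it has no logged-in session. A rough sketch, assuming you have exported your browser's cookies for the site to a cookies.txt file (the URL path is a placeholder, and a heavily script-driven site may still not reproduce well offline):

    # Reuse the browser's session cookies, pull page assets, and rewrite
    # links so the saved copy works locally.
    wget --load-cookies cookies.txt --page-requisites --convert-links \
         http://www.netflix.com/SomePage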