Pulling different background images into CSS based on the requesting URL

I have a website that specifies a background image in its CSS.
The client has asked us whether it would be possible to have a different background for each page (each category, technically). Let's call their site clients.com.
Unfortunately we don't own the website; we simply manage it for the client, and as such have very limited access... We can update the CSS, but not much in the way of HTML (or .aspx, technically).
My idea was that we might be able to point the background image at another server, perhaps resources.clients.com. Ideally that server would return a different image based on the URL that requested it. Perhaps I'd keep a database mapping URLs to image files, with unlisted URLs falling back to a default...
Any thoughts on how this might be achieved, or other suggestions, would be greatly appreciated. I'm also intrigued by how this could be done out of personal interest, not just for this project's/client's sake.
Thanks :D

I'm afraid your idea won't work, for several reasons. The request for a resource referenced in a style sheet is made by the visitor's browser, and that request won't reliably tell your image server which page was being viewed (the referrer on such a request typically points at the style sheet itself, not at the page the visitor is on).
You'd be better off asking the developers of the application to add category-specific CSS classes to the body of each page, so you can define corresponding styles.
Another reason your approach won't work is browser caching. Referencing a single image in a CSS file, with nothing to distinguish one request from another, will let the browser cache that image (assuming caching headers are not disabled on the application server). So when the visitor heads to another category page, the browser will still serve the picture it loaded for the previous category.

Related

DokuWiki: Mix human-written content with content from an automated process

We run DokuWiki.
We have one page for every server.
We want to mix automated content (like number of CPUs) with content created by human beings by hand and keyboard.
What is an easy and not so "dirty" way to solve this?
Include generated pages and their sections in user-maintained pages, or vice versa. As a benefit, you will be able to forbid user access to the generated pages (namespaces) via ACLs.
Use plugins like data or sqlite to include smaller pieces of information on the page.
It might be enough to have discussions available for generated pages.

How to move html with a tracking pixel to a different server?

There is a web page with tracking pixels (and probably other tracking code) that I would like to move to my own server. Viewing the page on the current server registers a page view as expected, but when I move the HTML (exactly, without any changes) to my own server, the hits are not registered. I'm trying to understand why - is it possible that the tracking mechanism can determine the IP address (or domain) of the page making the request?
Thanks for any advice.
If there's an external js file involved in the tracking, you'll need to add that too.

Redirection Before Page Load

I have an injection script (a start script) whose ultimate goal is to redirect to a different URL. That injection script needs to access the extension settings, so it sends a message to a global HTML file. That global file checks the settings and redirects to the appropriate URL by setting the safari.application.activeBrowserWindow.activeTab.url property.
What I'm finding is that, all too often, the interim page loads first, making for an annoying UX at best and introducing errors at worst. I'm assuming this is a result of the asynchronous nature of messaging, but I haven't been able to find a way to stop it.
Is there any way to prevent the default behavior (loading the originally requested page) while still reading from extension settings?
Thanks.
It looks like this simply isn't possible given the current state of the Safari extension API.
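For reference, the round-trip described above looks roughly like the sketch below, written against the legacy safari.* extension JavaScript API; the message name ("getRedirectTarget") and the settings key (redirectUrl) are made-up placeholders. Because the reply arrives asynchronously, the tab has already started loading the original URL by the time the global page reassigns it - which is the flash the question describes.

    // Injected start script: runs at document start and asks the global page where to go.
    safari.self.tab.dispatchMessage("getRedirectTarget", { url: window.location.href });

    // Global page (global.html): reads the extension settings and redirects the asking tab.
    safari.application.addEventListener("message", function (event) {
        if (event.name === "getRedirectTarget") {
            var target = safari.extension.settings.redirectUrl; // hypothetical setting key
            if (target && target !== event.message.url) {
                event.target.url = target; // event.target is the tab that sent the message
            }
        }
    }, false);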

Can search engines index pages generated by server side code?

I'm guessing a site like Stack Overflow doesn't keep an HTML file around for every question ever asked. Instead, server-side code creates the page every time a question is clicked on (I think). Is it possible for search engines to index every question on Stack Overflow, or would a page per question need to be kept in a directory so the search engine can crawl it?
Yes. Search engines can index dynamically generated pages no problem. In fact, from the search engine bot's perspective, it can't really even distinguish between a dynamically generated page and a static one.
You might be interested by the Dynamic URLs vs. static URLs post on the Official Google Webmaster Central Blog.
Yes, it's perfectly possible - when a link is followed, the server returns HTML just like for any other web page. The only difference is that the server generated it, rather than a person.
As far as the client (be it a browser or search engine) is concerned, there is no difference between a server-generated page and a static file. They're virtually indistinguishable (depending on how the page is generated, it might be missing Last-Modified headers, etc). As such, yes, search engines can index generated pages without a problem.
That said, there is something to be said for giving them a hint. Using sitemaps, for example, gives a search engine a nice listing of all your pages, so it's less likely to miss them. More importantly, it can summarize last modified times, to focus the search engine's attention on what has changed recently. This isn't mandatory, but it does help - regardless of whether the pages are static HTML or generated.
Any link that uses a GET can be followed by most crawlers. Anything that requires a POST will generally be ignored.
The mechanism for generating the page is irrelevant.
Yes, as long as it isn't restricted by robots.txt or meta tags. A search engine requests a web page just like a normal user; no one has access to the server-side code (unless your site has been hacked).
Search engines can see pretty much anything on a given Web page that isn't hidden behind client-side code (i.e., JavaScript).
So, if there's a URL that you can enter into your browser's address bar to get this page, and this page is linked to from somewhere, a search engine will find it and "see" the same content that you do. The fact that the page was generated dynamically by a server is irrelevant to a search engine, since what is sent to a browser upon requesting a URL is still just an HTML file.
In other words, that HTML file doesn't exist in the same form on the server - it's actually server-side code that generates HTML, not a static HTML file - but that's not what a search engine crawls and indexes anyway; it follows links to document URLs, which are exactly what you see in your browser's address bar.

Refresh browser via cron (or not) to a different page on remote request?

I need to display pages in a tutorial fashion. I looked into NetSupport, BeamYourScreen and other possibilities, but I do not want the viewers to have to download anything. I cannot use GD / send screenshots because of the audio/video instructions embedded in some of the pages.
Basically, I need the ability to "refresh" a user's browser window to a different page via an interface on my end - whether via a form submission, JavaScript or any other type of "controller" that allows me to change the page in the viewer's browser. Perl is preferred, but PHP / JavaScript - whatever works and is cross-browser. I set up a simple JavaScript page-forward timer that "works", but page load times and conversation interruptions are a huge factor.
The entire tutorial website will be developed around this ability.
I was looking into curl / cron / wget methods, but found little information.
I have seen forum and chat scripts that basically perform a similar task, but there must be a simple(ish) solution in lieu of hacking up another script to suit my needs.
I do not want others to be able to control the pages either. The site really only needs to be accessible during the tutorial; however, it "could" remain web-accessible as long as user interaction stays normal when it isn't being controlled.
The initial site concept is based on instructing people how to properly introduce new pets into a home. It will be operated by a veterinarian who saved my pet's life; I wanted to give something back.
Possible? I really appreciate simple examples etc...
You have no option but to keep polling the server for "instructions" using JavaScript. No, you can't push anything to the end user's browser yourself - neither curl nor wget will do it.
Mainly, you'll have to set up a simple request/response protocol between the browser and the server.
If you want to go deeper, you can use something like cometd/meteord/etc. If not, a hidden iframe that reloads itself and receives pages with JavaScript code for the needed actions can do the trick.
Another alternative.
Do the polling with JavaScript and a single-character flatfile. Keep a simple one-character flatfile holding a single variable, and write the server side in Perl (it is faster and uses fewer resources than PHP). The viewer's script polls the flatfile, reads the variable, and goes wherever the variable points it. The flatfile is written to by the controller. Done.
I guess you could also rename an empty flatfile and use that as the controller. I am unsure which is faster: opening and reading a specific file, or hitting the directory and returning the file name. On the controller side, it's opening and writing to a file vs. renaming a file. Maybe they cancel each other out in resources and time?
This way the site can act as a normal site. When you want remote users to see a "presentation" (automatically being shown the site pages at the controller's pace), the controller activates polling and tells the viewers to push a start button. This allows a remote instructor to load pages for the viewers at his leisure.
It is a simple solution with nothing really sophisticated going on. No frames are needed either - just JavaScript enabled. A rough sketch of the viewer side is below.
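A minimal sketch of that viewer-side polling, assuming the controller writes a single page key into a small text file - the file name (/control.txt) and the key-to-URL mapping are made up for illustration:

    // Viewer side: check the control file every couple of seconds and
    // navigate when the controller changes its value.
    var current = null;

    function poll() {
        var xhr = new XMLHttpRequest();
        // Cache-busting query string so the browser never serves a stale copy.
        xhr.open("GET", "/control.txt?ts=" + new Date().getTime(), true);
        xhr.onload = function () {
            var value = xhr.responseText.replace(/\s+/g, "");
            if (current === null) {
                current = value;                // remember the state at "start"
            } else if (value !== "" && value !== current) {
                current = value;
                window.location.href = "/pages/" + value + ".html"; // hypothetical mapping
            }
        };
        xhr.send();
    }

    setInterval(poll, 2000); // every two seconds is plenty for a tutorial pace

The controller then just overwrites /control.txt with the next page key whenever it wants every viewer to move on.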
Any better suggestions are welcome!
It occurred to me that what you might want is HTML Push technology. Check out the wiki; they have several links. I have never used it myself.