CGI site, Moscow ML problem - cgi

I'm using Moscow ML combined with CGI. I have a site that calculates simple arithmetic. When the submit button is pushed, the site redirects to the actual CGI file, which outputs the result of the calculation. In my case, though, it outputs the HTML code in raw form rather than actually rendering the result as HTML. Does anybody know a solution to this problem? Thanks in advance.

It sounds like you are missing a Content-Type HTTP header such as:
Content-type: text/html; charset=utf-8
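A minimal sketch of what a working CGI response looks like, written in Python as a stand-in for the Moscow ML handler (the function name and the arithmetic are hypothetical): the essential part is that the Content-Type header and a blank line are emitted before any HTML, otherwise the browser treats the output as plain text.

```python
# Minimal CGI response sketch (Python stand-in for the Moscow ML script).
# The Content-Type header plus a blank line must come *before* the HTML
# body, or the markup is shown as raw text.
def cgi_response(a, b):
    body = "<html><body><p>{} + {} = {}</p></body></html>".format(a, b, a + b)
    return "Content-type: text/html; charset=utf-8\r\n\r\n" + body

if __name__ == "__main__":
    import sys
    sys.stdout.write(cgi_response(2, 3))
```

The same pattern applies in Moscow ML: print the header line and an empty line first, then the HTML.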

Related

HERE API request returns view instead of JSON

I'm trying to use the Places (Search) API (or any HERE API, for that matter) for the first time. I tried the example on this page and got an HTML page as a response:
https://places.cit.api.here.com/places/v1/browse?app_id=YOUR-APP-ID&app_code=YOUR-APP-CODE&in=52.521,13.3807;r=2000&cat=petrol-station&pretty
I don't know what I'm missing here, but I expected to get a plain JSON/XML response, like this example of the Routing (Isoline) API, which outputs this to the browser:
https://isoline.route.cit.api.here.com/routing/7.2/calculateisoline.json?mode=fastest;car;traffic:disabled&jsonAttributes=1&rangetype=time&start=34.603565,-98.3959&app_id=YOUR-APP-ID&app_code=YOUR-APP-CODE&range=1800
this way I could consume the service using the JSON/XML output.
I already tried both examples above in my Spring (Java) app; the second one works as expected, while the first one throws an error (as expected).
HERE API newbie here, help appreciated.
PS: this project app of mine is just about finding nearby POIs (e.g: gyms, coffee shops, pharmacies, cinemas, etc). So, any suggestions on what API is best to go with for this, also very appreciated.
I don't know the HERE API, but from a quick look it seems they generate your results depending on the Accept header in your HTTP request.
By default my Chrome sends this Accept header:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
If I use Postman to send the request without any Accept header or Accept: application/json it returns the json results.
If I add an Accept: text/html header to the request in Postman, it sends me back the PlayPen page as HTML.
You can also see in their documentation that they use the appropriate Accept: application/json header.
I suggest you use Postman to test and play with the APIs.
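For consuming the service from code rather than Postman, the same idea can be sketched with Python's standard library (the URL below is the question's own example, with its placeholder credentials left as-is): setting an explicit Accept header tells the server's content negotiation to return JSON instead of the HTML view.

```python
import urllib.request

# The Places API request from the question; YOUR-APP-ID / YOUR-APP-CODE
# are placeholders that must be replaced with real credentials.
url = ("https://places.cit.api.here.com/places/v1/browse"
       "?app_id=YOUR-APP-ID&app_code=YOUR-APP-CODE"
       "&in=52.521,13.3807;r=2000&cat=petrol-station")

# Ask explicitly for JSON so content negotiation cannot hand back HTML.
req = urllib.request.Request(url, headers={"Accept": "application/json"})

# body = urllib.request.urlopen(req).read()  # uncomment with real credentials
```

In a Spring app the equivalent is setting the Accept header on the outgoing request (e.g. via the HTTP client's header API) rather than relying on the default.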

How to add content-language for a single page in http header

My index page is trilingual... In this scenario, W3 informs us that the original 'ID solution' was dropped, without a replacement...
W3 does suggest the use of HTTP headers, but fails to explain how this is accomplished.
Can stackoverflow solve this problem?
Background
W3 suggests that this code is not good/should not be used:
<meta http-equiv="Content-Language" content="de, fr, en">
However, they then say that there is nothing to replace it:
One implication of HTML5 dropping the meta element for declaring
language is that there is now no obvious way to provide metadata about
the document inside the document itself.
That's a painful statement, but... they then go on to suggest that "content-language" should be specified in an HTTP header.
This information is associated with a particular page by settings on
the server or by server-side scripting.
Fantastic... they even show a typical example... great!
HTTP/1.1·200·OK
Date:·Sat,·23·Jul·2011·07:28:50·GMT
Server:·Apache/2
Content-Location:·qa-http-and-lang.en.php
Vary:·negotiate,accept-language,Accept-Encoding
TCN:·choice
P3P:·policyref="http://www.w3.org/2001/05/P3P/p3p.xml"
Connection:·close
Transfer-Encoding:·chunked
Content-Type:·text/html; charset=utf-8
Content-Language:·en
But where is this file... and why is this character "·" used?
Why not use comma separated en, fr, de ?
Rant (after hours of researching):
If website programmers are advised not to use in-document declarations, it would be better if we were told exactly how to edit the HTTP header for any given page.
So the question is simple:
Using CPanel, or Filezilla (and perhaps notepad++)... How do I modify the HTTP header for index.html to show that it contains English, French, German?
Note: I am currently using the bad code PLUS 'lang tags' eg:
<li lang="fr">
I'm trying to do what is right, but after looking on 'HTTP header help-sites', I never once found a statement re:
Exact file location
Filename and extension
Can anybody help solve this mystery?
If you didn't manage to find this: the HTTP headers are what you are after, as they describe the language(s) you expect your target audience to use, and there can be multiple languages. HTTP headers are set on your web server and apply to every page in your website.
If you are using Apache find the .htaccess file and add something along the lines of:
Header set Content-Language "en"
If you are using IIS then:
navigate to your website in the IIS GUI
double-click 'Http Response headers'
click 'Add...'
the name is Content-Language, the value is the language you want to use; for example, use en for any kind of English, and use commas to separate multiple values
Click OK
I got most of my info from here:
https://www.w3.org/International/questions/qa-html-language-declarations#metadata
Here is a list of the subtags you can use:
http://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
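One note on the question's actual ask ("for a single page"): the `Header set` line above applies site-wide. A sketch of scoping it to just index.html, assuming Apache with mod_headers enabled, is to wrap the directive in a Files container in the same `.htaccess`:

```apache
# Scope the header to index.html only (requires mod_headers)
<Files "index.html">
    Header set Content-Language "en, fr, de"
</Files>
```

Other pages on the site are then unaffected and can declare their own values the same way.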
thickguru supplied the .htaccess solution above, many thanks; his answer is here:
Language not declared Ideally

What are the most used Accept-Language in HTTP header?

I am making a website and want to use the Accept-Language HTTP header to help visitors find their language. However, I have a hard time finding statistics about the use of Accept-Language.
Will most visitors have something set as their Accept-Language? In some places it is written that "most modern browsers support Accept-Language", but does anyone have an overview of which specific browser versions support it? And will the browser's language usually be set as Accept-Language by default if the user doesn't actively change their own Accept-Language settings? I guess most people don't change these settings, but that doesn't mean that Accept-Language is left blank?
Does anyone have statistics for the most used language codes set inside Accept-Language? I can build a mapping system to map them to my site's languages, but I also have a hard time finding good statistics about the most used codes. It would help a lot to get an overview of how to make this work better!
Thanks in advance!
Browsers send an Accept-Language header field out of the box. By default, the language requested is the same one used for the user interface of the browser.
As Oswald said, by default browsers set this to the language used by the browser UI. So no, the default setting isn't blank, it's something like "en-US,en".
The only figures that I have found are on https://panopticlick.eff.org/results?#fingerprintTable . That page tests for the amount of information contained in HTTP requests. On the test result page, after clicking on "Show full results for fingerprinting", for various pieces of information it shows their frequency in column "one in x browsers have this value".
In row "HTTP_ACCEPT Headers" it shows the frequency of a combination of some Accept header values given by the browser. For example, it says that one in 5.25 browsers send the value "text/html, */*; q=0.01 gzip, deflate, br en-US,en;q=0.5". Unfortunately, that value seems to be the concatenation of the values of the headers "Accept" (somewhat stripped), "Accept-Encoding", and "Accept-Language", with a "br" thrown in for good measure.
As I wrote, when I experimented with panopticlick, it showed "one in 5.25 requests" for "en-US,en". This value is one of the more common ones, if not the most common one. One in 295.2 requests had just "en-US", one in 547.18 requests had just "en" and one in 1076.94 requests had "en,en-US" (which should have the same effects as "en", so it does not really make sense to use it).
Varying only the configuration of accepted languages, we can infer the frequency of the languages as seen by panopticlick. A more direct way would of course be to simply write to them and ask them for a table. I'm sure that the sample set of panopticlick is not representative for the entire internet, but at least it's a start.
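The "mapping system" the question mentions can be sketched in a few lines (the function names and the fallback default are my own choices, not from any library): parse the header's comma-separated entries with their q-values, then try each visitor preference, falling back from a full tag like en-US to its primary subtag en.

```python
def parse_accept_language(header):
    """Parse 'en-US,en;q=0.5' into a list of (lang, q) sorted by q descending."""
    langs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            lang, q = part.split(";q=", 1)
            langs.append((lang.strip().lower(), float(q)))
        else:
            langs.append((part.lower(), 1.0))  # missing q means q=1
    return sorted(langs, key=lambda t: -t[1])

def pick_language(header, supported, default="en"):
    """Map visitor preferences onto the site's available languages."""
    for lang, _q in parse_accept_language(header):
        primary = lang.split("-")[0]  # 'en-US' -> 'en'
        if lang in supported:
            return lang
        if primary in supported:
            return primary
    return default
```

For example, a visitor sending `fr-FR,fr;q=0.9,en;q=0.8` to a site supporting en and fr would be routed to fr. (This is a simplification; q-values can also have forms like `q=0.5;level=1` in theory, and wildcards `*` are not handled here.)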

Scrapy response different than browser response

I am trying to scrape this page with scrapy:
http://www.barnesandnoble.com/s?dref=4815&sort=SA&startat=7391
and the response I get is different from what I see in the browser. The browser response has the correct page, while the scrapy response is the
http://www.barnesandnoble.com/s?dref=4815&sort=SA&startat=1
page. I have tried with urllib2 but still have the same issue. Any help is much appreciated.
I don't really understand the issue, but usually a different response for a browser and scrapy is caused by one of these:
the server analyzes your User-Agent header, and returns a specially crafted page for mobile clients or bots;
the server analyzes the cookies, and does something special when it looks like you are visiting for the first time;
you are trying to make a POST request via scrapy like the browser does, but you forgot some form fields or put in wrong values;
etc.
There is no universal way to determine what's wrong, because it depends on the server logic, which you don't know. If you are lucky, you will analyze and fix all the mentioned issues and will make it work.
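The first two causes above (User-Agent sniffing and cookie handling) can be ruled out with a quick experiment; here is a sketch using the standard library, since the question also tried urllib2 (the User-Agent string is just an example browser string, and the commented-out fetch is left for the reader):

```python
import http.cookiejar
import urllib.request

url = "http://www.barnesandnoble.com/s?dref=4815&sort=SA&startat=7391"

# 1. Pretend to be a regular browser; servers often special-case bots.
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/91.0 Safari/537.36"),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
}

# 2. Keep cookies across requests, like a browser does.
cookies = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookies))

req = urllib.request.Request(url, headers=headers)
# html = opener.open(req).read()  # uncomment to fetch for real
```

In Scrapy itself, the corresponding knobs are the USER_AGENT setting and cookie handling (enabled by default via COOKIES_ENABLED); if the browser-like request still returns the wrong page, look at the POST/form-field causes instead.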

Saving Images without knowing Mime Type

I am using http://img.tweetimag.es/ to pull twitter avatars.
EX:
http://img.tweetimag.es/i/joestump_o
How do I know the correct mime type (jpg, png, gif, etc.) so I can save it locally?
Any help would be great, thanks.
JP
Look at the Content-Type header of the HTTP response. For your example:
Content-Type: image/png
Here are some values you can retrieve; as mentioned in kgiannakakis' answer, take a look at Content-Type:
http://img694.imageshack.us/img694/2144/unledofv.png
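Putting both answers together, here is a sketch of how the download might work in Python (the `save_avatar` helper and the extension table are my own illustration, not part of any library): read the Content-Type from the response, map it to a file extension, then write the bytes.

```python
import urllib.request

# Map the common image content types to file extensions ourselves,
# so we don't depend on platform-specific mimetypes tables.
EXT = {"image/png": ".png", "image/jpeg": ".jpg", "image/gif": ".gif"}

def save_avatar(url, basename):
    """Fetch url, pick the extension from the response's Content-Type
    header, and write the bytes locally. Returns the filename used."""
    with urllib.request.urlopen(url) as resp:
        ctype = resp.headers.get_content_type()       # e.g. 'image/png'
        filename = basename + EXT.get(ctype, ".bin")  # safe fallback
        with open(filename, "wb") as f:
            f.write(resp.read())
    return filename

# save_avatar("http://img.tweetimag.es/i/joestump_o", "joestump")
```

The `.bin` fallback keeps the code from crashing on an unexpected type; you could also reject non-image responses outright by checking that the content type starts with "image/".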