Because of language adaptation (localization) I need to place some "special" characters in a custom header (characters like é, á, í, ç, and others)...
On the server side I'm using ASP.NET MVC.
It all works fine in Chrome.
But in Safari... I can't figure out which encoding Safari uses...
I tried:
UTF-8,
UTF-16,
ASCII,
URL encoding,
a few ISO charsets,
but alert(headerValue) always shows garbled characters...
Can anyone tell me which encoding to use?
There was a specification in the past for encoding non-ASCII text in HTTP headers: RFC 2047. But it seems it is no longer implemented, and support has even been removed from browsers.
Here are some related links:
What character encoding should I use for a HTTP header?
HTTP headers encoding/decoding in Java
https://bugzilla.mozilla.org/show_bug.cgi?id=601933
In your case, perhaps you could use a URL-encoded string as the value of this custom header.
Hope it helps you,
Thierry
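For example, here is a minimal client-side sketch of that idea, assuming the server percent-encodes the header value before setting it; the X-App-Message header name and the endpoint are made up for illustration:

```typescript
// Client-side sketch: read a percent-encoded custom header and decode it.
// Assumes the server (ASP.NET MVC here) wrote the value with something like
// Uri.EscapeDataString before setting the header; "X-App-Message" and the
// URL are hypothetical names used only for illustration.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/resource");
xhr.onload = () => {
  const raw = xhr.getResponseHeader("X-App-Message");        // e.g. "ol%C3%A1%20jos%C3%A9"
  const text = raw !== null ? decodeURIComponent(raw) : "";  // "olá josé"
  alert(text); // the accented characters display the same way in every browser
};
xhr.send();
```

Because the header itself only ever carries plain ASCII, the question of which charset Safari applies to header bytes goes away.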
From my Express backend I am trying to receive data from an external API that contains text in Greek; however, when I print the response I only see � instead of the Greek letters. I have tried sending different charsets in the API call, but to no avail.
Is there something I am missing? I don't have any experience with handling different languages.
I looked at the response headers and found out that the API was using windows-1253 encoding, so I used the windows-1253 library to decode the response.
My bad, I didn't think of looking at the response headers earlier.
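For reference, a sketch of that kind of fix in a Node/Express backend; the URL is a placeholder, and I'm using the built-in TextDecoder rather than the windows-1253 package (it needs a Node build with full ICU, which current releases have by default; iconv-lite would work as well):

```typescript
// Fetch the external API as raw bytes and decode them as windows-1253
// instead of letting the default UTF-8 decoding produce "�".
async function fetchGreekText(): Promise<string> {
  const res = await fetch("https://example.com/greek-api"); // placeholder URL
  const bytes = await res.arrayBuffer();                    // don't call res.text() here
  return new TextDecoder("windows-1253").decode(bytes);     // Greek letters come out intact
}

fetchGreekText().then((text) => console.log(text));
```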
OWASP says "Escaping untrusted HTTP request data based on the context in the HTML output (body, attribute, JavaScript, CSS, or URL) will resolve Reflected and Stored XSS vulnerabilities" and "Applying context-sensitive encoding when modifying the browser document on the client side acts against DOM XSS", but how do I differentiate between escaping and encoding? Another website says that escaping is a subset of encoding. I'm just confused between the two.
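For what it's worth, one rough way to see the distinction the OWASP wording is drawing (the helper names below are hypothetical): encoding rewrites characters so they are inert in the output context, while escaping marks a character so it loses its special meaning in a particular syntax.

```typescript
// Output encoding for an HTML body/attribute context: the characters are
// replaced by entities so the browser never treats them as markup.
function encodeForHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Escaping for a JavaScript string context: the quote keeps its character
// value but is prefixed so it no longer terminates the string literal.
function escapeForJsString(untrusted: string): string {
  return untrusted.replace(/\\/g, "\\\\").replace(/'/g, "\\'").replace(/"/g, '\\"');
}

console.log(encodeForHtml(`<img onerror="x()">`));  // &lt;img onerror=&quot;x()&quot;&gt;
console.log(escapeForJsString(`"); attack(); ("`)); // \"); attack(); (\"
```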
While using Google Cloud HTTPS Load Balancer we hit the following bug. Couldn't find any information on it.
We have a custom HTTP header in our request:
X-<Company name>-abcde. If we work directly against the server, all is good, but once we go through the load balancer, our custom header is missing. We didn't find any reference in the documentation to a need to whitelist our headers or anything like that.
Why is my custom header not being transferred to my backend server when working through the Google Cloud Load Balancer? And how can I make it work?
Thanks
Data
After a lot of testing, these are the results I've come up with:
The Google Cloud HTTPS Load Balancer does transfer custom HTTP headers to the backend service.
However, it changes them to lower-case.
So, in your case, X-Custom-Header is transformed to x-custom-header.
Solution
Simply change your code to read the lower-case version of your custom HTTP header. This is a simple fix, but one that may not be supported in the long term by Google (there's not a word about this in Google's documentation, so it's subject to change without notice).
Petition Google to change this idiosyncratic behaviour or at the very least mention it clearly in their documentation.
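As a hedged illustration, here is what reading the lower-cased header could look like on a Node/Express backend; the x-custom-header name follows the example above, and Node itself already exposes incoming header names in lower case:

```typescript
import express from "express";

const app = express();

app.get("/ping", (req, res) => {
  // Look the header up by its lower-case name, never by "X-Custom-Header".
  const custom = req.headers["x-custom-header"]; // string | string[] | undefined
  res.json({ received: custom ?? null });
});

app.listen(8080);
```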
A little extra
As far as I know, the old convention of prefixing custom HTTP headers with X- (and the accompanying pseudo-standard of capitalizing each word) has been deprecated by RFC 6648, which recommends against the X- prefix in general and says nothing about the casing of the rest of the custom HTTP header key name. If I were Google, I would change this behaviour to pass custom HTTP headers as-is and let developers deal with the strings as they've set them.
The RFC (RFC 7230) for HTTP/1.1 Message Syntax and Routing says that header fields have a case-insensitive field name. If you're relying on case to match the header, that doesn't align with the RFC.
Way back in the day I looked through either the Tomcat or Jetty source, and they worked with everything as a .toLower().
Go has a CanonicalMIMEHeaderKey where it'll format the headers in a common way to be sure everything is on the same page.
Python still harkens back to the RFC822 (hg.python.org/cpython/file/2.7/Lib/rfc822.py#l211) days, but it forces a .lower() on headers to standardize.
Basically, though, what the GCP HTTP(S) Load Balancer is doing is acceptable as far as the RFC is concerned.
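The same normalization idea in a TypeScript sketch (the names are illustrative): fold every field name to lower case once, then do all lookups against the folded form.

```typescript
// Fold header names to lower case so lookups are case-insensitive,
// in the spirit of RFC 7230's case-insensitive field names.
function normalizeHeaders(raw: Record<string, string>): Map<string, string> {
  const folded = new Map<string, string>();
  for (const [name, value] of Object.entries(raw)) {
    folded.set(name.toLowerCase(), value);
  }
  return folded;
}

const headers = normalizeHeaders({ "X-Custom-Header": "abc", "Content-Type": "text/plain" });
console.log(headers.get("x-custom-header")); // "abc", regardless of the original casing
```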
This is most likely an application bug.
As other answers have stated, HTTP header names are case insensitive. In my experience, every time headers appear to be case sensitive, it is because there is a request wrapper somewhere in the application call stack.
Request wrappers like this are common (usually necessary) in Java servlet filters. It's a common newbie mistake to use case-sensitive matching (e.g. a regular Java HashMap<String, T>()) for the header names in the wrapper.
That's where I would start looking for your bug.
A reasonable way to create a Java Map<String, T> that is both case insensitive and that doesn't modify the keys is to use new TreeMap<String, T>( String.CASE_INSENSITIVE_ORDER ).
I'm making an AJAX call to a REST API and specified the following header in an HTTP POST request:
Content-Type: application/json; charset=UTF-8
My POST body contains some Japanese/Chinese characters.
Now, my question is: do I need to encode the body of the POST request with UTF-8 encoding, or does the browser take care of the encoding?
When your Content-Type header declares UTF-8 charset, then you must send the content in the UTF-8 encoding.
Although browsers sometimes "guess" or "fix" the encoding, you should never rely on this, as this is a very fragile logic that often fails to work properly.
If your Chinese/Japanese content was in a different encoding (like Shift-JIS), then you will have to convert the text with a library like iconv.
Alternatively, you could declare that other encoding in the header, but note that you can use only a single encoding for the entire message body. Converting everything to UTF-8 is usually the best solution.
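As a minimal sketch of the usual path (the endpoint and payload below are placeholders): when the body is a JavaScript string produced by JSON.stringify, fetch and XMLHttpRequest serialize it as UTF-8 on the wire, which matches the declared charset.

```typescript
const payload = { message: "日本語と中文のテキスト" }; // sample Japanese/Chinese text

fetch("https://example.com/api/items", {
  method: "POST",
  headers: { "Content-Type": "application/json; charset=UTF-8" },
  body: JSON.stringify(payload), // the browser sends this string as UTF-8 bytes
});
```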
I have a really weird WCF problem here...
We're connecting to a crappy third-party web service; it was a nightmare to even get it going, we had to create a custom WCF binding since those guys decided to use "ISO-8859-1" as their text encoding (instead of UTF-8 like everyone else on the web), and the other settings were messy, too - and not documented anywhere, of course...
It's been working OK for a while now, but suddenly, some of our data is coming back mangled. We expect to get back names of places, and being in Switzerland, some of those have German umlauts in them. But for the past two or three months, we suddenly get back
HÃ¼nibach
instead of the proper
Hünibach
So the ü (u umlaut) is mangled.
No problem, I figured they had finally switched to UTF-8, and I changed my custom binding to use UTF-8 as its text encoder instead of ISO-8859-1 - but no luck - now I'm getting:
EXCEPTION: System.ServiceModel.Security.MessageSecurityException
The HTTP request was forbidden with client authentication scheme 'Basic'.
What the f????? The service is protected by a username/password which we pass in using the ClientCredentials of WCF. Seems that changing the text encoding somehow messes up the credentials !?!?! Weird.....
OK - back to ISO-8859-1, and I just tried to interpret the response payload as UTF-8 - again no luck :-( Tried with UTF-16, UTF-32, UTF-7 even, Unicode, BigEndianUnicode - all to no avail.
So how on earth do I get back my proper umlauts, and still be able to call that bloody service... works just fine in SoapUI, btw.....
Any ideas?? I'm desperately grasping at any straws you might throw me!!
Try inspecting the data you are getting back and see what numeric codes are being used to represent it. The u umlaut is one of those characters whose byte representation differs between encodings, so the raw codes will tell you what you are actually receiving.
See the second paragraph in http://en.wikipedia.org/wiki/%C3%9C#Typography
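As a quick way to do that inspection, here is a small Node sketch (the Buffer calls are just one convenient way to look at the raw bytes):

```typescript
// The UTF-8 bytes for "ü" are 0xC3 0xBC; reading those same bytes as
// ISO-8859-1 (latin1) yields "Ã¼", which is exactly the "HÃ¼nibach"
// symptom above and points at a decoding mismatch.
const utf8Bytes = Buffer.from("Hünibach", "utf8");
console.log([...utf8Bytes].map((b) => b.toString(16))); // 48 c3 bc 6e 69 62 61 63 68
console.log(utf8Bytes.toString("latin1"));              // "HÃ¼nibach"
```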
Actually, I finally figured out what the trouble was.
For some reason, changing the sample CustomTextEncoder (provided by Microsoft in the WCF & WF samples) to use UTF-8 instead of ISO-8859-1 doesn't work.
On the other hand, ripping out the custom text encoder from my custom binding and just using the standard TextMessageEncoder that WCF provides from the get go (which uses UTF-8 by default) did work.
Don't ask me why.... those are just the facts I found.....