UTF-8 characters come out as windows-1252, ExpressJS running on Electron

A little bit of introduction - the (wannabe) software I'm working on is a desktop application built on GitHub Electron. It is designed to have two modules - a UI and a backend with an API, the latter "living" in a hidden browser window and running on ExpressJS. The idea is for the UI and the API to communicate over HTTP so they can be decoupled if needed. All files are in UTF-8.
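For context, here is a stripped-down sketch of what bootstraps inside that hidden window (the port and file layout are made up for illustration; only the router part matters for the problem below):
// api.js - loaded by the hidden backend window
const express = require('express');
const app = express();
const router = express.Router();
app.use('/api', router);
app.listen(3000); // the UI window talks to the API over plain HTTP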
Now the problem itself - the main API route looks like:
router.get('/', (request, result) => {
  let message = 'Здрасти, коко!';
  console.log('Answering with ' + message);
  result.json({ message });
});
When being called (from a browser, or Postman, or whatever), the answer looks like this:
{"message":"ЗдраÑти, коко!"}
...with those headers:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 64
ETag: W/"40-3JawFDiTNEinvN6xFO6T9g"
Date: Tue, 20 Dec 2016 06:47:53 GMT
Connection: keep-alive
Using the 2cyr tool I found out that the text is UTF-8 that has been displayed as windows-1252, which confuses me a lot.
To narrow down the possibilities, I added the console.log() to the route handler and (a bit surprisingly to me) I'm getting the same "broken" result in the Chromium debugger. I suspected the file's encoding, but this is what I get about it:
Petars-Mac:api petar$ file -I api.js
api.js: text/x-c++; charset=utf-8
The last thing that came to my mind was decoupling the API from Electron. When I run it in the terminal with node, I actually get the proper result - both in the log message in the terminal and in the JSON answer in the browser.
What am I doing wrong and what further debugging could I possibly do?

So here we go, right before posting an issue in the Electron repository - it's the stupidest error I could ever imagine making in this situation.
TL;DR:
<meta charset="utf-8">
What I thought was that opening a second browser window for the backend and putting some JavaScript in it would be enough. What I forgot was that it remains a browser window, and therefore it needs just a tiny bit of HTML to let it know that it serves UTF-8 content.
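For anyone hitting the same wall, here is a minimal sketch of what the fix looks like from the Electron main process (backend.html and api.js are placeholder names from my setup, not anything Electron requires):
// main.js - minimal sketch of the hidden backend window
const { app, BrowserWindow } = require('electron');
app.on('ready', () => {
  // The hidden window that hosts the Express backend.
  const backendWindow = new BrowserWindow({ show: false });
  // backend.html is little more than:
  //   <meta charset="utf-8"><script src="api.js"></script>
  // Without the meta tag, Chromium falls back to windows-1252 and every
  // UTF-8 string literal in api.js gets decoded as mojibake.
  backendWindow.loadURL('file://' + __dirname + '/backend.html');
});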
Maybe it's not me, maybe I was right to expect Express to serve UTF-8 over HTTP, but nope. Anyway, it all works now.

Related

Spring Boot, Apache CXF 3.2.5 with MTOM sends empty attachment

I'm having a weird issue with Apache CXF where large (375 MB) MTOM attachments end up empty.
Running it locally in Eclipse produces the desired results, but deploying it to our server just gives an empty attachment.
The server is written in .NET and doesn't support chunking. With chunking enabled the client works, but when I disable chunking it fails.
Sadly I'm unable to debug on the server, so I'm restricted to trace logging.
I've tried every trick I've been able to google:
Disable schema validation (CXF-4551) (CXF-7758)
Manually copying the file to java.io.tmpdir before sending, to ensure it can be read.
Custom DataSource
Disable WS-Security
Disable logging interceptor
Nothing seems to make a difference.
Every run I just get something like the following:
</soap:Body></soap:Envelope>
--uuid:40ef745b-ac3c-4013-bbe7-a9cc28880423
Content-Type: application/xml
Content-Transfer-Encoding: binary
Content-ID: <7611ca0a-22f8-4637-b4f7-a5dfe7f20b81-3#www.somewhere.dk>
Content-Disposition: attachment;name="32_2018-03-28_output.xml"
--uuid:40ef745b-ac3c-4013-bbe7-a9cc28880423
Trying with a smaller (2 KB) file on the server works just fine. A 75 MB file gets attached correctly, but results in an HTTP 400 from the receiver (which I suspect is because the file is not fully transferred).
Does anyone have any ideas as to what might be causing this?
After much trial and error, I finally managed to "solve" this: I enabled schema validation, and the data now appears. This is the exact issue that both bugs in my original question claim to fix.
Client client = ClientProxy.getClient(port);
BindingProvider bp = (BindingProvider) port;
// Counterintuitively, turning schema validation ON makes the data appear.
bp.getRequestContext().put("schema-validation-enabled", "true");
I can't add a comment, so I'm posting this as an answer.
Jimmy, could you perhaps comment on the latest CXF issue and provide some more details? Which version of CXF, and what kind of client are you using? Real code samples and client logs would be ideal.

Express.js: how to get assets gzipped

I use the compress() middleware, registered first in configure().
app.configure('all', function () {
  app.use(express.compress());
  ...
  app.use(express.static('public', { maxAge: oneMonth }));
});
How do I check that my content is gzipped? I've got a fricking strange situation:
1) On my dev machine, when I request localhost:4000/mystyle.css, I DON'T see Content-Encoding: gzip.
2) When I deploy to production and request the file itself, mydomain.com/mystyle.css, I DO see Content-Encoding: gzip.
3) When I request mydomain.com and look at mystyle.css under Network in Chrome dev tools, I DON'T see Content-Encoding: gzip.
4) I use different services to check whether my content is gzipped; some say it IS, some that it IS NOT.
WTF? Can someone explain?
Your issue is your use of app.configure. This is largely deprecated, and you're specifically using it to target an 'all' environment.
The documentation explains: "This method remains for legacy reason, and is effectively an if statement as illustrated in the following snippets."
Instead, just register the middleware with app.use without wrapping it in a configure statement, as in the sketch below.
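For reference, a minimal sketch of the same setup without app.configure(). This assumes the standalone compression package (Express 4 removed the bundled express.compress(); on Express 3 the bundled version works the same way, it just has to come before express.static):
const express = require('express');
const compression = require('compression'); // npm install compression
const app = express();
const oneMonth = 30 * 24 * 60 * 60 * 1000; // maxAge is in milliseconds
// Order matters: register compression before the static middleware,
// or the assets are served uncompressed.
app.use(compression());
app.use(express.static('public', { maxAge: oneMonth }));
app.listen(4000);
One more thing worth knowing when checking: the middleware only gzips a response when the request advertises Accept-Encoding: gzip and the body exceeds the size threshold (1 KB by default), which is one way different checkers can disagree about the very same URL.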

Error consuming WCF RESTful POST call, works on W2003 Server / IIS6 but cannot manage to make it work on W2008 IIS7

I have a WCF service that exposes some methods with WebInvoke and the POST method; this has been working on a Windows 2003 server machine for some time now. The thing is that I have to migrate the service to a new Windows 2008 server machine, and this is where I'm getting the issue: I get a 400 Bad Request error when I try to call it from the client. The code deployed on both machines is the same. I have been trying to perform the calls with both a VS2008 unit-test solution and Fiddler2; the very same request returns a 200 OK response from the W2003 server but the 400 on the 2008 one.
I had another WCF service based on the same philosophy, also working on the 2003 server machine, so I tried to deploy it on the 2008 server. I found out that GET methods worked correctly on both machines, but if I try to consume a very simple POST method, I get an error 500 Internal Server Error pointing to an "Object reference not set to an instance of an object" error. The thing is that this method also works if I make the call to the other server; the call is exactly the same, only the IP changes. So I think there must be some problem with IIS 7 that I can't figure out, since the code is working perfectly on W2003/IIS6. I checked that .NET 3.5 is installed, along with all the other stuff I found on the forums.
I guess I should post some code, but since it's working on the W2003 machine I'm not quite sure it would make any difference. I tried all sorts of things with bindings but had no luck; I don't think it should be necessary anyway, since I'm not trying to generate any proxy with svcutil or VS2008 - I just format a URL request and send it to the server. As stated before, it's working flawlessly on the first machine...
Here is an example of a request hand-made with Fiddler2 (in the unit tests I use the WebRequest class to generate the request object, with the same result):
POST http://XX.XXX.XXX.XXX/ServiceiPhone/service.svc/DoWork HTTP/1.1
Host: XX.XXX.XXX.XXX
User-Agent: FoodLinker/1.0
Accept: text/xml
Content-Type: application/json
Accept-Language: es-es
Accept-Encoding: gzip, deflate
Content-Length: 17
Connection: keep-alive
{"Data":"MyData"}
The code on the server looks like this:
[WebHelp(Comment = "Sample description for DoWork")]
[WebInvoke(UriTemplate = "DoWork")]
[OperationContract]
public SampleResponseBody DoWork(SampleRequestBody request)
{
    // TODO: Change the sample implementation here
    return new SampleResponseBody()
    {
        Value = String.Format("Sample DoWork response: '{0}'", request.Data)
    };
}
There is no special binding configuration in the web.config file; as stated before, it works on the previous machine but not on the other one.
This is the test project; the code I really want to deploy is a little bit more complex, but it's a similar scenario, and I think that if I manage to solve this I will probably solve the other one as well.
Any ideas would be very welcome, thanks in advance.
OK, so I finally figured it out: it was an issue with .NET Framework 3.5. Installing .NET Framework 3.5 SP1 did the trick, so if you ever encounter this problem it's worth updating to see whether that was the cause, as it was in this case.

Custom JSON IErrorHandler in WCF returning StatusCode 200/504 when should return 400

I have a WCF service that among other bindings also uses WebHttpBinding for JSON inputs/results.
I made a custom IErrorHandler implementation in order to be able to set the StatusCode to 400 when something goes wrong and also return a JSON-readable message. It's the straightforward implementation that you can find everywhere (a nice way is described here).
My problem is: when I test it locally using the Visual Studio Web Development Server (Cassini), it works perfectly. However, when I deploy it to my test server (Windows 2008 with a standard config for IIS and everything else), it does not work.
When I call it and debug with Firebug, I get an HTTP status code 200 as a return and no response text. With Fiddler I get an HTTP status code 504 and no return at all. However, the behavior I expected (and what happens locally) is a call to the error callback of the AJAX call with the responseText set.
I debugged it remotely and everything looks just fine. The execution pipeline is OK and all the classes are called as they should be, just like they are locally, except it does not work.
Any suggestions? I'm pretty much out of options here to figure this out.
Thanks a lot!
If Firebug and Fiddler are giving different results, what happens if you telnet to it directly and perform a request? (Something like:)
GET /VirtualDirectoryAndGetData HTTP/1.1
HOST: example.com
[carriage return]
It wouldn't surprise me if you're somehow getting odd headers/formatting back (which would explain why Firebug and Fiddler disagree).
Another thing to test would be publishing to your dev machine, to see whether it's a machine-specific issue or a server vs. dev-webserver issue.
If it's happening anywhere outside VS, you might also try commenting out the lines where you set:
rmp.StatusCode = System.Net.HttpStatusCode.BadRequest;
rmp.StatusDescription = "Bad request";
This may indicate whether it's a response code issue or an error handler issue.
If you can edit your question to include the results (with sensitive info removed), we'll see if we can track it down further.
Edit: after looking at the question again, it may well be that the server errors out before it can send ANY response. Firebug might assume 200 by default, whereas Fiddler might assume 504 (Gateway Timeout). This is total speculation, but it's possible. Do you see anything in the event logs?
I had a similar issue which I was able to solve. Take a look at the IIS settings. Details on how I overcame the issue are in this post: IErrorHandler returning wrong message body when HTTP status code is 401 Unauthorized

Background Intelligent Transfer Service and Amazon S3

I'm using SharpBITS to download files from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string to the Amazon S3 file request, the response from the server is HTTP status 403. The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser makes is the request type: Firefox makes a GET request, while BITS makes a HEAD request. Are there any issues with Amazon S3 HEAD requests and query-string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 minutes and then ultimately recover.
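To illustrate the proxy idea, here is a rough sketch (not production code: the S3 host is hard-coded, the port is arbitrary, and error handling is omitted) of a tiny local Node proxy that answers the HEAD probe itself and forwards everything else to S3; you would then point BITS at localhost instead of s3.amazonaws.com:
const http = require('http');
const https = require('https');
http.createServer((req, res) => {
  if (req.method === 'HEAD') {
    // Fake a response so BITS gets past its HEAD probe; it will
    // re-evaluate the real size once the GET comes back.
    res.writeHead(200, { 'Content-Length': '1' });
    return res.end();
  }
  // Forward the GET, signed query string included, to S3 untouched.
  https.get('https://s3.amazonaws.com' + req.url, (upstream) => {
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res);
  });
}).listen(8080);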
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
(1) The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
(2) BranchCache is disabled AND the BITS job contains a single file.
Workaround (1) is the most appropriate, since it doesn't affect other BITS transfers in the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to run "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take ~90 minutes for them to take effect.