I have a web service that syncs order data from an offline iPad app to a live server. Sometimes the web service works fine, but sometimes it does not.
When I try to call that web service directly by URL using Postman to debug the issue, I get the error below.
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head>
<title>414 Request-URI Too Long</title>
</head>
<body>
<h1>Request-URI Too Long</h1>
<p>The requested URL's length exceeds the capacity
limit for this server.
<br />
</p>
<hr>
<address>Apache/2.4.7 (Ubuntu) Server at ip-172-31-31-143.ap-southeast-1.compute.internal Port 80</address>
</body>
</html>
Please suggest a solution. Thanks in advance.
The Web server (running the Web site) thinks that the HTTP data stream sent by the client (e.g. your Web browser or our CheckUpDown robot) contains a URL that is simply too large, i.e. contains too many bytes.
Typically Web servers set fairly generous limits on length for genuine URLs e.g. up to 2048 or 4096 characters. If your URL is particularly long, you can usually try shorter variations to see roughly where the limit is. If your long URL is indeed valid, then the Web server may need to be reconfigured to allow your URLs through. Understand that Web servers have to set some reasonable limit here, because they have to deal with badly programmed clients trying to give them huge garbage URLs.
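If your long URL is legitimate and you control the server, raising the limit is usually a small configuration change. For the Apache 2.4 server shown in the error above, a minimal sketch (the value is illustrative, not a recommendation):
# In httpd.conf / apache2.conf: raise the limit on the request line
# (Apache's default is 8190 bytes). Illustrative value only.
LimitRequestLine 16384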
Fixing 414 errors - general
This error seldom occurs in most Web traffic, particularly when the client system is a Web browser. The URLs in that case are typically standard hyperlinks found on Web pages. These links tend to be too long only if they are simply wrong, i.e. the Web page containing the link has been badly coded.
If your client system is not a Web browser, the problem can only be resolved by examining what the client is trying to do, then discussing with your ISP why the Web server rejects the size of the URL sent by the client system.
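If the long URL comes from your own client, as with the iPad sync call in the question, the cleaner fix is usually to send the order data in the POST body instead of the query string. A minimal sketch; the endpoint and payload shape here are made up for illustration:
// Hypothetical endpoint and payload, for illustration only.
var orders = { orders: [ /* order records from the offline app */ ] };
var xhr = new XMLHttpRequest();
// The URL stays short; the data travels in the request body instead.
xhr.open('POST', 'https://example.com/api/sync-orders', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(orders));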
Related
I'm currently having trouble with the W3C markup validation service https://validator.w3.org and the use of HTTPS. When I type in there the website address with https I get the following response:
Sorry! This document cannot be checked.
Together with an error 500 saying that it can't connect to the site. Also, on the website I have a link that takes the visitor to the validator and shows that the site has been validated. When clicking the link without HTTPS everything works, but with HTTPS I get the message
Sorry! This document cannot be checked. No Referer header found!
which I believe is because the secure connection doesn't send the Referer header, right?
Now, how can I use HTTPS and avoid these problems with the validation?
Please always directly use https://validator.w3.org/nu/ (the current W3C HTML Checker) instead of https://validator.w3.org/ (the legacy W3C Markup Validator).
The HTML Checker is able to check documents at https URLs just fine. So if you find an https site that it doesn't work with as expected, then that's likely a bug I need to fix. (I maintain the checker, and recently updated it to get HTTPS support using HTTP Components HttpClient 4.4, the latest Apache HTTP client library, including full support for HTTPS sites that use SNI.)
A note about which W3C tool to use for checking HTML documents
On the W3C backend, when you use the https://validator.w3.org/ legacy Markup Validator to check documents with <!DOCTYPE html> doctypes, it just hands off the request to the same backend that directly drives the https://validator.w3.org/nu/ HTML Checker. But the HTML Checker has a UI with more features, and using it from https://validator.w3.org/nu/ is faster.
We (the W3C) plan to swap those two around eventually (that is, move the current HTML Checker to https://validator.w3.org/ and move the legacy Markup Validator to https://validator.w3.org/legacy/ or some such), but it will be a while yet before that happens. So in the meantime, as I said, I suggest just doing all your HTML checking from the https://validator.w3.org/nu/ site.
There seems to be a bug in the W3C Nu validator, such that the Referer value is not fully processed. :-/
I.e. the code for their badge <a target="_blank" href="http://validator.w3.org/check/referer"><img src="http://www.w3.org/Icons/valid-xhtml10" alt="Valid XHTML 1.0 Transitional" title="Valid XHTML 1.0 Transitional" style="height: 31px; width: 88px;" /></a>
does not validate my nested sub-page when the badge in the sub-page's footer is clicked; it just validates the root page of the whole website instead. Sad. :-/
And the alternative parameterized .../check?uri=referer URL has the same issue. :-/
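As a workaround (my suggestion, not part of the original answers), the badge can link to the HTML Checker with an explicit doc parameter instead of relying on the Referer header:
<!-- Hypothetical page address; substitute the URL of the page the badge sits on. -->
<a target="_blank"
href="https://validator.w3.org/nu/?doc=https://example.org/deep/sub-page.html">Check this page</a>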
This problem is driving me nuts. Our web app uses HTTP POST to log in users, and now IE 10 is aborting the connection and saying:
SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d.
Here are all the details I have:
IE version 10.0.9.16618, update version 10.0.6. I've also reproduced this on IE version 10.0.9200.16635, update version 10.0.7.
The domain is using HTTPS. The problem doesn't occur on HTTP connections.
I've read that for some reason IE needs to get a certificate before it can do an HTTP POST, so I have HTTP GETs running before my POST request, but now the GET is erroring out. See the network flow screenshot. The GET is super simple, just a PING page that returns "I'm up."
Async is turned off: $.ajax({type: 'POST', url: url, async: false, ...}); I've read in other posts that this matters.
The certificate is good; see the screenshot.
The problem goes away if the site is added as a "trusted site", but that's not really the user experience we're shooting for.
This just started about a month ago. Did Microsoft push some new updates recently?
I've already read: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/dd5d2762-7643-420e-880a-9bf75554e383/intermittent-xmlhttprequest-network-error-0x2f7d-could-not-complete-the-operation-due-to-error. It doesn't help.
Screenshots (not reproduced here): the network flow, and the certificate details showing the cert is good.
Any help is greatly appreciated. I've spent a lot of hours on this with no luck. As you would expect this works fine in Chrome and Firefox. If you need any more detail about what's happening please let me know.
Thanks,
Certificate revocation checks may block the initial JSON POST, but allow subsequent requests after the GET callback
We recently determined that URLMon's code (Win8, Win7, and probably earlier) to ignore certificate revocation check failures is not applied for content uploads (e.g. HTTP POST). Hence, if a Certificate Revocation check fails, that is fatal to the upload (e.g. IE will show a Page Cannot Be Displayed error message; other clients would show a different error). However, this rarely matters in the real world because in most cases, the user first performs a download (HTTP GET) from the target HTTPS site, and as a result the server's certificate is cached with the "ignore revocation check failures" exemption for the lifetime of the process and thus a subsequent POST inherits that flag and succeeds. The upload fails if the very first request to the HTTPS site in the current process was for an upload (e.g. as in a cross-origin POST request).
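That also explains why the GET-before-POST workaround from the question can work, provided the GET completes first. A minimal jQuery sketch, with placeholder URLs and credentials:
// Prime the process-wide certificate cache with a GET so the
// "ignore revocation-check failure" exemption is in place,
// then perform the login POST.
$.get('https://example.com/ping').always(function () {
    $.ajax({
        type: 'POST',
        url: 'https://example.com/login',
        data: { user: 'alice', pass: 'secret' }, // placeholder credentials
        success: function (resp) { console.log('logged in', resp); }
    });
});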
Here is how it works:
A little background: When a web browser initiates an HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.
The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
The GET may be a victim of the operation-aborted scenario:
The HTML file is being parsed, and encounters a script block. The script block contains inline script which creates a new element and attempts to add it to the BODY element before the closing BODY tag has been encountered by the parser.
<body>
<div>
<script>
// Inline script runs while the parser is still inside BODY, so this
// appendChild can trigger the "operation aborted" error in older IE.
var newElem = document.createElement('p');
document.body.appendChild(newElem);
</script>
</div>
</body>
Note that if I removed the <div> element, then this problem would not occur because the script block's immediate parent would be BODY, and the script block's immediate parent is immune to this problem.
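One common way to avoid the operation-aborted scenario (my illustration, not from the original answer) is to defer the DOM mutation until parsing has finished:
<body>
<div>
<script>
// Wait until the document is fully parsed before touching BODY.
window.onload = function () {
    var newElem = document.createElement('p');
    document.body.appendChild(newElem);
};
</script>
</div>
</body>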
References
Understanding Certificate Revocation Checks
Client Certificates vs Server Certificates
Understanding and Managing the Certificate Stores
Preventing Operation Aborted Scenarios
HTTPS Improvements in IE
Online Certificate Status Protocol - OCSP
[SOLVED]
I only observed this error today. For me, the error code was different though.
SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete
the operation due to error 00002efd.
It was occurring randomly and not all the time, but what I noticed is that when it comes, it comes for subsequent AJAX calls. So I put a delay of 5 seconds between the AJAX calls, and that resolved it.
CORS must also be configured on your web server.
I had the same exact issue and I just finally resolved it. For some reason I got the same error that you were receiving in IE when connecting to the API using OWIN middleware that was used to receive login credentials. It seemed to work fine when connecting to any other sort of API, though. For some reason it didn't like cross-domain requests, even though I had CORS enabled server-side on the API.
Anyway, I was able to resolve the issue using the xdomain library. Make sure you load this script before loading any other JavaScript.
First, create a proxy.html page at the root of your API server and add this code, replacing the placeholder URL.
<!DOCTYPE HTML>
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" master="http://insert_client_url_here.com"></script>
Now simply add this to your client, replacing the placeholder URL with one pointing to the proxy.html page on your API server.
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" slave="http://Insert_Api_Url_Here.com/proxy.html"></script>
Adding a delay is not a proper solution.
This can happen because IE treats the request as a network error when it is made with an empty body.
Try adding an empty class as the parameter on the server, and IE should start working.
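In other words, make sure the POST always carries a body. A minimal jQuery sketch with a placeholder URL, sending an explicit empty JSON object so IE never sees an empty-body request:
$.ajax({
    type: 'POST',
    url: 'https://example.com/api/action', // placeholder URL
    contentType: 'application/json',
    data: JSON.stringify({}), // explicit empty object instead of no body at all
    success: function (result) { console.log(result); }
});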
I have a link-sharing website hosted on Linux, which allows users to share links. I purchased an SSL 123 digital certificate to host it in secure HTTPS mode. But the problem is that it won't appear in the address bar as a normal HTTPS connection; it displays HTTPS with an error:
Your connection is encrypted with 128bit encryption...However, this page includes other resources which are not secure.
I enquired with my hosting company, but they say that because my website allows people to share links, Thawte can't verify the other links shared on my website, and that is why it displays an error. I am confused about whether to continue with SSL or just host it normally, as I don't want my users to feel insecure about their info.
No, the links are not checked, but everything you embed in the page is: for example, images and scripts that are linked using HTTP. The browser needs to download the linked scripts, images, etc., and complains that these were retrieved over plain HTTP.
This is no problem:
<a href="http://example.org/">Some link</a>
but these are (EDITED):
<img src="http://example.org/picture.png"/>
<script type="text/javascript"
src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
</script>
Download the scripts using HTTPS
Try to use the secure versions of the sharethis.com scripts.
Use this script source: https://ws.sharethis.com/button/buttons.js
http://support.sharethis.com/customer/portal/articles/475097-ssl-support
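For illustration, the same embeds from the earlier example stop triggering the mixed-content warning once they are requested over HTTPS (assuming the hosts serve them over HTTPS):
<!-- Same resources as above, now requested over HTTPS. -->
<img src="https://example.org/picture.png"/>
<script type="text/javascript"
src="https://pagead2.googlesyndication.com/pagead/show_ads.js">
</script>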
I'm having an issue with a friend's iWeb website, http://www.africanhopecrafts.org. Rather than the pages being viewed, they download instead, even though they're all HTML files. I've tried messing with my .htaccess file to see if that was affecting it, but nothing's working.
Thanks so much
Most likely your friend's web site is dishing up the wrong MIME type. The web server might be misconfigured, but the page can override the Content-Type response header by adding a <meta> tag to the page's <head> like this:
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1" />
(where the charset in use reflects that of the actual web page.)
If the page is being served up with the correct content-type, the browser might be misconfigured to not handle that content type. Does the problem occur for everybody, or just you? Is the problem dependent on the browser in use?
You can sniff the content-type by installing Firefox's Tamper Data plugin. Fire up Firefox, start Tamper Data, and fetch the errant web page via Firefox. Examining the response headers for the request should tell you what content-type the page is being served up with.
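If you'd rather not install an extension, an alternative sketch of mine (not from the original answer) is to read the Content-Type header from the browser console with XMLHttpRequest:
// Request only the headers and log the Content-Type the server sends back.
var xhr = new XMLHttpRequest();
xhr.open('HEAD', 'http://www.africanhopecrafts.org/', true);
xhr.onload = function () {
    console.log(xhr.getResponseHeader('Content-Type'));
};
xhr.send();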
I have a web app. IIS 6. .NET 3.5.
I have 2 websites on the web server. One of which is already correctly serving FLVs. The newer one is not.
I have added the MIME type information to the HTTP Headers in the website properties ['.flv', 'video/x-flv'] (as FLV is not an extension IIS recognises by default).
When I go to the URL, Firefox goes black and displays "Waiting for video". It stays like this. I have checked the logs that IIS writes to, and I have found the GET request and the HTTP status associated with it, which is 302. This is a "Moved Temporarily" status code. I don't understand why it would be throwing this. All other content on this site (currently consisting of web pages and images) is returned fine.
I have tried the same video on the older site, and just by pointing Firefox at the URL, it plays correctly.
Any help as to why I can't do this would be much appreciated, thank you.