I'm making an Ajax.request to a remote PHP server in a Sencha Touch 2 application (wrapped in PhoneGap).
The response from the server is the following:
XMLHttpRequest cannot load http://nqatalog.negroesquisso.pt/login.php. Origin http://localhost:8888 is not allowed by Access-Control-Allow-Origin.
How can I fix this problem?
I wrote an article on this issue a while back, Cross Domain AJAX.
The easiest way to handle this if you have control of the responding server is to add a response header for:
Access-Control-Allow-Origin: *
This will allow cross-domain Ajax. In PHP, you'll want to modify the response like so:
<?php header('Access-Control-Allow-Origin: *'); ?>
You can just put the Header set Access-Control-Allow-Origin * setting in the Apache configuration or htaccess file.
It should be noted that this effectively disables CORS protection, which very likely exposes your users to attack. If you don't know that you specifically need to use a wildcard, you should not use it, and instead you should whitelist your specific domain:
<?php header('Access-Control-Allow-Origin: http://example.com'); ?>
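If several origins need access, a common pattern is to echo back the request's Origin header only when it appears in an allow-list. A minimal sketch in Python, as a framework-neutral illustration (the origin list is hypothetical; in PHP you would check $_SERVER['HTTP_ORIGIN'] the same way):

```python
# Hypothetical allow-list; replace with your real trusted origins.
ALLOWED_ORIGINS = {"http://example.com", "https://app.example.com"}

def cors_header_for(origin):
    """Return the Access-Control-Allow-Origin value to send back,
    or None when the origin is not trusted (header omitted)."""
    if origin in ALLOWED_ORIGINS:
        return origin
    return None
```

Echoing only known origins keeps the wildcard risk described above out of your responses.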
If you don't have control of the server, you can simply add this argument to your Chrome launcher: --disable-web-security.
Note that I wouldn't use this for normal "web surfing". For reference, see this post: Disable same origin policy in Chrome.
Once you use PhoneGap to actually build the application and load it onto the device, this won't be an issue.
If you're using Apache just add:
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
</IfModule>
in your configuration. This will cause all responses from your webserver to be accessible from any other site on the internet. If you intend to only allow services on your host to be used by a specific server you can replace the * with the URL of the originating server:
Header set Access-Control-Allow-Origin "http://my.origin.host"
If you have an ASP.NET / ASP.NET MVC application, you can include this header via the Web.config file:
<system.webServer>
...
<httpProtocol>
<customHeaders>
<!-- Enable Cross Domain AJAX calls -->
<remove name="Access-Control-Allow-Origin" />
<add name="Access-Control-Allow-Origin" value="*" />
</customHeaders>
</httpProtocol>
</system.webServer>
This was the first question/answer that popped up for me when trying to solve the same problem using ASP.NET MVC as the source of my data. I realize this doesn't solve the PHP question, but it is related enough to be valuable.
I am using ASP.NET MVC. The blog post from Greg Brant worked for me. Ultimately, you create an attribute, [HttpHeaderAttribute("Access-Control-Allow-Origin", "*")], that you are able to add to controller actions.
For example:
public class HttpHeaderAttribute : ActionFilterAttribute
{
public string Name { get; set; }
public string Value { get; set; }
public HttpHeaderAttribute(string name, string value)
{
Name = name;
Value = value;
}
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
filterContext.HttpContext.Response.AppendHeader(Name, Value);
base.OnResultExecuted(filterContext);
}
}
And then using it with:
[HttpHeaderAttribute("Access-Control-Allow-Origin", "*")]
public ActionResult MyVeryAvailableAction(string id)
{
return Json( "Some public result" );
}
While Matt Mombrea's answer is correct for the server side, you might run into another problem: whitelist rejection.
You have to configure your phonegap.plist. (I am using an old version of PhoneGap.)
For cordova, there might be some changes in the naming and directory. But the steps should be mostly the same.
First select Supporting files > PhoneGap.plist
then, under "ExternalHosts",
add an entry with a value such as "http://nqatalog.negroesquisso.pt"
I am using * for debugging purposes only.
This might be handy for anyone who needs to make an exception for both the 'www' and 'non-www' versions of a referrer:
// Note: for CORS, browsers send an Origin header; HTTP_REFERER may be absent.
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$parts = parse_url($referrer);
$domain = isset($parts['host']) ? $parts['host'] : '';

if ($domain == 'google.com') {
    header('Access-Control-Allow-Origin: http://google.com');
} elseif ($domain == 'www.google.com') {
    header('Access-Control-Allow-Origin: http://www.google.com');
}
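The same www/non-www handling can be expressed without a branch per host. A sketch in Python, purely as an illustration (google.com stands in for your real domain):

```python
from urllib.parse import urlparse

# Hypothetical base domain, allowed in both bare and "www" forms.
ALLOWED_HOSTS = {"google.com", "www.google.com"}

def allow_origin(referrer):
    """Return the Access-Control-Allow-Origin value for a trusted
    referrer/origin URL, or None if its host is not allowed."""
    host = urlparse(referrer).hostname
    if host in ALLOWED_HOSTS:
        return "http://" + host
    return None
```

Echoing back the exact host that made the request avoids hardcoding both variants in separate header() calls.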
If you're writing a Chrome Extension and get this error, then be sure you have added the API's base URL to your manifest.json's permissions block, example:
"permissions": [
"https://itunes.apple.com/"
]
I will give you a simple solution for this one. In my case I don't have access to the server. In that case you can change the security policy in your Google Chrome browser to allow cross-origin requests. This is very simple:
Create a Chrome browser shortcut
Right-click the shortcut icon -> Properties -> Shortcut -> Target
Simply paste in "C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files --disable-web-security.
The location may differ. Now open Chrome by clicking on that shortcut.
I've run into this a few times when working with various APIs. Often a quick fix is to add "&callback=?" to the end of a string. Sometimes the ampersand has to be a character code, and sometimes a "?": "?callback=?" (see Forecast.io API Usage with jQuery)
This is because of same-origin policy. See more at Mozilla Developer Network or Wikipedia.
Basically, in your example, you need to load the http://nqatalog.negroesquisso.pt/login.php page only from nqatalog.negroesquisso.pt, not localhost.
If you're under Apache, just add an .htaccess file to your directory with this content:
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Headers "content-type"
Header set Access-Control-Allow-Methods "*"
In Ruby on Rails, you can do in a controller:
headers['Access-Control-Allow-Origin'] = '*'
If you get this in Angular.js, then make sure you escape your port number like this:
var Project = $resource(
'http://localhost\\:5648/api/...', {'a':'b'}, {
update: { method: 'PUT' }
}
);
See here for more info on it.
You can make it work without modifying the server by making the browser include the header Access-Control-Allow-Origin: * in the HTTP OPTIONS responses.
In Chrome, use this extension. If you are on Mozilla check this answer.
We also had the same problem with a PhoneGap application tested in Chrome.
On Windows machines we used the batch file below every day before opening Chrome.
Remember that before running this you need to close every Chrome instance in Task Manager, or set Chrome not to keep running in the background.
BATCH (use cmd):
cd /d "D:\Program Files (x86)\Google\Chrome\Application"
chrome.exe --disable-web-security
In Ruby Sinatra
response['Access-Control-Allow-Origin'] = '*'
for everyone or
response['Access-Control-Allow-Origin'] = 'http://yourdomain.name'
When you receive the request, you can do:
var origin = (req.headers.origin || "*");
Then, when you have to respond, go with something like this:
res.writeHead(
206,
{
'Access-Control-Allow-Credentials': true,
'Access-Control-Allow-Origin': origin,
}
);
I have tried to include Edge Side Includes tags (<esi:include>) in my Nuxt project, but the element is not rendered when the page is served from Varnish.
My vue file:
<div>
<Header/>
<esi:include src="http://widget-webapp/header-bar" />
<Footer/>
</div>
I have included the ESI tag in ignoredElements in the Nuxt config to suppress the ESI element warning.
vue: {
config: {
productionTip: false,
devtools: process.env.NODE_ENV === 'prod' || process.env.NODE_ENV === 'stage' ? false : true,
ignoredElements: ['esi:include']
}
},
I have seen that there is a library for React. Is there any type of library or way to include ESI tag in a vue SSR? Any sort of help will be appreciated. Thanks.
VCL code for ESI:
if (bereq.http.host ~ "example\.com$") {
    set beresp.grace = 24h;
    # Enable ESI
    if (beresp.http.content-type ~ "html|xml") {
        set beresp.do_esi = true;
    }
}
Suggested approach
Keep in mind that Varnish doesn't parse ESI tags automatically. You need to instruct Varnish to do this for the right pages using VCL logic.
Have a look at https://www.varnish-software.com/developers/tutorials/example-vcl-template/#14-esi-support. It describes a common ESI configuration for Varnish that is part of our example VCL file.
This is the code:
sub vcl_recv {
set req.http.Surrogate-Capability = "key=ESI/1.0";
}
sub vcl_backend_response {
if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
unset beresp.http.Surrogate-Control;
set beresp.do_esi = true;
}
}
Meaning of the Surrogate headers
What this snippet does is check whether the backend sent a Surrogate-Control header containing an ESI/1.0 key prior to storing the object in the cache.
It allows the application to send a header like Surrogate-Control: content="ESI/1.0", which Varnish processes and uses to activate ESI processing.
However, the web application has no guarantees that there will be a caching server that supports ESI in front of it. That's why Varnish exposes a Surrogate-Capability header that announces the capabilities it has on the edge.
The header that Varnish sends is Surrogate-Capability: "key=ESI/1.0".
Using the Surrogate headers in your application
In your application you should check whether a Surrogate-Capability request header is sent that contains the term ESI/1.0. If that is the case, you can expose <esi:include src="/xyz" /> tags.
Just make sure you set the Surrogate-Control: content="ESI/1.0" response header so that Varnish knows that it should process them.
If the Surrogate-Capability request header is not set or doesn't contain the right terms, you shouldn't expose ESI tags and instead render that content on the web server, rather than on the edge.
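As a sketch of that handshake on the application side (Python, framework-neutral; the render_header_bar function and the /header-bar path are hypothetical names, while the Surrogate headers are the real ones described above):

```python
def render_header_bar(surrogate_capability, esi_src="/header-bar"):
    """Decide between emitting an ESI tag (when an ESI-capable edge
    such as Varnish announced itself) and a server-side fallback."""
    if surrogate_capability and "ESI/1.0" in surrogate_capability:
        # The edge supports ESI: emit the tag and advertise ESI content
        # so Varnish knows it must process the response.
        response_headers = {"Surrogate-Control": 'content="ESI/1.0"'}
        body = '<esi:include src="%s" />' % esi_src
    else:
        # No ESI-capable cache in front: render the fragment ourselves.
        response_headers = {}
        body = "<!-- server-side rendered header bar -->"
    return response_headers, body
```

The key point is that the ESI tag is only emitted when the edge announced the capability, and the Surrogate-Control response header is set in exactly that case.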
Update: handling ESI with the VCL code you provided
Based on the VCL code you provided, ESI processing will only take place if the request host ends in example.com.
I'm not sure if this is a redacted host name, but please make sure the Host header matches the one you send the request with.
The ESI parsing also happens when the Content-Type response header contains html or xml. Please also ensure the content is either HTML or XML.
How to debug?
The assumptions described in the paragraph above are pretty standard and might not be 100% applicable to your situation. That's why running the proper debugging commands makes sense.
Assuming that the homepage of your service contains ESI tags, the following varnishlog command can be used to debug.
Please adjust the URL to the one you're using and attach the output of the command to your question. This output can be used to figure out what's really going on.
sudo varnishlog -g request -q "ReqUrl eq '/'"
I'm trying to connect to a Magento instance using the SOAP API v2, and although I can see the wsdl when I visit http://www.domain.loc/api/v2_soap?type=soap&wsdl=1 in my browser, I am unable to execute the login() call, an error is always thrown.
Code:
$options = array(
'trace' => 1,
'cache_wsdl' => WSDL_CACHE_NONE,
//'location' => 'http://www.domain.loc/index.php/api/v2_soap?wsdl=1',
'soap_version' => SOAP_1_2,
'connection_timeout' => 120,
'exception' => 0,
'encoding' => 'utf-8',
);
try {
$client = new soapClient('http://www.domain.loc/api/v2_soap?type=soap&wsdl=1', $options);
$client->login('username', 'password');
//$result = $client->catalogProductList($sessionId);
//$result = $client->call($session, 'catalog_product.list');
var_dump($client);
} catch (SoapFault $e) {
var_dump($e);
}
Which yields "SoapFault: looks like we got no XML document"
When I uncomment the location option I get "SoapFault: Wrong Version"
When I view the wsdl file, I see the soap:address location set as
<port name="Mage_Api_Model_Server_HandlerPort" binding="typens:Mage_Api_Model_Server_HandlerBinding">
<soap:address location="http://www.domain.loc/index.php/?SID=7o7mn7iiu9tr8u1b9163r305d4dp1l1jrcn1hmnr34utgnhgb6i0&type=soap"/>
Which seems incorrect as it is the URL to the homepage, with the SID and a query param.
Magento version EE1.14
Things I have tried:
Disabled local modules - no change
Using Zend_Soap_Client - no change
Various different URL configurations - tried everything under the sun, no change
Using SOAP v1 - same results
Tried this on a remote instance instead of my local one - same result
Checked phpinfo - yes, SOAP is installed
Tried debugging - when I turn on Xdebug and run my test script, it seems to prevent the script from even running; the browser just indicates loading forever
Tried uncommenting the rewrite line in .htaccess, RewriteRule ^api/([a-z][0-9a-z_]+)/?$ api.php?type=$1 [QSA,L] - no change (yes, mod_rewrite is on and working)
Tried fetching the WSDL, saving it locally, and passing it into the SOAP client constructor - no change
Anyone know how to troubleshoot this? I see several threads with these errors, but either unanswered or have tried their solution and it did not work in this case. Why is my location in the wsdl incorrect? Any help at all would be appreciated. Thanks!
I'm trying to pass my elasticsearch calls from NEST through Fiddler so I can see actual json requests and responses.
I've done the following to create my client, but the requests are not being pushed through the proxy (it doesn't matter if Fiddler is on or off, request still gets to elasticsearch).
ConnectionSettings cs = new ConnectionSettings(uri);
cs.SetProxy(new Uri("http://localhost:8888"),"username", "password");
elasticClient = new ElasticClient(cs);
Fiddler has no username/password requirements so I just pass random text.
I can confirm that at the point just before executing request my elasticClient has the proxy property filled in with Uri specified above, though with a trailing slash added by NEST.
Thanks
Okay, so, I gave up on the NEST proxy settings - they didn't seem to make any difference.
However, setting host on the NEST client to "http://ipv4.fiddler:9200" instead of localhost routes the call through Fiddler and achieved the desired result of allowing me to see both requests and responses from Elasticsearch.
If you want to see the requests that a .NET application makes in Fiddler, you can specify the proxy in the web/app.config file, as documented on Fiddler's website:
http://docs.telerik.com/fiddler/configure-fiddler/tasks/configuredotnetapp
<system.net>
<defaultProxy>
<proxy
autoDetect="false"
bypassonlocal="false"
proxyaddress="http://127.0.0.1:8888"
usesystemdefault="false" />
</defaultProxy>
</system.net>
Handy if changing the hostname to ipv4.fiddler is not an option.
The above code didn't help me, so here is my variant:
var node = new Uri("http://localhost.fiddler:9200");
var settings = new ConnectionSettings(node)
    .DisableAutomaticProxyDetection(false);
This should make it work:
var settings = new ConnectionSettings(...)
.DisableAutomaticProxyDetection(false);
See this answer.
Combining all the suggestions, the working solution is:
var node = new Uri("http://myelasticsearchdomain.com:9200");
var settings = new ConnectionSettings(node)
.DisableAutomaticProxyDetection(false)
.SetProxy(new Uri("http://localhost:8888"), "", "");
This works on NEST ver 7.6.1, and it's not necessary to toggle:
DisableAutomaticProxyDetection
var settings = new ConnectionSettings(...);
settings.Proxy(new Uri(@"http://proxy.url"), "username", "password");
I would like to mask the version or remove the header altogether.
To change the 'Server:' http header, in your conf.py file:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'Microsoft-IIS/6.0'
And use an invocation along the lines of gunicorn -c conf.py wsgi:app
To remove the header altogether, you can monkey-patch gunicorn by replacing its http response class with a subclass that filters out the header. This might be harmless, but is probably not recommended. Put the following in conf.py:
from gunicorn.http import wsgi
class Response(wsgi.Response):
def default_headers(self, *args, **kwargs):
headers = super(Response, self).default_headers(*args, **kwargs)
return [h for h in headers if not h.startswith('Server:')]
wsgi.Response = Response
Tested with gunicorn 18
This hasn't been clearly written here, so I'll confirm that the easiest way for the latest version of Gunicorn (20.1.x) is to add the following lines into the configuration file:
import gunicorn
gunicorn.SERVER = 'undisclosed'
For newer releases (20.0.4): Create a gunicorn.conf.py file with the content below in the directory from where you will run the gunicorn command:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'My WebServer'
It's better to change it to something unique than remove it. You don't want to risk, e.g., spiders thinking you're noncompliant. Changing it to the name of software you aren't using can cause similar problems. Making it unique will prevent the same kind of assumptions ever being made. I recommend something like this:
import gunicorn
gunicorn.SERVER_SOFTWARE = 'intentionally-undisclosed-gensym384763'
You can edit __init__.py to set SERVER_SOFTWARE to whatever you want. But I'd really like the ability to disable this with a flag so I didn't need to reapply the patch when I upgrade.
My monkey-patch-free solution involves wrapping the default_headers method:
import gunicorn.http.wsgi
from six import wraps  # functools.wraps also works on Python 3

def wrap_default_headers(func):
    @wraps(func)
    def default_headers(*args, **kwargs):
        # Drop the Server header from whatever the original method returns.
        return [header for header in func(*args, **kwargs) if not header.startswith('Server: ')]
    return default_headers

gunicorn.http.wsgi.Response.default_headers = wrap_default_headers(gunicorn.http.wsgi.Response.default_headers)
This doesn't directly answer the question, but it can address the issue as well, without monkey-patching gunicorn.
If you are using gunicorn behind a reverse proxy, as it usually happens, you can set, add, remove or perform a replacement in a response header coming downstream from the backend. In our case the Server header.
I guess every Webserver should have an equivalent feature.
For example, in Caddy 2 (currently in beta) it would be something as simple as:
https://localhost {
reverse_proxy unix//tmp/foo.sock {
header_down Server intentionally-undisclosed-12345678
}
}
For completeness I still add a minimal (but fully working) Caddyfile that handles the Server header modification even in a manual http->https redirect (Caddy 2 does the redirect automatically if you don't override it), which can be a bit tricky to figure out correctly.
http://localhost {
# Fact: the `header` directive has less priority than `redir` (which means
# it's evaluated later), so the header wouldn't be changed (and Caddy would
# shown instead of the faked value).
#
# To override the directive ordering only for this server, instead of
# change the "order" option globally, put the configuration inside a
# route directive.
# ref.
# https://caddyserver.com/docs/caddyfile/options
# https://caddyserver.com/docs/caddyfile/directives/route
# https://caddyserver.com/docs/caddyfile/directives#directive-order
route {
header Server intentionally-undisclosed-12345678
redir https://{host}{uri}
}
}
https://localhost {
reverse_proxy unix//tmp/foo.sock {
header_down Server intentionally-undisclosed-12345678
}
}
To check whether it works, just use curl: curl --insecure -I http://localhost and curl --insecure -I https://localhost (--insecure because localhost certificates are automatically generated as self-signed).
It's so simple to setup that you could also think to use it in development (with gunicorn --reload), especially if it resembles your staging/production environment.
I have an IIS 6 server running .NET 4.0 that is returning 304 (Not Modified) for IE, but not for Firefox or Chrome. I never want this call, or the other calls in the same .svc file, to be cached; I always want a 200 with fresh data. I'm not seeing this on localhost on Win7, which runs IIS 7.5.
The response has a Cache-Control: private header, and when running under IIS 6, Fiddler doesn't even show the request.
[OperationContract]
[WebGet(UriTemplate = "usertoken/populate")]
public ClientUserToken PopulateUserToken()
{
return HttpContext.Current.Session["userToken"] as ClientUserToken;
}
I can add an [AspNetCacheProfile("NoCachingProfile")] attribute to each method and
<outputCacheSettings>
<outputCacheProfiles>
<add name="NoCachingProfile" noStore="true" duration="0" varyByParam="none" enabled="false"/>
</outputCacheProfiles>
</outputCacheSettings>
to the web.config, but I'd like to avoid having to add this to every operation.
Is there an IIS setting to do this or should it be code driven? Why does it happen only in IE?
This is not the service's problem; it is IE. Fiddler shows nothing because the browser didn't send anything; it used the last cached response. Use POST if you can; otherwise make each GET request look different, like:
url: "usertoken/populate?" + new Date().getTime()
Older versions of IE are bad about ignoring no-cache on GET requests.
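The cache-busting trick above can be sketched generically (a Python illustration; the URL is the one from the question):

```python
import time

def cache_busted(url):
    """Append a millisecond timestamp so each GET looks unique to IE's cache."""
    sep = "&" if "?" in url else "?"
    return url + sep + "_=" + str(int(time.time() * 1000))
```

jQuery's cache: false option does essentially the same thing, appending a _ timestamp parameter to every request.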