I'm using Drupal 7 with the Media: YouTube module. The module calls the YouTube oEmbed API.
This is an example of a URL the module will call:
http://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=YZqqD1Rv5BI
On my desktop, this returns a JSON response and everything is fine.
However on my website's server, I get a 503 Service unavailable error.
Actually I first get a 302 Found response saying the URL has moved, then a 503 error.
Here is what I get when I do a wget manually:
wget http://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=YZqqD1Rv5BI
--2014-09-28 21:55:49-- http://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=YZqqD1Rv5BI
Resolving www.youtube.com (www.youtube.com)... 2a00:1450:4007:808::1004, 173.194.40.131, 173.194.40.132, ...
Connecting to www.youtube.com (www.youtube.com)|2a00:1450:4007:808::1004|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://ipv6.google.com/sorry/IndexRedirect?continue=http://www.youtube.com/oembed%3Furl%3Dhttps://www.youtube.com/watch%3Fv%3DYZqqD1Rv5BI [following]
--2014-09-28 21:55:49-- http://ipv6.google.com/sorry/IndexRedirect?continue=http://www.youtube.com/oembed%3Furl%3Dhttps://www.youtube.com/watch%3Fv%3DYZqqD1Rv5BI
Resolving ipv6.google.com (ipv6.google.com)... 2a00:1450:4007:808::1008
Connecting to ipv6.google.com (ipv6.google.com)|2a00:1450:4007:808::1008|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2014-09-28 21:55:49 ERROR 503: Service Unavailable.
Any help would be much appreciated.
Thanks in advance
I ran into the same error, but unfortunately the issue is not with the module (otherwise I would fix it and submit a patch to the module maintainer).
As you already tested, even a simple wget hits the same issue, and it comes from the fact that the request goes out over IPv6. If you force requests to YouTube to be handled over IPv4, the problem goes away. But this is just a work-around, not a real fix.
I found a way to force IPv4 resolution in PHP's cURL:
$url = 'http://www.youtube.com/oembed?format=json&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DZVSd5aSXlQ0';
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $url);
curl_setopt($c, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
$json = curl_exec($c);
$status = curl_getinfo($c,CURLINFO_HTTP_CODE);
curl_close($c);
This solution is working for me.
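For completeness, here is a minimal sketch of consuming the result of that snippet; it simply continues the variables from the code above and assumes the oEmbed endpoint returned JSON.
// Continues the snippet above: $json holds the raw oEmbed response body,
// $status the HTTP status code returned by YouTube.
if ($status == 200) {
    $data = json_decode($json, TRUE);
    // A successful oEmbed response contains fields such as 'title' and 'html'.
    print $data['title'] . "\n";
} else {
    // Anything else (e.g. the 503 seen over IPv6) should be treated as an error.
    print "oEmbed request failed with HTTP status $status\n";
}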
I have an ESP8266 which was directly sending HTTP requests to http://fcm.googleapis.com/fcm/send, but since Google seems to have stopped allowing requests to be sent via plain HTTP, I need to find a new solution.
I started down a path of having the ESP8266 send the request via HTTPS directly, and while it works in a small example, the memory footprint required for the HTTPS request is too much in my full application and I end up crashing the ESP8266. While there are still some avenues to explore that might allow me to keep sending messages to the server directly, I would like to solve this by sending the request via HTTP to a local "server" (a Raspberry Pi), and have that forward the request via HTTPS.
While I could run a small web server and some code to handle the requests, it seems like this is exactly something Apache Traffic Server should be able to do for me.
I thought this would be a one-liner. I added the following to the remap.config file:
redirect http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
where 192.168.86.77 is the local address of my raspberry pi.
When I send requests to http://192.168.86.77/fcm/send:8080 I get back the following:
HTTP/1.1 404 Not Found
Date: Fri, 20 Sep 2019 16:22:14 GMT
Server: Apache/2.4.10 (Raspbian)
Content-Length: 288
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /fcm/send:8080 was not found on this server.</p>
<hr>
<address>Apache/2.4.10 (Raspbian) Server at localhost Port 80</address>
</body></html>
I think 8080 is the right port.
I am guessing this is not the one-liner I thought it would be.
Is this a good fit for Apache Traffic Server?
Can someone point me to what I am doing wrong and what is the right way to accomplish my goal?
Update:
Based on Miles Libbey's answer below, I needed to make the following update to the Arduino/ESP8266 code.
Change:
http_.begin("http://fcm.googleapis.com/fcm/send");
To:
http_.begin("192.168.86.77", 8080, "http://192.168.86.77/fcm/send");
where http_ is the HTTPClient instance.
And after installing trafficserver on my Raspberry Pi, I needed to add the following two lines to /etc/trafficserver/remap.config:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
reverse_map https://fcm.googleapis.com/fcm/send http://192.168.86.77/fcm/send
Note that the reverse_map line is only needed if you want to get feedback from FCM, i.e. whether the POST was successful or not.
I would try a few changes:
- I'd use map instead of redirect:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
redirect is meant to send your client a 301, which the client would then have to follow itself, and that sounds like it would defeat your purpose. map should have ATS do the proxying.
- I think your curl may have been off -- the port usually goes after the host part, e.g. curl "http://192.168.86.77:8080/fcm/send" (and probably better: curl -x 192.168.86.77:8080 "http://192.168.86.77/fcm/send", so that the port isn't part of the remapped URL).
(sorry for my bad english)
I followed these instructions:
https://support.google.com/domains/answer/6147083?hl=it
(I set up dynamic DNS.)
I made a script on Linux using:
https://username:password@domains.google.com/nic/update?hostname=www.systemcamera.org
I hoped it would be OK, but the domain doesn't work: when I type www.systemcamera.org nothing comes up, but when I type the IP address it works.
In the script I typed:
wget "https://username:pswd@domains.google.com/nic/update?hostname=www.systemcamera.org" -O dns_update_results.txt
I ran it (./script), but I can't reach my website.
This is the result when I run the script:
--2018-06-26 13:14:31-- https://5RodFTlBOsTBB1gL:password@domains.google.com/nic/update?hostname=www.systemcamera.org
Resolving domains.google.com (domains.google.com)... 172.217.23.110, 2a00:1450:4002:800::200e
Connecting to domains.google.com (domains.google.com)|172.217.23.110|:443... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Authentication selected: Basic realm="Google Domains Dynamic Dns (www.systemcamera.org)"
Reusing existing connection to domains.google.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘dns_update_results.txt’
dns_update_results.txt [ <=> ] 19 --.-KB/s in 0s
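For comparison, the same update request can be made from PHP with cURL, passing the credentials through CURLOPT_USERPWD rather than embedding them in the URL; the hostname and credentials below are placeholders.
<?php
// Sketch of the Google Domains dynamic DNS update call.
// 'username' and 'password' stand in for the credentials generated in the Google Domains console.
$c = curl_init('https://domains.google.com/nic/update?hostname=www.systemcamera.org');
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_USERPWD, 'username:password');
$result = curl_exec($c);
curl_close($c);
// The API answers with a short status string such as "good <ip>" or "nochg <ip>".
echo $result . "\n";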
I have 3 server keys in my Google console for different servers.
In the beginning only one key seemed to work, and that was the one restricted to my local IP address.
After debugging with cURL on the staging server I found that the response was:
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "dailyLimitExceededUnreg",
        "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
        "extendedHelp": "https://code.google.com/apis/console"
      }
    ],
    "code": 403,
    "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
  }
}
This didn't really make sense to me, because the key was provided and the key was definitely set with the appropriate IP address.
So I started the debugging process and for the staging server I tried some cURL IP discovery tools.
Suddenly icanhazip.com gave me the IPv6 address of my server; after adding this to the allowed IP list it suddenly worked. Some weird behavior if you ask me.
So I still had my production server to fix, and I found out that this one doesn't have an IPv6 address: the same tool returns the IPv4 address for me, and in my control panel I also didn't set up an IPv6 address.
As Google doesn't have a big support platform, I'm hoping someone here has run into the same problem.
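If you want to see both addresses your server presents to the outside, forcing each protocol in turn with CURLOPT_IPRESOLVE against an external echo service works; this is only a sketch and assumes icanhazip.com (or any similar service) is reachable from the server.
<?php
// Sketch: discover the server's outbound IPv4 and IPv6 addresses so both can be
// added to the API key's allowed IP list. Assumes icanhazip.com is reachable.
foreach (array('IPv4' => CURL_IPRESOLVE_V4, 'IPv6' => CURL_IPRESOLVE_V6) as $label => $resolve) {
    $c = curl_init('https://icanhazip.com/');
    curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($c, CURLOPT_IPRESOLVE, $resolve);
    $ip = curl_exec($c);
    echo $label . ': ' . ($ip !== false ? trim($ip) : 'not available') . "\n";
    curl_close($c);
}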
I'm trying to use Google's Custom Search API.
Apparently I was using cURL in the wrong way on my server. It now works like a charm!
(I had to add apostrophes around the URL; without the quotes the shell treats each & in the query string as a command separator, so the full URL never reaches cURL.)
If everything else fails, you could just scrape a Google search results page and parse it. Something like this (PHP):
$query = 'Pepijn';
$ch = curl_init();
// Build the search URL; urlencode() keeps multi-word queries valid.
curl_setopt($ch, CURLOPT_URL, 'http://www.google.com/search?hl=en&tbo=d&site=&source=hp&q=' . urlencode($query));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$output = curl_exec($ch);
curl_close($ch);
echo $output;
You would need to sift through all the returned HTML, but the results are basically in the list items under #search ol.
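A rough sketch of that parsing step with DOMDocument and DOMXPath, assuming Google's markup still places the results in list items under #search ol (the markup changes frequently, so the XPath is only illustrative):
// Rough sketch: extract result links from the HTML fetched above ($output).
$dom = new DOMDocument();
libxml_use_internal_errors(true);   // Google's HTML is not valid XML, silence the warnings
$dom->loadHTML($output);
libxml_clear_errors();
$xpath = new DOMXPath($dom);
foreach ($xpath->query('//div[@id="search"]//ol//li//a') as $link) {
    echo $link->getAttribute('href') . "\n";
}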
My client is using Unleashedsoftware.com to connect to a Magento store, but it gives this error:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Body>
<SOAP-ENV:Fault>
<faultcode>WSDL</faultcode>
<faultstring>
SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/' : Premature end of data in tag definitions line 2
</faultstring>
</SOAP-ENV:Fault>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When browsing http://www.domain.com/index.php/api/v2_soap/index/, Firebug gives me a "500 Internal Server Error".
When I browse http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/, I am getting valid XML data.
I checked the server log files and it seems like:
[Thu Aug 30 22:22:25 2012] [warn] [client 92.92.92.92] mod_fcgid: stderr: in /home/doaminuser/public_html/lib/Zend/Soap/Server.php on line 762
I've been searching for a couple of days now, and today I tried duplicating the entire site to another test server, where it seems to be working! So it looks like a server issue.
Does anybody have an idea what the issue could be?
Is there a better way of debugging this issue, any sample code or debugging tips?
Magento version is 1.6.2
Thank you.
There are lots of cases where Magento's SOAP API fails because of problems your Magento server has communicating with itself.
That is, PHP's SOAP implementation requires that the SOAP server itself fetch the WSDL file via HTTP, and a local network configuration issue can get in the way of Magento fetching its own WSDL.
You can debug this by SSHing into your Magento server, and running the following command
curl -l 'http://www.example.com/index.php/api/v2_soap/index/wsdl/1/' > /tmp/wsdl.xml
and then examining the wsdl.xml file. Because you're performing this from your web-server, you may get different results than when you're performing it from your local browser.
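If you prefer to run the same check in PHP (closer to what the SOAP extension actually does), a small sketch like the following, executed on the web server itself, shows whether the WSDL comes back complete; the store URL is a placeholder.
<?php
// Sketch: fetch the WSDL from the server itself and check that it parses as XML.
// Replace the placeholder URL with your own store's WSDL URL.
$wsdlUrl = 'http://www.example.com/index.php/api/v2_soap/index/wsdl/1/';
$c = curl_init($wsdlUrl);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
$wsdl = curl_exec($c);
curl_close($c);
libxml_use_internal_errors(true);
if ($wsdl === false || simplexml_load_string($wsdl) === false) {
    echo "WSDL could not be fetched, or is truncated/invalid XML\n";
} else {
    echo "WSDL fetched and parsed fine from the server itself\n";
}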
I had a similar problem when calling the URL
http://www.store.com/index.php/api/v2_soap/?wsdl
After some time I received a 500 - Internal Server Error, along with a "Premature end of script headers" message in the Apache error log.
After a whole day of research I figured out that the Timeout directive of Apache (configured in httpd.conf on a Linux environment) was set to 20, which caused the server to send the 500 error after 20 seconds. The problem is that, in my case, Magento needs longer than that to crawl through all the wsdl.xml files in order to build the WSDL output (if you are using Magento SOAP v2).
Maybe you should check your Timeout directive. Hope that helps.
"I have memories of this. What worked for me was to put the hostname
in /etc/hosts on the server plus the www alias on 127.0.0.1 However,
in this instance the server was in the building rather than in some
ISP place and the LAN had Windows computers on it. Windows users had
downloaded lots of trojan-virus-porn things that were spending the
whole time spamming the network so the real problem was with the
Windows computers on the network, not with the server or with Magento.
After fdisking the PC's the problem was solved."
Thank you! I've been struggling with this for 2 days on Magento 1.6 and Windows Server 2008. Adding this line to the hosts file (C:\Windows\System32\drivers\etc) solved the issue for me:
127.0.0.1 www.Domain.com
Also remember to fix your Magento SOAP role, because the Role Resources don't save in 1.6 unless you fix this file:
MagentoRoot\app\code\core\Mage\Adminhtml\Block\Api\Tab\Rolesedit.php
replace this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getPermission() == 'allow') {
with this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getApiPermission() == 'allow') {
In my case the issue was that the mod_security rule "PHP Easter Egg Access" was enabled.
Rule ID: 380800
Once it was disabled, the API access worked.
An indicator was in the Apache log file:
Jun 19 09:15:52 httpd[1024961]: [error] [client xyz.xyz.xyz.xyz] ModSecurity: [file "/usr/local/apache/conf/modsec/99_asl_jitp.conf"] [line "116"] [id "380800"] [rev "1"] [msg "Atomicorp.com WAF Rules - Virtual Just In Time Patch: PHP Easter Egg Access"] [data "phpe9568f35-d428-11d2-a769-00aa001acf42"] [severity "CRITICAL"] Access denied with code 403 (phase 2). Pattern match "php(?:e9568f3[56]-d428-11d2-a769-00aa001acf42|b8b5f2a0-3c92-11d3-a3a9-4c7b08c10000)" at REQUEST_URI. [hostname "www.yoursever.com"]...
Magento version: 1.7.0.2
PHP version: 5.3.26
More information about the PHP Easter Egg Access rule:
http://www.atomicorp.com/forums/viewtopic.php?f=3&t=5057
http://www.0php.com/php_easter_egg.php
For those wanting a quick test script to replicate the issue (useful when trying to convince your hosting provider that it's a problem on their end), use:
<?php
$server = new SoapServer("http://<url to your magento shop>/index.php/api/v2_soap/index/wsdl/1/");
?>
This is the line in /lib/Zend/Soap/Server.php that triggers the error.
In my case if you browsed to:
http://< url to your magento shop >/index.php/api/v2_soap/index/wsdl/1/
the XML was fine, but if you ran the above PHP script on the server, the error was thrown.
This error most often appeared for me when omitting the www from the domain given in the Magento SOAP URL. The URL has to match the base URL specified in the Magento config.
I'm trying to download multiple PDF files from a web page (I'm using Mac OS X 10.6.1). Here is an example of what I'm getting (www.website.org is just an example):
~> wget -r -A.pdf http://www.website.org/web/
--2009-10-09 19:04:53-- http://www.website.org/web/
Resolving www.website.org... 208.43.98.107
Connecting to www.website.org|208.43.98.107|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2009-10-09 19:04:54 ERROR 403: Forbidden.
~>
How can I overcome this 403 error? Should I use curl instead?
Perhaps you don't have permission to access the directory http://www.website.org/web/. Use this link in your web browser and see if you are redirected to a more specific URL. Then use that URL as input for wget.