I have a localhost setup using MAMP PRO and XIP.IO for sharing on my local network.
I'm also trying to test API requests from within the same application, but I keep getting the following error in the log file, even though I am using the correct API credentials, which work on a remote server.
2015-12-20T12:52:52+00:00 DEBUG (7): HTTP/1.1 401 Unauthorized
Content-type: text/html
Date: Sun, 20 Dec 2015 12:52:52 GMT
Server: nginx
Www-authenticate: Basic realm="very closed site"
Content-length: 188
Connection: keep-alive
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
If this is indeed due to being on localhost, is there a way to receive API callbacks using MAMP PRO?
If you want the third-party API to be able to send you a POST back, your local website/app must be reachable from the internet via your public IP.
So if I understand your problem correctly, you just have to configure your router (or your internet provider's box) and open a port that you forward to your local MAMP Pro. You can find plenty of tutorials by searching for "Access MAMP Pro remotely".
WARNING: Do this for testing only, and then close the port you opened so as not to leave a security hole.
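For example, assuming you forward external port 8888 on your router to your Mac's port 80 (both numbers are placeholders), you can verify reachability from outside your network, e.g. from a phone on mobile data:
curl -I "http://YOUR_PUBLIC_IP:8888/"
The callback URL you register with the third-party API would then use http://YOUR_PUBLIC_IP:8888/... rather than the xip.io address, which only resolves to your machine inside your local network.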
I have an esp8266 which was directly sending HTTP requests to http://fcm.googleapis.com/fcm/send, but since Google seems to have stopped allowing requests to be sent via HTTP, I need to find a new solution.
I started down a path of having the esp8266 send the request directly via HTTPS, and while it works in a small example, the memory footprint required for the HTTPS request is too much in my full application and I end up crashing the esp8266. While there are still some avenues to explore that might allow me to keep sending messages to the server directly, I think I would like to solve this by sending the request via HTTP to a local "server" (a Raspberry Pi), and have that send the request on via HTTPS.
While I could run a small web server and some code to handle the requests, it seems like this is exactly something Traffic Server should be able to do for me.
I thought this would be a one-liner. I added the following to the remap.config file:
redirect http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
where 192.168.86.77 is the local address of my Raspberry Pi.
When I send requests to http://192.168.86.77/fcm/send:8080 I get back the following:
HTTP/1.1 404 Not Found
Date: Fri, 20 Sep 2019 16:22:14 GMT
Server: Apache/2.4.10 (Raspbian)
Content-Length: 288
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /fcm/send:8080 was not found on this server.</p>
<hr>
<address>Apache/2.4.10 (Raspbian) Server at localhost Port 80</address>
</body></html>
I think 8080 is the right port.
I am guessing this is not the one-liner I thought it would be.
Is this a good fit for Apache Traffic Server?
Can someone point me to what I am doing wrong, and to the right way to accomplish my goal?
Update:
Based on Miles Libbey's answer below, I needed to make the following update to the Arduino/esp8266 code.
Change:
http_.begin("http://fcm.googleapis.com/fcm/send");
To:
http_.begin("192.168.86.77", 8080, "http://192.168.86.77/fcm/send");
where http_ is the instance of HTTPClient (this overload of begin takes host, port, and URI).
And after installing trafficserver on my Raspberry Pi, I needed to add the following two lines to /etc/trafficserver/remap.config:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
reverse_map https://fcm.googleapis.com/fcm/send http://192.168.86.77/fcm/send
Note: the reverse_map line is only needed if you want to get feedback from FCM, i.e., whether the POST was successful or not.
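As a sanity check of the remap from another machine on the network (independent of the esp8266 code), a curl call through the proxy also works; here <server-key> and <device-token> are placeholders for your actual FCM legacy server key and target device token:
curl -x 192.168.86.77:8080 \
  -H "Authorization: key=<server-key>" \
  -H "Content-Type: application/json" \
  -d '{"to":"<device-token>","notification":{"title":"test"}}' \
  "http://192.168.86.77/fcm/send"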
I would try a few changes:
- I'd use map instead of redirect:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
redirect is meant to send your client a 301, which your client would then have to follow; that sounds like it would defeat your purpose. map has ATS do the proxying itself.
- I think your curl may have been off; the port usually goes after the host part, e.g., curl "http://192.168.86.77:8080/fcm/send" (and probably better: curl -x 192.168.86.77:8080 "http://192.168.86.77/fcm/send", so that the port isn't part of the URL being remapped).
I tried to publish a message both to the default exchange and to another exchange via the HTTP Management API, but I always get back an authorization error.
curl -i -u myuser:mypw -XPOST -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' https://myinstance.rmq.cloudamqp.com/api/exchanges/vhost/myvhost/publish
HTTP/1.1 401 Unauthorized
Server: nginx/1.14.2
Date: Mon, 01 Apr 2019 05:27:10 GMT
Content-Type: application/json
Content-Length: 53
Connection: keep-alive
content-security-policy: default-src 'self'
vary: accept, accept-encoding, origin
{"error":"not_authorised","reason":"Access refused."}%
I tried it both on a self-hosted RabbitMQ (installed via Helm on Kubernetes) and on our CloudAMQP instance.
But if I log in to the Management Web UI with the very same user, I can publish a message to the exchange and also consume from a queue.
I expect that the Management Web UI just uses the HTTP API to perform these actions, so I am confused about why it works when I do it via the UI.
Reading all vhosts, on the other hand, also works via the HTTP API.
curl -i -u myuser:mypw https://myinstance.rmq.cloudamqp.com/api/vhosts
HTTP/1.1 200 OK
Can somebody explain to me what's going on here? What puzzles me the most is the fact that it works in the UI using the same user and password.
I figured out the problem: I used the wrong URL path.
For the vhost / and the default exchange, it should be (%2F is the URL-encoded /, and amq.default addresses the default exchange):
http://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
In my case, using the CloudAMQP free plan, I needed to use my username as the vhost in the URL:
https://myinstance.rmq.cloudamqp.com/api/exchanges/[myrandomusernamefromfreeplan]/amq.default/publish
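Putting it together, a publish to the default exchange on the default vhost looks like this; this is a sketch with the same placeholder credentials as above (the earlier not_authorised error was presumably RabbitMQ refusing access to whatever the mistaken path segment named as the vhost):
curl -i -u myuser:mypw -H "content-type:application/json" -XPOST \
  -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' \
  https://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish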
When I send an HTTP request to my CouchDB server as shown in the docs here, CouchDB Proxy Authentication, it doesn't give the response shown in the docs, just an empty user context (see below). What am I doing wrong?
Also, am I able to start a session with this proxy auth? If I try a POST to /_session, I get a 500 error code.
GET /_session HTTP/1.1
Host: 127.0.0.2:5984
User-Agent: curl/7.51.0
Accept: application/json
Content-Type: application/json; charset=utf-8
X-Auth-CouchDB-UserName: john
X-Auth-CouchDB-Roles: blogger
< HTTP/1.1 200 OK
< Cache-Control: must-revalidate
< Content-Length: 132
< Content-Type: application/json
< Date: Sun, 06 Nov 2016 01:10:58 GMT
< Server: CouchDB/2.0.0 (Erlang OTP/17)
<
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
I found in the CouchDB issue tracker that Proxy Authentication is broken in version 2.0.0. Either that, or the docs haven't been updated to indicate that it only works with clusters or something. I changed back to version 1.6.1 and everything works fine. I must say that the documentation for how Proxy Authentication works is very poor.
How it works: your third-party authentication server needs to know the [couch_httpd_auth] secret, and when a client authenticates, you generate an HMAC-SHA1 token of the username, keyed with that secret. Then, on any HTTP request the client makes to the CouchDB server, if you include all of the following headers (see the sketch after this list):
X-Auth-CouchDB-Roles
X-Auth-CouchDB-UserName
X-Auth-CouchDB-Token
that request will be authenticated as that user.
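As a rough sketch with shell tools, where <the-secret> stands for the value of [couch_httpd_auth] secret, and 127.0.0.2:5984 matches the request above:
# HMAC-SHA1 of the username, keyed with the shared secret;
# the hex digest printed is the X-Auth-CouchDB-Token value
echo -n "john" | openssl dgst -sha1 -hmac "<the-secret>"

curl http://127.0.0.2:5984/_session \
  -H "X-Auth-CouchDB-UserName: john" \
  -H "X-Auth-CouchDB-Roles: blogger" \
  -H "X-Auth-CouchDB-Token: <hex-digest-from-above>"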
Also, it is not mentioned in the docs, but a POST to the /_session API using these headers does nothing.
It's not the Proxy Authentication itself which is broken in CouchDB 2.0, it's just that in the current release there's no way to configure the authentication handlers like there was in the old 1.6 days.
There are some patches mentioned in the issue tracker which add proxy authentication to the list of authentication handlers. Furthermore, a pull request which brings back configurability to CouchDB 2.0 was accepted and merged.
However, in order to take advantage of those, I'm afraid you either have to wait until the next release or build CouchDB 2.0 yourself from source.
Proxy authentication is fixed as of CouchDB 2.1.1. The latest (>2.1.1) documentation shows how to configure proxy authentication again, along with the important proxy_use_secret option.
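For reference, a minimal local.ini sketch of that configuration (double-check the handler list against the docs for your exact version):
[chttpd]
authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}

[couch_httpd_auth]
proxy_use_secret = true
secret = <shared-secret-known-to-your-auth-server>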
We need to test a site that requires Windows authentication. We have tried to automate it using CasperJS, but we kept getting a 401.
We found that others have had a similar issue, based on the following discussion. However, the discussion was closed with no real solution.
Someone in that discussion noted that they used page.customHeaders with additional workarounds, but no real steps were provided on how to get this to work.
We also tried updating the URL to the http://username:password@domain.com pattern, and even that did not help.
See Fiddler's sample response from when I tried this:
GET / HTTP/1.1
Host: host
HTTP/1.1 401 Access Denied
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
GET / HTTP/1.1
Host: host
Authorization: NTLM TlRMTVNTUAABAAAAB4IAoAAAAAAAAAAAAAAAAAAAAAB=
HTTP/1.1 401 Access Denied
WWW-Authenticate: NTLM TlRMTVNTUAACAAAADAAMADAAAAAFgoGgCY6qiih5j bAAAAAAAAAAAH4AfgA8AAAAUABPAFIAVAA4ADAAAgAMAFAATwBSAFQA OAAwAAEACgBKAEwASQBNAEEABAAkAH
Actually, there was a good workaround suggested in an issue discussion on PhantomJS's GitHub. You could use a local NTLM proxy and connect to it via CasperJS like so:
casperjs --proxy=localhost:3133 --ignore-ssl-errors=true --ssl-protocol=any script.js
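Independent of CasperJS, curl's built-in NTLM support is a quick way to confirm that the credentials and the NTLM handshake work at all before debugging the proxy setup (DOMAIN, user, and pass are placeholders):
curl --ntlm -u 'DOMAIN\user:pass' -I http://host/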
I've come across an issue where a web application has managed to create a cookie on the client, which, when submitted by the client to Apache, causes Apache to return the following:
HTTP/1.1 400 Bad Request
Date: Mon, 08 Mar 2010 21:21:21 GMT
Server: Apache/2.2.3 (Red Hat)
Content-Length: 7274
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Size of a request header field exceeds server limit.<br />
<pre>
Cookie: ::: A REALLY LONG COOKIE ::: </pre>
</p>
<hr>
<address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
</body></html>
After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this; I was under the impression that browsers were supposed to prevent it from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again.
The issue I'm trying to tackle is: how do I reset the large cookie on the client if, every time the client submits a request, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
Oh dear! I think you'll have to at least temporarily increase the LimitRequestFieldSize configuration option in Apache's httpd.conf to go any further, so that the request can get as far as running your server-side script. Make sure it cleans up the cookies as quickly as possible, before they start to grow again!
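A minimal sketch of that change; 16380 is just a hypothetical value roughly double Apache's 8190-byte default:
# httpd.conf: raise the per-field request header limit so the
# oversized Cookie header can reach the server-side cleanup script
LimitRequestFieldSize 16380
The cleanup script can then expire the cookie by sending a header along the lines of Set-Cookie: bigcookie=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/ (cookie name hypothetical), after which the limit can go back to the default.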