CouchDB Proxy Authentication doesn't work

When I send an HTTP request to my CouchDB server as shown in the docs (CouchDB Proxy Authentication), it doesn't give the response shown in the docs, just an empty user context. What am I doing wrong?
Also, am I able to start a session with this proxy auth? If I try a POST /_session, I get a 500 error code.
GET /_session HTTP/1.1
Host: 127.0.0.2:5984
User-Agent: curl/7.51.0
Accept: application/json
Content-Type: application/json; charset=utf-8
X-Auth-CouchDB-UserName: john
X-Auth-CouchDB-Roles: blogger
< HTTP/1.1 200 OK
< Cache-Control: must-revalidate
< Content-Length: 132
< Content-Type: application/json
< Date: Sun, 06 Nov 2016 01:10:58 GMT
< Server: CouchDB/2.0.0 (Erlang OTP/17)
<
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}

I found in the CouchDB issue tracker that Proxy Authentication is broken in version 2.0.0. Either that, or the docs aren't updated to indicate that it only works with clusters or something. I reverted to version 1.6.1 and everything works fine. I must say that the documentation for how Proxy Authentication works is very poor.
How it works: your third-party authentication server needs the "[couch_httpd_auth] secret", and when a client authenticates, you generate an HMAC-SHA1 token by combining the username and the secret. Then, on any HTTP request you make from the client to the CouchDB server, if you include all of these headers:
X-Auth-CouchDB-Roles
X-Auth-CouchDB-UserName
X-Auth-CouchDB-Token
that request will be authenticated as that user.
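For illustration, here's a minimal sketch of generating the token and making the request with OpenSSL and curl (the secret value is a placeholder; the host and user match the question):
# Compute HMAC-SHA1 of the username, keyed with the [couch_httpd_auth] secret
SECRET=thesecretfromyourconfig   # placeholder
TOKEN=$(printf '%s' john | openssl dgst -sha1 -hmac "$SECRET" | awk '{print $2}')
# Send an authenticated request; CouchDB recomputes the HMAC and compares
curl http://127.0.0.2:5984/_session \
  -H "X-Auth-CouchDB-UserName: john" \
  -H "X-Auth-CouchDB-Roles: blogger" \
  -H "X-Auth-CouchDB-Token: $TOKEN"
Note that X-Auth-CouchDB-Token is only checked when proxy_use_secret is enabled, which it should be in any real deployment.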
Also, it is not mentioned in the docs, but a POST to the /_session API using these headers does nothing.

It's not Proxy Authentication itself that is broken in CouchDB 2.0; it's just that in the current release there's no way to configure the authentication handlers the way there was in the old 1.6 days.
There are some patches mentioned in the issue tracker which add proxy authentication to the list of authentication handlers. Furthermore, there was a pull request, since accepted and merged, which brings configurability back to CouchDB 2.0.
However, in order to take advantage of those, I'm afraid you either have to wait for the next release or build CouchDB 2.0 yourself from source.

Proxy authentication is fixed as of CouchDB 2.1.1. The latest (>2.1.1) documentation shows how to configure proxy authentication again, along with the important proxy_use_secret option.
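For reference, the relevant configuration looks roughly like this in local.ini (a sketch based on the 2.1.1 docs; the section and handler names have shifted between releases, so check the documentation for your exact version, and treat the secret value as a placeholder):
[chttpd]
authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}

[couch_httpd_auth]
proxy_use_secret = true
secret = thesecretfromyourconfig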

Related

RabbitMQ Publish via Management HTTP API not_authorised but works in Web UI

I tried to publish a message to both the default exchange and another exchange via the HTTP Management API, but I always get back an authorization error.
curl -i -u myuser:mypw -XPOST -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' https://myinstance.rmq.cloudamqp.com/api/exchanges/vhost/myvhost/publish
HTTP/1.1 401 Unauthorized
Server: nginx/1.14.2
Date: Mon, 01 Apr 2019 05:27:10 GMT
Content-Type: application/json
Content-Length: 53
Connection: keep-alive
content-security-policy: default-src 'self'
vary: accept, accept-encoding, origin
{"error":"not_authorised","reason":"Access refused."}%
I tried it both on a self-hosted RabbitMQ (installed via Helm on k8s) and on our CloudAMQP instance.
But if I log in to the Management Web UI with the very same user, then I can publish a message to the exchange and also consume from a queue.
I expect that the Management Web UI just uses the HTTP API to perform these actions, so I am confused why it works when I do it via the UI.
Reading all vhosts, on the other hand, also works with the HTTP API.
curl -i -u myuser:mypw https://myinstance.rmq.cloudamqp.com/api/vhosts
HTTP/1.1 200 OK
Can somebody explain to me what's going on here? What puzzles me the most is the fact that it works in the UI using the same user:pw.
I figured out the problem: I was using the wrong URL path.
For the vhost / and the default exchange it should be:
http://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
In my case, using the CloudAMQP free plan, I needed to use my user name as the vhost in the URL:
https://myinstance.rmq.cloudamqp.com/api/exchanges/[myrandomusernamefromfreeplan]/amq.default/publish
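Putting it together, the call from the question should work once the vhost segment is fixed; here is a sketch with the same placeholder credentials, where %2F is the URL-encoded default vhost /:
curl -i -u myuser:mypw -XPOST \
  -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' \
  https://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
A successful publish should return {"routed":true} (or {"routed":false} if nothing is bound to the routing key).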

How to get CasperJS to work with Windows authentication

We need to test a site that requires Windows authentication. We have tried to automate it using CasperJS, but we kept getting a 401.
We found that others had a similar issue based on the following discussion. However, the discussion was closed with no real solution.
Someone in that discussion noted that they used page.customHeaders with additional workarounds, but no real steps were provided on how to get this to work.
We also tried updating the URL to the http://username:password@domain.com pattern, and even that did not help.
Here is Fiddler's sample response when I tried this:
GET / HTTP/1.1
Host: host
HTTP/1.1 401 Access Denied
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
GET / HTTP/1.1
Host: host
Authorization: NTLM TlRMTVNTUAABAAAAB4IAoAAAAAAAAAAAAAAAAAAAAAB=
HTTP/1.1 401 Access Denied
WWW-Authenticate: NTLM TlRMTVNTUAACAAAADAAMADAAAAAFgoGgCY6qiih5jbAAAAAAAAAAAH4AfgA8AAAAUABPAFIAVAA4ADAAAgAMAFAATwBSAFQAOAAwAAEACgBKAEwASQBNAEEABAAkAH
Actually, there was a good workaround suggested in an issue discussion on the PhantomJS GitHub. You could use a local NTLM proxy and connect to it via CasperJS like so:
casperjs --proxy=localhost:3133 --ignore-ssl-error=true --ssl-protocol=any script.js
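If you don't already have such a proxy, one option is the Python-based NTLM Authorization Proxy Server (ntlmaps), which performs the NTLM handshake on the client's behalf. As a rough sketch of its configuration, from memory of its bundled server.cfg (treat the keys and values below as placeholders and check the file shipped with the tool):
# server.cfg (excerpt)
[GENERAL]
LISTEN_PORT:3133
[NTLM_AUTH]
NT_DOMAIN:YOURDOMAIN
USER:username
PASSWORD:password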

What makes Fiddler accelerate my iOS app rest call?

My iOS app makes REST calls to my WCF web service.
The response is very slow, taking over 3 minutes.
However, when I set up Fiddler as a proxy to monitor the iOS traffic, the call finishes in 1 second.
What makes Fiddler magically accelerate the REST calls from iOS?
P.S. Fiddler is set up on a Windows PC that uses the same network as the iOS app.
An example REST call (from Fiddler):
Request
GET https://xxxx.xxxx.com/Deals HTTP/1.1
Host: xxx.xxxx.com
Proxy-Connection: keep-alive
Accept-Encoding: gzip
Content-Type: application/json
Cookie: ASPXAUTH=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Connection: keep-alive
User-Agent: Natural xxxx x.x.x (iPad; iPhone OS 7.0.2; en_US)
Response
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 891437
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.5
LastFetchDateTimeUTC: 2014-02-14T16:52:43.5465273Z
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Fri, 14 Feb 2014 16:52:45 GMT
The response body is a large JSON document (2 MB).
P.S.
Besides Fiddler, we also tried installing Wireshark and using it to capture traffic on the Mac while running the app in the simulator.
We see a lot of DUP ACKs; I guess that's causing TCP retransmission.
P.S.
We pinged from iOS too; there is no delay to the WCF web service.
Help!
UPDATE:
We found something: it looks like the responding speed decreases with the length of the body. Does that mean anything?
The Wireshark logs should provide you with plenty of information about what happens in each case. When Fiddler "magically" makes things faster, it's typically due to:
Better connection reuse (e.g. Fiddler may reuse connections better than the client does)
Better buffer sizes (e.g. not using tiny buffers for read/write)
Non-broken proxy determination behavior
I wrote a bit about these in this blog post.
We solved this problem by proving that the server was a bad one. We deployed the same service on another VM and it works. It must have been a broken network card.

Terrible Apache Bench results on Custom CMS

Please note: This is not a complaint about a shoddy CMS.
I was just toying with Apache Bench and got terrible results with our custom CMS; more exactly, I got:
Requests per second: 0.37 [#/sec] (mean)
When I ran another test with a plain PHP file I got:
Requests per second: 4786.07 [#/sec] (mean)
Another test with a previous version of the CMS:
Requests per second: 6068.66 [#/sec] (mean)
The website(s) are working fine, no problems detected; Google's Webmaster Tools reports our sites as faster than 80% of pages, which is fine, I think.
The test was:
ab -t 30 -c 10 http://example.com/
Maybe some kind of Apache problem? Bad .htaccess config, or similar?
Update:
I just ran a simple test with sockets and the results are similar. The page loads very, very slowly. If I run my script against another website, everything is fine.
Also, there's a small hint of a chunk-length problem. (Bad Apache headers, or line endings?)
The site is gzipped, and with verbose logging turned on, I see these lines in the response:
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 13:10:49 GMT
Server: Apache
Set-Cookie: PHPSESSID=ibnfoqir9fee2koirfl5mhm633; path=/
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Cache-Control: post-check=0, pre-check=0
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
2ef6
Always at the same place, in the middle of the HTML source, then <!DOCTYPE HTML> again. (2ef6 looks like a chunk-size line from the chunked transfer encoding: 0x2ef6 = 12022 bytes.)
Please, help.
Update #2:
Just checked my HTTP headers with Rex Swain's HTTP Viewer and got these results:
HTTP/1.1·200·OK(CR)(LF)
Date:·Wed,·05·Oct·2011·08:33:51·GMT(CR)(LF)
Server:·Apache(CR)(LF)
Set-Cookie:·PHPSESSID=n88g3qcvv9p6irm1fo0qfse8m2;·path=/(CR)(LF)
Expires:·Sat,·26·Jul·1997·05:00:00·GMT(CR)(LF)
Cache-Control:·no-store,·no-cache,·must-revalidate(CR)(LF)
Pragma:·no-cache(CR)(LF)
Cache-Control:·post-check=0,·pre-check=0(CR)(LF)
Vary:·Accept-Encoding(CR)(LF)
Connection:·close(CR)(LF)
Transfer-Encoding:·chunked(CR)(LF)
Content-Type:·text/html;·charset=UTF-8(CR)(LF)
(CR)(LF)
Do you notice anything unusual?
If it works well with ordinary web browsers (as you mentioned in the comments), the CMS must be handling the requests from Apache Benchmark differently.
A quick checklist:
AFAIK Apache Benchmark just sends simple requests without any cookie handling, so try setting -C with a valid cookie (copy the value from a web browser; see the example after this list).
Try to send exactly the same headers to the CMS as the web browser sends. Save a dump of a valid request with netcat, HttpFox or a packet sniffer and set the missing headers with -H.
Profile the CMS on the server while you send it a request with Apache Benchmark. Maybe you'll find the bottleneck. Two poor man's error_log calls with a timestamp in the first and last lines of index.php (or the tested script's entry point) could show how fast the PHP script is and help calculate the overhead of the Apache HTTP Server and the network.
If you run socket tests and browser tests from different machines, it could be a DNS issue (turn off HostnameLookups in Apache). Try to run them from the same machine.
Try ab -k ... or ab -H "Connection: close" ....
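A sketch combining the first two points, reusing the session cookie from the question's log (treat the cookie and header values as placeholders to be copied from a real browser session):
ab -t 30 -c 10 \
  -C "PHPSESSID=ibnfoqir9fee2koirfl5mhm633" \
  -H "Accept-Encoding: gzip" \
  -H "User-Agent: Mozilla/5.0" \
  http://example.com/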
I guess the CMS does some costly initialization when it creates the session, and that happens when it processes the first request. Since Apache Benchmark does not send cookies back, the CMS creates a new session for every request, and that is the cause of the slow answers.
A second guess is that the CMS handles the incoming HTTP headers differently, and the headers sent by Apache Benchmark (or the lack of them) trigger some costly/slow processing. This looks more likely given the report from Google's Webmaster Tools.
Apache Benchmark sends HTTP/1.0 requests, for example:
GET / HTTP/1.0
Host: localhost:9100
User-Agent: ApacheBench/2.3
Accept: */*
It looks to me like your server does not send any HTTP header about its Keep-Alive settings, but it assumes that the client uses keep-alive even though the client uses HTTP/1.0. That is not RFC-compliant behaviour:
From RFC 2616, 19.6.2 Compatibility with HTTP/1.0 Persistent Connections:
Some clients and servers might wish to be compatible with some
previous implementations of persistent connections in HTTP/1.0
clients and servers. Persistent connections in HTTP/1.0 are
explicitly negotiated as they are not the default behavior.
By default Apache Benchmark doesn't use keep-alive, so after the response arrives it waits for the socket to close. The server closes it after 15 seconds of idle time. Downloading the main page with wget also takes 15 seconds, and wget also uses HTTP/1.0 in its requests.
I think it's a bug in the PHP code of the CMS, since ab works well on the same server with a plain PHP file. Anyway, you can work around it by using keep-alive connections (-k):
ab -k -t 30 -c 10 http://example.com/
or with explicitly disabling persistent connections:
ab -H "Connection: close" -t 30 -c 10 http://example.com/
but it's still a server-side issue, and your original ab command was right.
Please note that this bug probably affects only HTTP/1.0 clients (like Apache Benchmark and wget); users with regular browsers will not notice it.

Twitter API issue

I'm integrating a "Sign in with Twitter" function on my site.
So, I send a request to https://twitter.com/oauth/request_token, get a token, and redirect to https://twitter.com/oauth/authenticate?oauth_token=%oauth_token%
Then I receive the callback with oauth_token and oauth_verifier.
This goes fine.
But then I need to call https://api.twitter.com/1/account/verify_credentials.json to get the authorized client's details.
I'm sending:
GET https://api.twitter.com/1/account/verify_credentials.json
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: q=0.8,en-us;q=0.5,en;q=0.3
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1
X-Auth-Service-Provider: https://api.twitter.com/1/account/verify_credentials.json
X-Verify-Credentials-Authorization: OAuth realm="http://api.twitter.com/", oauth_signature="acYFjEgUrTcyb4FMBoJF8MlwZGw%3D", oauth_timestamp="1286899670", oauth_consumer_key="%CONSUMER_KEY%", oauth_nonce="268310006", oauth_token="%oauth_token%", oauth_version="1.0", oauth_signature_method="HMAC-SHA1"
%oauth_token% - the token received when Twitter redirects back to the client
%CONSUMER_KEY% - my Twitter account's consumer key
And getting back
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache, max-age=300
Connection: close
Date: Tue, 12 Oct 2010 16:07:45 GMT
Server: hi
Vary: Accept-Encoding
WWW-Authenticate: Basic realm="Twitter API"
{"error":"Could not authenticate you.","request":"/1/account/verify_credentials.json"}
Can anyone please advise me on what's wrong here?
Thanks!
After you receive the callback, you have to make a request to POST oauth/access_token to exchange the temporary request_token for a permanent access_token associated with the user. Once you receive the access_token, you can perform the GET account/verify_credentials request.
Here is a good flow chart explaining how the full OAuth process works.
Flow Chart
It sounds like you're two-thirds of the way through the authentication. Now you need to exchange your authorised request token for a permanent access token.
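For reference, the exchange step is a signed POST to oauth/access_token that carries the oauth_verifier from the callback; here is a sketch in the same placeholder style as the question (the signature, timestamp and nonce must be computed just as for any other signed OAuth 1.0a request):
POST /oauth/access_token HTTP/1.1
Host: api.twitter.com
Authorization: OAuth oauth_consumer_key="%CONSUMER_KEY%", oauth_token="%oauth_token%", oauth_verifier="%oauth_verifier%", oauth_signature_method="HMAC-SHA1", oauth_timestamp="%timestamp%", oauth_nonce="%nonce%", oauth_version="1.0", oauth_signature="%signature%"
The response body contains the permanent oauth_token and oauth_token_secret, which you then use to sign the verify_credentials call.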
You are using a header to pass the parameters (X-Verify-Credentials-Authorization); instead, you should be using the GET method. If you are using the PHP Zend Framework's OAuth component, it should look like:
$client->setMethod(Zend_Http_Client::GET);