I'm trying to develop a Java client for my site, but I can't store cookies with HttpClient 4. The server sends this header in the GET response:
Set-Cookie: PHPSESSID=ea384f86b9b89a749f1684d9d3980820; path=/
But in my code, after the request, I run:
CookieManager m = (CookieManager) CookieHandler.getDefault();
System.out.println("Count : " + m.getCookieStore().getCookies().size());
I always get Count : 0.
HttpClient creation:
CookieManager cookiem = new CookieManager();
cookiem.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
CookieHandler.setDefault(cookiem);
...
httpClient = new DefaultHttpClient(params);
httpClient.getParams().setParameter(ClientPNames.COOKIE_POLICY, org.apache.http.client.params.CookiePolicy.BEST_MATCH);
But I get the same result. What am I doing wrong?
CookieManager is a Java 6 specific class used by the JRE's internal HTTP client.
Apache HttpClient manages HTTP state differently and cannot (and probably should not) make use of Java 6 specific classes.
For details on HTTP state management with Apache HttpClient please see:
http://hc.apache.org/httpcomponents-client-ga/tutorial/html/statemgmt.html
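As a rough sketch (assuming HttpClient 4.x; the URL is a placeholder), cookies received by the client end up in HttpClient's own CookieStore rather than in java.net.CookieManager:
import org.apache.http.HttpResponse;
import org.apache.http.client.CookieStore;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.cookie.Cookie;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class CookieDemo {
    public static void main(String[] args) throws Exception {
        // Give the client its own cookie store instead of relying on CookieHandler.
        CookieStore cookieStore = new BasicCookieStore();
        DefaultHttpClient httpClient = new DefaultHttpClient();
        httpClient.setCookieStore(cookieStore);

        HttpGet get = new HttpGet("http://www.example.com/");  // placeholder URL
        HttpResponse response = httpClient.execute(get);
        EntityUtils.consume(response.getEntity());

        // The PHPSESSID cookie from the Set-Cookie header should show up here.
        for (Cookie cookie : cookieStore.getCookies()) {
            System.out.println(cookie.getName() + " = " + cookie.getValue());
        }
    }
}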
We have a HashiCorp Vault KV v1 engine mounted at /foo instead of /v1. How do I tell the Spring Vault library to use /foo? Using Spring Vault 2.3.2, Spring Boot 2.7.3, Java 11.
I pass in foo/path/to/secret and I get 403 Forbidden at /v1/foo/path/to/secret.
@Autowired
VaultOperations vault;
...
public Foo getFoo(String fooName) {
    VaultResponseSupport<Foo> response = vault.read("foo/path/to/" + fooName, Foo.class);
    if (response != null && response.getData() != null) {
        return response.getData();
    }
    throw new FooException("No foo found: " + fooName, HttpStatus.BAD_REQUEST);
}
Note: The details of the HTTP requests and responses below were obtained via wire logging (org.apache.http.wire = DEBUG).
Note: login using approle is successful.
POST /v1/auth/approle/login HTTP/1.1
X-Vault-Namespace: acme/foo
...
{"role_id":"deadbeef-nota-real-idno-abcdef123456","secret_id":"deadbeef-fake-valu-1234-abcdef123456"}
...
HTTP/1.1 200 OK
...
GET /v1/kv-srs/gdk/client/BETA1-GW HTTP/1.1
X-Vault-Token: token-from-response-above
...
HTTP/1.1 403 Forbidden
Note: I considered adding the /foo path to the VaultEndpoint in our configuration, but /v1 is working correctly for the login. So, whatever we do, it has to work with the default login path.
Note: I'm not trying to map secrets to properties or anything else. Just look up a secret value by its key.
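In case it helps to make the goal concrete, this is roughly the shape of the call I would expect to work against a KV v1 engine mounted at foo (untested sketch; Foo and the secret path are placeholders, and I'm assuming Spring Vault's opsForKeyValue API here):
import org.springframework.vault.core.VaultKeyValueOperations;
import org.springframework.vault.core.VaultKeyValueOperationsSupport.KeyValueBackend;
import org.springframework.vault.core.VaultOperations;
import org.springframework.vault.support.VaultResponseSupport;

public class FooReader {

    private final VaultKeyValueOperations kv;

    public FooReader(VaultOperations vault) {
        // Bind the key/value operations to the KV v1 engine mounted at "foo".
        this.kv = vault.opsForKeyValue("foo", KeyValueBackend.KV_1);
    }

    public Foo getFoo(String fooName) {
        // Expected to issue GET {endpoint}/v1/foo/path/to/{fooName}
        VaultResponseSupport<Foo> response = kv.get("path/to/" + fooName, Foo.class);
        return response != null ? response.getData() : null;
    }
}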
I'm invoking a HTTP GET request to another system using RESTEasy with resteasy-client:3.12.1.Final (provided by WildFly 20.0.1.Final).
ResteasyClient client = new ResteasyClientBuilder().build();
ResteasyWebTarget target = client.target(fromPath(url));
Response response = target.request()
        .header(AUTHORIZATION, "Basic <authentication_token>")
        .accept(APPLICATION_JSON)
        .get();
As you can see, I don't configure anything "special" in the ResteasyClientBuilder, but for some reason all requests contain the header Accept-Encoding: gzip, which causes some trouble on the remote side.
The RESTEasy documentation however states:
RESTEasy supports (though not by default - see below) GZIP
decompression. If properly configured, the client framework or a
JAX-RS service, upon receiving a message body with a Content-Encoding
of "gzip", will automatically decompress it. The client framework can
(though not by default - see below) automatically set the
Accept-Encoding header to be "gzip, deflate" so you do not have to set
this header yourself.
From my understanding, the gzip header should not be set by default. Or are there any other default configurations that might add this header?
You might want to try this:
Variant variant = new Variant(MediaType.APPLICATION_JSON_TYPE, "", "gzip");
Response response = client.target(generateURL("/big/send")).request().post(Entity.entity(b, variant));
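If the goal is simply to stop advertising gzip to the remote side, another thing worth trying (untested; I'm not sure whether whatever filter adds the header would still override it) is to set Accept-Encoding explicitly on the request, reusing the target from the question:
Response response = target.request()
        .header(AUTHORIZATION, "Basic <authentication_token>")
        .header(HttpHeaders.ACCEPT_ENCODING, "identity")  // javax.ws.rs.core.HttpHeaders
        .accept(APPLICATION_JSON)
        .get();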
I was able to set up the CherryPy HTTPServer to require an SSL client certificate using the following code:
ssl_certificate = os.environ.get("SSL_CERTIFICATE")
ssl_adapter = BuiltinSSLAdapter(
    certificate=ssl_certificate,
    private_key=os.environ["SSL_PRIVATE_KEY"],
    certificate_chain=os.environ.get("SSL_CERTIFICATE_CHAIN")
)
verify_mode = ssl.CERT_REQUIRED
ssl_adapter.context.verify_mode = verify_mode
HTTPServer.ssl_adapter = ssl_adapter
Now I am trying to get the SSL client certificate information from my request handler, but I can't figure out how. After reading https://github.com/cherrypy/cheroot/blob/master/cheroot/ssl/builtin.py#L419 it seems that the WSGI environment should be populated with SSL_CLIENT* variables, but I could not find any method or property on the request object that would let me fetch this information.
How can I obtain these variables from a request handler?
I learned the answer from a conversation with the CherryPy maintainers on Gitter.
A CherryPy request object may contain more attributes than those documented in the API; these attributes are set dynamically on the request object as part of the WSGI handling:
https://github.com/cherrypy/cherrypy/blob/master/cherrypy/_cpwsgi.py#L319
...
request.multithread = self.environ['wsgi.multithread']
request.multiprocess = self.environ['wsgi.multiprocess']
request.wsgi_environ = self.environ
...
Knowing this, to obtain the WSGI environment, which includes the SSL_CLIENT* variables, we just need to import cherrypy and access it through the request object:
import cherrypy
...
print(cherrypy.request.wsgi_environ)
...
When I send an HTTP request to my CouchDB server as shown in the docs here: CouchDB Proxy Authentication, it doesn't give the response shown in the docs, just empty user data. What am I doing wrong?
Also, can I start a session with this proxy auth? If I try POST /_session, I get a 500 error code.
GET /_session HTTP/1.1
Host: 127.0.0.2:5984
User-Agent: curl/7.51.0
Accept: application/json
Content-Type: application/json; charset=utf-8
X-Auth-CouchDB-UserName: john
X-Auth-CouchDB-Roles: blogger
< HTTP/1.1 200 OK
< Cache-Control: must-revalidate
< Content-Length: 132
< Content-Type: application/json
< Date: Sun, 06 Nov 2016 01:10:58 GMT
< Server: CouchDB/2.0.0 (Erlang OTP/17)
<
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
I found in the CouchDB issue tracker that the Proxy Authentication is broken in version 2.0.0. Either that or the docs aren't updated to indicate that it only works with clusters or something. I changed back to version 1.6.1 and everything works fine. I must say that the documentation for how Proxy Authentication works is very poor.
How it works is that your third-party authentication server needs to know the "[couch_httpd_auth] secret", and when a client authenticates, you generate an HMAC-SHA1 token from the username and the secret. Then, on any HTTP request you make from the client to the CouchDB server, if you include all of these headers:
X-Auth-CouchDB-Roles
X-Auth-CouchDB-UserName
X-Auth-CouchDB-Token
that request will be authenticated as that user.
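For example, a minimal Java sketch of computing the X-Auth-CouchDB-Token value, assuming the token is the hex-encoded HMAC-SHA1 of the username keyed with the shared secret (the secret value below is made up):
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class CouchProxyToken {

    // Hex-encoded HMAC-SHA1 of the username, keyed with the [couch_httpd_auth] secret.
    static String token(String secret, String username) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] raw = mac.doFinal(username.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : raw) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Made-up secret; use the value from your CouchDB config.
        System.out.println(token("92de07df7e7a76cfe39566e10686e19e", "john"));
    }
}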
Also, it is not mentioned in the docs, but POST on the /_session API using these headers does nothing.
It's not proxy authentication itself that is broken in CouchDB 2.0; it's just that in the current release there's no way to configure the authentication handlers the way there was back in the 1.6 days.
There are some patches mentioned in the issue tracker which add proxy authentication to the list of authentication handlers. Furthermore, there was a pull request, since accepted and merged, which brings configurability back to CouchDB 2.0.
However, to take advantage of those, I'm afraid you either have to wait for the next release or build CouchDB 2.0 yourself from source.
Proxy authentication is fixed as of CouchDB 2.1.1. The latest (>2.1.1) documentation shows how to configure proxy authentication again, along with the important proxy_use_secret option.
Here's what happens on the local server when the application invokes an HTTP request against local IIS.
request.Credentials = CredentialCache.DefaultNetworkCredentials;
request.PreAuthenticate = true;
request.KeepAlive = true;
When I execute the request, I can see the following series of HTTP calls in Fiddler:
1. Request without authorization header, results in 401 with WWW-Authenticate: NTLM+Negotiate
2. Request with Authorization: Negotiate (Base64 string 1), results in 401 with WWW-Authenticate: Negotiate (Base64 string 2)
3. Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate: Negotiate (Base64 string 4)
4. Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate: NTLM+Negotiate
Apparently the client and the server (both running on the same machine) are trying to handshake, but in the end authorization fails.
What is strange is that if I disable Windows authentication of the site and enable Basic authentication and send user/pwd explicitly, it all works. It also works if I use NTLM authentication and try to access the site from the browser specifying my credentials.
Well, after several hours of struggling, I figured out what the problem was. To be able to inspect network traffic in Fiddler, I had defined a Fiddler rule:
if (oSession.HostnameIs("MYAPP")) { oSession.host = "127.0.0.1"; }
Then I used "MYAPP" instead of "localhost" in the Web app reference, and Fiddler happily displayed all session information.
But server security was far less happy, so this alias basically broke challenge-response authentication on the local server. Once I replaced the alias with "localhost", it all worked.