Apache Http Client Put Request Error

I'm trying to upload a file using the Apache HttpClient's PUT method. The code is as below:
def putFile(resource: String, file: File): (Int, String) = {
  val httpClient = new DefaultHttpClient(connManager)
  httpClient.getCredentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(un, pw))
  val url = address + "/" + resource
  val put = new HttpPut(url)
  put.setEntity(new FileEntity(file, "application/xml"))
  executeHttp(httpClient, put) match {
    case Success(answer) => (answer.getStatusLine.getStatusCode, "Successfully uploaded file")
    case Failure(e) => {
      e.printStackTrace()
      (-1, e.getMessage)
    }
  }
}
When I run the method, I see the following error:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:281)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:219)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:633)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:454)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
I do not know what has gone wrong. I'm able to make GET requests, but PUT does not seem to work. Any clues as to where I should look?

Look at the server. If GET works but PUT does not, then you have to investigate the receiving end.
Also, you may want to write a simple HTML file with a form that uses the PUT method, to rule out your Java part.
As a side note: it's technically possible that something in between stops the request from going through, or stops the response from reaching you. It's best to set up a dummy HTTP server to test against, as sketched below.
Maybe it's also a timeout issue, and the server takes too long to process your PUT.
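To make the dummy-server idea concrete, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver package. The port, path, and class name are arbitrary choices for this test harness, not part of the original setup:

import com.sun.net.httpserver.HttpServer;

import java.io.InputStream;
import java.net.InetSocketAddress;

public class DummyPutServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // Log the method and path so you can see whether the PUT arrives at all.
            System.out.println(exchange.getRequestMethod() + " " + exchange.getRequestURI());
            // Drain the request body; a server that never reads the entity can
            // leave the client waiting.
            try (InputStream in = exchange.getRequestBody()) {
                while (in.read() != -1) { /* discard */ }
            }
            exchange.sendResponseHeaders(200, -1); // 200 with an empty body
            exchange.close();
        });
        server.start();
        System.out.println("Dummy server listening on http://localhost:8080/");
    }
}

Point the PUT at http://localhost:8080/ and watch the console: if the request never shows up, the problem is somewhere between the client and the real server.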

The connection you are trying to use is a stale connection and therefore the request is failing.
But why are you only seeing an error for the PUT request and you are not seeing it for the GET request?
If you check the DefaultHttpRequestRetryHandler class you will see that by default HttpClient attempts to automatically recover from I/O exceptions. The default auto-recovery mechanism is limited to just a few exceptions that are known to be safe.
HttpClient will make no attempt to recover from any logical or HTTP protocol errors (those derived from HttpException class).
HttpClient will automatically retry those methods that are assumed to be idempotent: your GET request, but not your PUT request!
HttpClient will automatically retry those methods that fail with a transport exception while the HTTP request is still being transmitted to the target server (i.e. the request has not been fully transmitted to the server).
This is why you don't notice any error with your GET request, because the retry mechanism handles it.
You should define a CustomHttpRequestRetryHandler extending the DefaultHttpRequestRetryHandler. Something like this:
import java.io.IOException;

import org.apache.http.NoHttpResponseException;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.protocol.HttpContext;

public class CustomHttpRequestRetryHandler extends DefaultHttpRequestRetryHandler {
    @Override
    public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
        // Retry when the server drops a stale connection without sending a response.
        if (exception instanceof NoHttpResponseException) {
            return true;
        }
        return super.retryRequest(exception, executionCount, context);
    }
}
Then just assign your CustomHttpRequestRetryHandler:
final HttpClientBuilder httpClientBuilder = HttpClients.custom();
httpClientBuilder.setRetryHandler(new CustomHttpRequestRetryHandler());
And that's it: now your PUT request is handled by your new retry handler (as the GET was by the default one).
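For completeness, a minimal sketch of wiring the retry handler into a PUT, assuming Apache HttpClient 4.3+ (the URL and file name are illustrative):

import java.io.File;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class PutWithRetry {
    public static void main(String[] args) throws Exception {
        // Build a client that retries NoHttpResponseException via the custom handler.
        CloseableHttpClient client = HttpClients.custom()
                .setRetryHandler(new CustomHttpRequestRetryHandler())
                .build();

        HttpPut put = new HttpPut("http://example.com/resource");
        put.setEntity(new FileEntity(new File("data.xml"), ContentType.APPLICATION_XML));

        try (CloseableHttpResponse response = client.execute(put)) {
            System.out.println(response.getStatusLine().getStatusCode());
        }
    }
}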

Related

use HttpObjectAggregator conditionally

Netty version: 4.0.37
I have a requirement for a Netty server which handles both simple JSON requests and large file uploads. HttpObjectAggregator has a limit of 2 GB for the request size, so I would prefer to use the HttpUploadServer example available here.
So I want the pipeline to change conditionally depending on the type of request coming in. If it's a POST request and a multipart request, I want it to be handled by the upload handler and to skip all the remaining handlers. If not, I want it to pass through the HttpObjectAggregator and then be handled by the default handler.
I thought of creating one single pipeline looking like this:
HttpRequestDecoder
HttpContentDecompressor
FileUploadHandler <--- My handler to handle file uploads
HttpObjectAggregator
DefaultHandler <---- My handler to handle normal requests, without file body
And inside the FileUploadHandler, I added the if/else logic like this:
private boolean uploadURL(HttpObject object) {
    HttpRequest request = (HttpRequest) object;
    boolean isMultipart = HttpPostRequestDecoder.isMultipart(request);
    if (request.getMethod().equals(HttpMethod.POST) && isMultipart) {
        // To be handled by the file upload handler
        return true;
    }
    return false;
}

public void channelRead0(ChannelHandlerContext channelHandlerContext,
                         HttpObject object) throws Exception {
    if (!uploadURL(object)) {
        ReferenceCountUtil.retain(object);
        channelHandlerContext.fireChannelRead(object);
    } else {
        // Handle the File Upload
        ....
My objective was to make the UploadHandler pass the message on to HttpObjectAggregator IF it's anything other than a POST multipart request with a file body. However, this isn't working for a GET request, as the request times out after some time for lack of a response.
I don't entirely understand why this is happening, but my guess is that HttpObjectAggregator is not receiving the initial HttpRequest object from my UploadHandler at all, and that it in turn isn't delivering it to the default handler either.
Is my approach wrong? Is there a different way of handling this conditional routing, outside of my Upload Handler?
Can I have any handler before HttpObjectAggregator or should all custom/user handlers come AFTER the HttpObjectAggregator?
I did this by using a Decoder before HttpObjectAggregator. The pipeline looks like:
HttpRequestDecoder
HttpContentDecompressor
RequestURLDecoder <--- New decoder to route requests.
FileUploadHandler <--- My handler to handle file uploads
HttpObjectAggregator
DefaultHandler <---- My handler to handle normal requests, without file body
The new decoder looks at the request and, if it's a POST multipart request, dynamically modifies the pipeline to remove the object aggregator and the default handler. If it's not, it removes the file upload handler instead.
(list.add(ReferenceCountUtil.retain(object)) is very important!)
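For reference, a minimal sketch of what such a routing decoder could look like. It assumes the aggregator, upload handler, and default handler were added to the pipeline under the names "aggregator", "fileUploadHandler", and "defaultHandler"; those names, and the exact removal strategy, are illustrative rather than taken from the original code:

import java.util.List;

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.util.ReferenceCountUtil;

public class RequestURLDecoder extends MessageToMessageDecoder<HttpObject> {
    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            if (request.getMethod().equals(HttpMethod.POST)
                    && HttpPostRequestDecoder.isMultipart(request)) {
                // Multipart upload: bypass aggregation and the default handler.
                ctx.pipeline().remove("aggregator");
                ctx.pipeline().remove("defaultHandler");
            } else {
                // Plain request: let it flow through HttpObjectAggregator.
                ctx.pipeline().remove("fileUploadHandler");
            }
        }
        // Crucial: MessageToMessageDecoder releases the inbound message after
        // decode() returns, so retain an extra reference before passing it on.
        out.add(ReferenceCountUtil.retain(msg));
    }
}

Note that on a keep-alive connection the pipeline stays modified for subsequent requests on the same channel, so a real implementation would need to restore the removed handlers (or close the connection) once the response is written.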

HTTPClient intermittently locking up server

I have a .NET Core 2.2 app which has a controller acting as a proxy to my APIs.
JS makes a fetch to the proxy; the proxy forwards the call on to the APIs and returns the response.
I am experiencing intermittent lock-ups on the proxy app while it's awaiting the response from the HttpClient. When this happens it locks up the entire server, and no more requests will be processed.
According to the logs of the API being proxied to, it is returning fine.
To reproduce this I have to make 100+ requests in a loop on the client through the proxy, then reload the page multiple times while the 100 requests are in flight. It usually takes around 5 attempts before things start slowing down.
The proxy will lock up waiting for an awaited request to resolve. Sometimes it comes back after a 4-5 second delay, other times after a minute. Most of the time I haven't waited longer than 10 minutes before giving up and killing the proxy.
I've distilled the code down to the following block, which reproduces the issue.
I believe I'm following best practices: it's async all the way down, I'm using IHttpClientFactory to enable sharing of HttpClient instances, and I'm wrapping things in using blocks where I believe it is required.
The implementation was based on this: https://github.com/aspnet/AspLabs/tree/master/src/Proxy
I'm hoping I'm making a rather obvious mistake that others with more experience can pinpoint!
Any help would be greatly appreciated.
namespace Controllers
{
    [Route("/proxy")]
    public class ProxyController : Controller
    {
        private readonly IHttpClientFactory _factory;

        public ProxyController(IHttpClientFactory factory)
        {
            _factory = factory ?? throw new ArgumentNullException(nameof(factory));
        }

        [HttpGet]
        [Route("api")]
        public async Task ProxyApi(CancellationToken requestAborted)
        {
            // Build API specific URI
            var uri = new Uri("");

            // Get headers from request
            var headers = Request.Headers.ToDictionary(x => x.Key, y => y.Value);
            headers.Add(HeaderNames.Authorization, $"Bearer {await HttpContext.GetTokenAsync("access_token")}");

            // Build proxy request message. This is within a service
            var message = new HttpRequestMessage();
            foreach (var header in headers)
            {
                message.Headers.Add(header.Key, header.Value.ToArray());
            }
            message.RequestUri = uri;
            message.Headers.Host = uri.Authority;
            message.Method = new HttpMethod(Request.Method);

            requestAborted.ThrowIfCancellationRequested();

            // Generate client and issue request
            using (message)
            using (var client = _factory.CreateClient())
            // **Always hangs here when it does hang**
            using (var result = await client.SendAsync(message, requestAborted).ConfigureAwait(false))
            {
                // Apply data from the proxied response onto our response - again this is within a service
                Response.StatusCode = (int)result.StatusCode;
                foreach (var header in result.Headers)
                {
                    Response.Headers[header.Key] = header.Value.ToArray();
                }

                // SendAsync removes chunking from the response. This removes the header so it doesn't expect a chunked response.
                Response.Headers.Remove("transfer-encoding");

                requestAborted.ThrowIfCancellationRequested();

                // Copy the proxied response body to the outgoing response body.
                using (var responseStream = await result.Content.ReadAsStreamAsync())
                {
                    await responseStream.CopyToAsync(Response.Body, 81920);
                }
            }
        }
    }
}
EDIT
I modified the code to remove the usings and return the proxied response directly as a string instead of streaming it, and I'm still getting the same issues.
When running netstat I do see a lot of entries for the URL of the proxied API.
4 rows mention the IP of the API being proxied to, and probably another 20 rows mention the IP of the proxy site. Those numbers don't seem odd to me, but I don't have much experience using netstat (first time I've ever fired it up).
Also, I have left the proxy running for about 20 minutes and it is technically still alive: responses are coming back, just with a very long delay between the proxied API returning data and the HttpClient resolving. However, it won't service any new requests; they just sit there hanging.

InversifyJS: dependency instantiation per HTTP Request

I'm using Inversify.JS in a project with Express. I would like to create a connection to a Neo4J database, and this process involves two objects:
The driver object: can be shared across the application and created only once.
The session object: each HTTP request should create a session against the driver, whose lifecycle matches the HTTP request's lifecycle (once the request ends, the session is destroyed).
Without Inversify.JS, this problem is solved with a simple pattern:
exports.getSession = function (context) { // 'context' is the http request
  if (context.neo4jSession) {
    return context.neo4jSession;
  }
  else {
    context.neo4jSession = driver.session();
    return context.neo4jSession;
  }
};
(example: https://github.com/neo4j-examples/neo4j-movies-template/blob/master/api/neo4j/dbUtils.js#L13-L21)
To create a static dependency for the driver, I can inject a constant:
container.bind<DbDriver>("DbDriver").toConstantValue(new Neo4JDbDriver());
How can I create a dependency that is instantiated only once per request, and retrieve it from the container?
I suspect I must invoke the container in a middleware like this:
this._express.use((request, response, next) => {
  // get the container and create an instance of the Neo4JSession for the request lifecycle
  next();
});
Thanks in advance.
I see two solutions to your problem:
1. Use inRequestScope() for the DbDriver dependency (available since version 4.5.0). This works if you use a single composition root per HTTP request; in other words, you call container.get() only once per HTTP request.
2. Create a child container, attach it to response.locals._container, and register DbDriver as a singleton:
let appContainer = new Container();
appContainer.bind(SomeDependencySymbol).to(SomeDependencyImpl);

function injectContainerMiddleware(request, response, next) {
  let requestContainer = appContainer.createChildContainer();
  requestContainer.bind<DbDriver>("DbDriver").toConstantValue(new Neo4JDbDriver());
  response.locals._container = requestContainer;
  next();
}
express.use(injectContainerMiddleware); //insert injectContainerMiddleware before any other request handler functions
In this example you can retrieve DbDriver from response.locals._container in any request handler/middleware function registered after injectContainerMiddleware, and you will get the same instance of DbDriver.
This will work, but I'm not sure how performant it is. Additionally, you may need to somehow dispose of requestContainer (unbind all dependencies and remove the reference to the parent container) after the HTTP request is done.

EasyNetQ delayed respond/request resulting in timeout

I've run into a problem with the request/respond pattern of EasyNetQ while using it on our server (Windows Server 2008). I am not able to reproduce it locally at the moment.
The setup is: we have 2 Windows services (running as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This has been working as expected until recently; on the server, the request side does not "consume" the responses until after the request times out.
I have included 2 links to pastebin which contain the console logging of EasyNetQ and will hopefully make my problem a bit clearer.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new foobar2();
_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does; the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10s; I tried setting it to 60s as well, which makes no difference). This is also evident in the logs linked above, as you'll see on the RequestSide, with the 5 or so messages received from the response queue that had previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; that didn't help. I tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I had messed something up. I have also added more context below:
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());
        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });
        x.RunAsLocalSystem();
    });
}
The Topshelf setup has been tested pretty thoroughly and works as intended, and should not really be relevant to the request/respond problem, but I thought I would provide a bit more context.
I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword

Camel aws-s3 not working

I am trying to create a Camel route to transfer a file from an FTP server to AWS S3 storage.
I have written the following route:
private static class MyRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sftp://<<ftp_server_name>>&noop=true&include=<<file_name>>...")
            .process(new Processor() {
                @Override
                public void process(Exchange ex) {
                    System.out.println("Hello");
                }
            })
            .to("aws-s3://my-dev-bucket?accessKey=ABC***********&secretKey=12abc********+**********");
    }
}
The issue is, this gives me the following exception:
Exception in thread "main" org.apache.camel.FailedToCreateRouteException: Failed to create route route1 at: >>> To[aws-s3://my-dev-bucket?accessKey=ABC*******************&secretKey=123abc******************** <<< in route: Route(route1)[[From[sftp://<<ftp-server>>... because of Failed to resolve endpoint: aws-s3://my-dev-bucket?accessKey=ABC***************&secretKey=123abc************** due to: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I then tried to do this the other way around, i.e. writing a method like this:
public void boot() throws Exception {
    // create a Main instance
    main = new Main();
    // enable hangup support so you can press ctrl + c to terminate the JVM
    main.enableHangupSupport();
    // bind MyBean into the registry
    main.bind("foo", new MyBean());
    // add routes
    AWSCredentials awsCredentials = new BasicAWSCredentials("ABC*****************", "123abc*************************");
    AmazonS3 client = new AmazonS3Client(awsCredentials);
    //main.bind("client", client);
    main.addRouteBuilder(new MyRouteBuilder());
    main.run();
}
and invoking it using the bound variable #client. This approach does not give any exceptions, but the file transfer does not work.
To make sure there's nothing wrong with my approach, I tried aws-sqs instead of aws-s3 and that works fine (the file successfully transfers to the SQS queue).
Any idea why this is happening? Is there some basic issue with the "aws-s3" connector for Camel?
Have you tried using the RAW() function to wrap the keys, like RAW(secretKey) or RAW(accessKey)?
It will help you pass your keys as they are, without encoding.
Any plus signs in your secret key need to be URL-encoded as %2B; in your case **********+*********** becomes **********%2B***********.
When you configure Camel endpoints using URIs, the parameter values get URL-encoded by default.
This can be a problem when you want to configure passwords as-is.
To do that, you can tell Camel to use the raw value by enclosing it with RAW(value). See "How do I configure endpoints" in the Camel documentation, which also has an example.
See the Camel documentation.
Your URL should look like:
aws-s3:bucketName?accessKey=RAW(XXXX)&secretKey=RAW(XXXX)
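Applied to the .to(...) endpoint from the route in the question (keys masked as in the original), that would look something like:

.to("aws-s3://my-dev-bucket?accessKey=RAW(ABC***********)&secretKey=RAW(12abc********+**********)")

With RAW(), the plus sign in the secret key is passed through unchanged, so no manual %2B encoding is needed.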