I'm writing a REST API that proxies binary images. I'm trying to use the support recently added to Spring WebFlux for Kotlin coroutines. In a controller I'm making a request to another service which returns a binary image and then streaming that image to the calling client in the response body. I'm using DataBuffer, which I understood would not load the entire response from the other service into memory. But I'm getting the following error:
Exceeded limit on max bytes to buffer : 262144
I've read posts on here that describe how to increase the buffer size, but that raises the question, "does DataBuffer load the entire response body in memory?"
The binary images I'm dealing with could be GBs in size.
Here's the code for my controller.
@GetMapping("/test")
internal suspend fun proxy(
    request: ServerHttpRequest,
    @RequestParam("forwardUrl")
    forwardUrl: String
): ResponseEntity<StreamingResponseBody> {
    val stream = client.get()
        .uri(forwardUrl)
        .awaitExchange()
        .awaitBody<DataBuffer>()
    val responseBody: StreamingResponseBody = StreamingResponseBody { outputStream: OutputStream ->
        stream.asInputStream().transferTo(outputStream)
    }
    return ResponseEntity<StreamingResponseBody>(responseBody, HttpStatus.OK)
}
What is the right way to stream the response body of a WebClient request into the response body of the request my controller is handling, without loading the entire response body in memory?
Thanks.
I need to create an ASP.NET Core 3 Web API that understands this URL
http://myapp.com/MyASPNetCore3WebApi/myController/myWebMethod?user=A0001
and a zip file that is sent as the content. This is the code that calls the API I need to create:
HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(URI);
httpWebRequest.Timeout = -1;
httpWebRequest.KeepAlive = false;
httpWebRequest.Method = "POST";
httpWebRequest.ProtocolVersion = HttpVersion.Version10;
httpWebRequest.ContentType = "application/octet-stream";
httpWebRequest.Accept = "application/octet-stream";
httpWebRequest.ContentLength = data.Length;
Stream requestStream = httpWebRequest.GetRequestStream();
requestStream.Write(data, 0, data.Length);
requestStream.Close();
HttpWebResponse httpWebResponse = (HttpWebResponse)httpWebRequest.GetResponse();
The code above is working fine and is used every day, sending data to a Java web service. Now I am replacing that system with a new one in ASP.NET Core, and I can't change the caller's code; that's why I need to create a Web API that understands that URL.
I have written this code in my Web API, but I guess I am missing something that I can't figure out, because I get an error in the client (code above).
[HttpPost("myWebMethod")]
public FileStreamResult myWebMethod(string user, [FromBody] Stream compress)
{
    byte[] zip = ((MemoryStream)compress).ToArray();
    byte[] data = ZipHelper.Uncompress(zip);
    .....................
}
The error I get in the client is this:
[System.Net.WebException] {"The remote server returned an error: (415)
Unsupported Media Type."} System.Net.WebException
Thanks in advance for any help
If the goal is to read the raw request content, this can be done using the HttpContext controller property. HttpContext has a Request property that provides access to the actual HTTP request.
No additional model properties or controller arguments are needed to access the raw request stream. It's important to note that FromBody and FromForm binding should not be used in this case.
There are a couple of notes regarding the code in the example from the original question.
byte[] zip = ((MemoryStream)compress).ToArray();
byte[] data = ZipHelper.Uncompress(zip);
The HttpContext.Request.Body property does not return a MemoryStream; it returns its own implementation of Stream, which means there is no ToArray method.
When reading the entire content of a request directly into the server's memory, it is better to check the content length, otherwise the client can crash the server by sending a large enough request.
Using *Async methods when reading the content of the request will improve performance.
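Putting those notes together, a sketch of what the action might look like (the return type, the 100 MB limit and the status codes are illustrative; ZipHelper is the helper from the question):
[HttpPost("myWebMethod")]
public async Task<IActionResult> MyWebMethod(string user)
{
    // Check the declared length before buffering anything in memory
    // (the 100 MB cap here is purely illustrative).
    long? length = HttpContext.Request.ContentLength;
    if (length == null || length > 100 * 1024 * 1024)
        return StatusCode(StatusCodes.Status413PayloadTooLarge);

    // Request.Body is not a MemoryStream, so copy it asynchronously into one
    // before calling ToArray(). Note there is no [FromBody] parameter.
    using var buffer = new MemoryStream();
    await HttpContext.Request.Body.CopyToAsync(buffer);

    byte[] zip = buffer.ToArray();
    byte[] data = ZipHelper.Uncompress(zip); // helper from the question
    // .....................
    return Ok();
}
Reading the body this way bypasses the input formatters, which is typically what produces the 415 Unsupported Media Type for an application/octet-stream request bound with [FromBody].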
So I've created a custom filter that, when invoked, creates a WebFlux client and posts to a predetermined URL. This seems to work fine at runtime, but when testing this code the test hangs (until I cancel it). So I suspect a possible memory leak, on top of not being able to complete the test to make sure this route is working properly. If I switch the WebClient method to get(), the corresponding test of the filter passes fine. With post(), I'm not sure what is missing.
@Component
class ProxyGatewayFilterFactory : AbstractGatewayFilterFactory<ProxyGatewayFilterFactory.Params>(Params::class.java) {
    override fun apply(params: Params): GatewayFilter {
        return OrderedGatewayFilter(
            GatewayFilter { exchange, chain ->
                exchange.request.mutate().header("test", "test1").build()
                WebClient.create().post()
                    .uri(params.proxyBasePath)
                    .body(BodyInserters.fromDataBuffers(exchange.request.body))
                    .headers { it.addAll(exchange.request.headers) }
                    .exchange()
                    .flatMap {
                        println("the POST statusCode is " + it.statusCode())
                        Mono.just(it.statusCode().is2xxSuccessful)
                    }
                    .map {
                        exchange.request.mutate().header("test", "test2").build()
                        println("exchange request uri is " + exchange.request.uri)
                        println("exchange response statusCode is " + exchange.response.statusCode)
                        exchange
                    }
                    .flatMap(chain::filter)
            }, params.order)
    }
Taken from the documentation: if you use exchange(), you have an obligation to consume the body.
Unlike retrieve(), when using exchange(), it is the responsibility of the application to consume any response content regardless of the scenario (success, error, unexpected data, etc). Not doing so can cause a memory leak. The Javadoc for ClientResponse lists all the available options for consuming the body. Generally prefer using retrieve() unless you have a good reason for using exchange() which does allow to check the response status and headers before deciding how to or if to consume the response.
Spring Framework 5.2.9 WebClient
This API has been changed in the latest version of the Spring Framework, 5.3.0: Spring will now force you to consume the body, because developers didn't actually read the docs.
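For the filter above, a sketch of what consuming the body could look like while keeping exchange() (releaseBody() drains and discards the content; everything else is the call from the question):
// Sketch only: the same POST as above, with the response body explicitly consumed.
WebClient.create().post()
    .uri(params.proxyBasePath)
    .body(BodyInserters.fromDataBuffers(exchange.request.body))
    .headers { it.addAll(exchange.request.headers) }
    .exchange()
    .flatMap { response ->
        // releaseBody() returns a Mono<Void> that completes once the content
        // has been drained, so the underlying connection can be released.
        response.releaseBody()
            .thenReturn(response.statusCode().is2xxSuccessful)
    }
Alternatively, when only the status is needed, retrieve().toBodilessEntity() consumes and discards the body for you.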
I'm using the Ktor HttpClient(CIO) to make requests against an HTTP API whose response uses chunked transfer encoding.
Is there a way, using the Ktor HttpClient(CIO), to get access to the individual HTTP chunks in an HttpResponse when calling an API that uses chunked transfer encoding?
I guess better late than never:
httpClient.prepareGet("http://localhost:8080/").execute {
    val channel = it.bodyAsChannel()
    while (!channel.isClosedForRead) {
        val chunk = channel.readUTF8Line() ?: break
        println(chunk)
    }
}
The title basically explains itself.
I have a REST endpoint with Vert.x. Upon hitting it, I have some logic which results in an AWS S3 object.
My previous logic was not to upload to S3 but to save the file locally, so I could respond with routerCxt.response().sendFile(file_path...).
Now that the file is in S3, I have to download it locally before I could call the above code.
That is slow and inefficient. I would like to stream S3 object directly to the response object.
In Express, it's something like this: s3.getObject(params).createReadStream().pipe(res);
I read a little bit and saw that Vert.x has a class called Pump, but in the examples it is used by vertx.fileSystem().
I am not sure how to plug the InputStream from S3's getObjectContent() into the vertx.fileSystem() to use Pump.
I am not even sure Pump is the correct way because I tried to use Pump to return a local file, and it didn't work.
router.get("/api/test_download").handler(rc -> {
    rc.response().setChunked(true).endHandler(endHandlr -> rc.response().end());
    vertx.fileSystem().open("/Users/EmptyFiles/empty.json", new OpenOptions(), ares -> {
        AsyncFile file = ares.result();
        Pump pump = Pump.pump(file, rc.response());
        pump.start();
    });
});
Is there any example for me to do that?
Thanks
It can be done if you use the Vert.x WebClient to communicate with S3 instead of the Amazon Java Client.
The WebClient can pipe the content to the HTTP server response:
webClient = WebClient.create(vertx, new WebClientOptions().setDefaultHost("s3-us-west-2.amazonaws.com"));

router.get("/api/test_download").handler(rc -> {
    HttpServerResponse response = rc.response();
    response.setChunked(true);

    webClient.get("/my_bucket/test_download")
        .as(BodyCodec.pipe(response))
        .send(ar -> {
            if (ar.failed()) {
                rc.fail(ar.cause());
            } else {
                // Nothing to do: the content has been sent to the client and response.end() called
            }
        });
});
The trick is to use the pipe body codec.
I'm using Spring 5, Netty and Spring WebFlux to develop an API gateway. Sometimes I want the request to be stopped by the gateway, but I also want to read the body of the request, to log it for example, and return an error to the client.
I'm trying to do this in a WebFilter by subscribing to the body.
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    if (enabled) {
        logger.debug("gateway is enabled. The Request is routed.");
        return chain.filter(exchange);
    } else {
        logger.debug("gateway is disabled. A 404 error is returned.");
        exchange.getRequest().getBody().subscribe();
        exchange.getResponse().setStatusCode(HttpStatus.NOT_FOUND);
        return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().allocateBuffer(0)));
    }
}
When I do this it works when the content of the body is small, but when I have a large body, only the first element of the Flux is read, so I can't get the entire body. Any idea how to do this?
1. Add readBody() to the post route:
builder.routes()
    .route("get_route", r -> r.path("/**")
        .and().method("GET")
        .filters(f -> f.filter(myFilter))
        .uri(myUrl))
    .route("post_route", r -> r.path("/**")
        .and().method("POST")
        .and().readBody(String.class, requestBody -> {return true;})
        .filters(f -> f.filter(myFilter))
        .uri(myUrl))
2. Then you can get the body string in your filter:
String body = exchange.getAttribute("cachedRequestBodyObject");
Advantages:
No blocking.
No need to refill the body for further process.
Works with Spring Boot 2.0.6.RELEASE + Spring Cloud Finchley.SR2 + Spring Cloud Gateway.
The problem here is that you are subscribing manually within the filter, which means you're disconnecting the reading of the request from the rest of the pipeline. Calling subscribe() gives you a Disposable that helps you manage the underlying Subscription.
So you need to connect the whole process as a single pipeline, a bit like this:
Flux<DataBuffer> requestBody = exchange.getRequest().getBody();
// decode the request body as a Mono or a Flux
Mono<String> decodedBody = decodeBody(requestBody);
exchange.getResponse().setStatusCode(HttpStatus.NOT_FOUND);
return decodedBody.doOnNext(s -> logger.info(s))
        .then(exchange.getResponse().setComplete());
Note that decoding the whole request body as a Mono means your gateway will have to buffer the whole request body in memory.
DataBuffer is, on purpose, a low-level type. If you'd like to decode it (i.e. implement the sample decodeBody method) as a String, you can use one of the various Decoder implementations in Spring, like StringDecoder.
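For illustration, one possible shape for that sample decodeBody method, using StringDecoder (the method name comes from the snippet above; treating the body as text/plain is an assumption):
// Illustrative sketch of the decodeBody(...) helper used above.
// Imports assumed: org.springframework.core.ResolvableType,
// org.springframework.core.codec.StringDecoder, org.springframework.util.MimeTypeUtils.
private Mono<String> decodeBody(Flux<DataBuffer> requestBody) {
    return StringDecoder.textPlainOnly()
            .decodeToMono(requestBody,
                    ResolvableType.forClass(String.class),
                    MimeTypeUtils.TEXT_PLAIN,
                    null);
}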
Now because this is a rather large and complex space, you can use and/or take a look at Spring Cloud Gateway, which does just that and way more.