I'm using the Ktor HttpClient(CIO) to make requests against an HTTP API whose responses use chunked transfer encoding.
Is there a way, using HttpClient(CIO), to get access to the individual HTTP chunks of an HttpResponse when calling such an API?
I guess better late than never:
httpClient.prepareGet("http://localhost:8080/").execute {
    // read the body incrementally instead of waiting for the full response
    val channel = it.bodyAsChannel()
    while (!channel.isClosedForRead) {
        val chunk = channel.readUTF8Line() ?: break
        println(chunk)
    }
}
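Note that readUTF8Line gives you the decoded body line by line rather than the raw transfer-encoding chunks (the chunked framing is already stripped by the client). If you want the bytes as they arrive instead, a rough, untested sketch over the same channel could look like this:

val buffer = ByteArray(4096)
while (!channel.isClosedForRead) {
    // readAvailable suspends until some bytes arrive; returns -1 at end of body
    val read = channel.readAvailable(buffer, 0, buffer.size)
    if (read == -1) break
    // process buffer[0 until read] here
}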
Related
I have a requirement where I need to forward a request to a different endpoint, adding some extra headers (usually OAuth tokens).
I tried the following, which works for proxying the request:
fun proxy(request: ServerRequest, url: String, customHeaders: HttpHeaders = HttpHeaders.EMPTY): Mono<ServerResponse> {
    val modifiedHeaders = getHeadersWithoutOrigin(request, customHeaders)
    var webClient = clientBuilder.method(request.method()!!)
        .uri(url)
    modifiedHeaders.forEach {
        val list = it.value.iterator().asSequence().toList()
        val ar: Array<String> = list.toTypedArray()
        webClient.header(it.key, *ar)
    }
    return webClient
        .body(request.bodyToMono(), DataBuffer::class.java).exchange()
        .flatMap { clientResponse ->
            ServerResponse.status(clientResponse.statusCode())
                .headers {
                    it.addAll(clientResponse.headers().asHttpHeaders())
                }
                .body(clientResponse.bodyToMono(), DataBuffer::class.java)
        }
}
Incoming requests always hit one proxy endpoint at my server, with the target URL in a header. At the server I read the target URL, add the OAuth tokens, and forward the request to the target URL. In this scenario I do not want to parse the response body at all; I want to send the response downstream as-is.
What is the reactive way to do this?
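One way to keep this fully reactive without ever materialising the body is to pass both the request and response bodies through as raw DataBuffer streams. This is only a sketch built on the names from the question (clientBuilder, getHeadersWithoutOrigin), not verified against a particular Spring version:

fun proxyStreaming(request: ServerRequest, url: String, customHeaders: HttpHeaders = HttpHeaders.EMPTY): Mono<ServerResponse> =
    clientBuilder.method(request.method()!!)
        .uri(url)
        .headers { it.addAll(getHeadersWithoutOrigin(request, customHeaders)) }
        // forward the incoming body without decoding it
        .body(request.bodyToFlux(DataBuffer::class.java), DataBuffer::class.java)
        .exchange()
        .flatMap { clientResponse ->
            ServerResponse.status(clientResponse.statusCode())
                .headers { it.addAll(clientResponse.headers().asHttpHeaders()) }
                // hand the upstream body straight through, buffer by buffer
                .body(clientResponse.bodyToFlux(DataBuffer::class.java), DataBuffer::class.java)
        }

Because the body is only ever seen as a Flux<DataBuffer>, nothing is parsed or aggregated in memory on the way through.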
I'm writing a REST API that proxies binary images. I'm trying to use the support recently added to Spring WebFlux for Kotlin coroutines. In a controller I make a request to another service which returns a binary image, and then stream that image to the calling client in the response body. I'm using DataBuffer, which I understood would not load the entire response from the other service into memory. But I'm getting the following error:
Exceeded limit on max bytes to buffer : 262144
I've read posts on here that describe how to increase the buffer size, but that begs the question: does DataBuffer load the entire response body in memory?
The binary images I'm dealing with could be GBs in size.
Here's the code for my controller.
@GetMapping("/test")
internal suspend fun proxy(
    request: ServerHttpRequest,
    @RequestParam("forwardUrl")
    forwardUrl: String
): ResponseEntity<StreamingResponseBody> {
    val stream = client.get()
        .uri(forwardUrl)
        .awaitExchange()
        .awaitBody<DataBuffer>()
    val responseBody: StreamingResponseBody = StreamingResponseBody { outputStream: OutputStream ->
        stream.asInputStream().transferTo(outputStream)
    }
    return ResponseEntity<StreamingResponseBody>(responseBody, HttpStatus.OK)
}
What is the right way to stream the response body of a WebClient request into the response body of the request being handled by my controller, without loading the entire response body in memory?
Thanks.
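For what it's worth, one way to avoid the aggregation limit entirely is to never collapse the body into a single DataBuffer and instead return it as a Flux<DataBuffer>, which WebFlux writes out buffer by buffer. A minimal sketch, assuming the same client WebClient from the question and not verified against a specific Spring Boot version:

@GetMapping("/test")
fun proxy(@RequestParam("forwardUrl") forwardUrl: String): ResponseEntity<Flux<DataBuffer>> {
    // bodyToFlux does not aggregate, so the in-memory buffering limit never applies
    val body: Flux<DataBuffer> = client.get()
        .uri(forwardUrl)
        .retrieve()
        .bodyToFlux(DataBuffer::class.java)
    return ResponseEntity.ok(body)
}

Note that StreamingResponseBody belongs to Spring MVC rather than WebFlux, so in a reactive controller returning the Flux directly is usually the simpler route.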
I've made a REST API and I'd like to make a POST request to one of its endpoints from my ESP8266, but I can't manage to do so.
The code inside the loop so far:
HTTPClient http; //Declare object of class HTTPClient
http.begin("http://localhost:5000/api/users/5b1e82fb8c620238a85646fc/arduinos/5b243dc666c18a2e10eb4097/data");
http.addHeader("Content-Type", "text/plain");
http.addHeader("Authorization", "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjViMWU4MmZiOGM2MjAyMzhhODU2NDZmYyIsImlhdCI6MTUyOTEwMTc5MiwiZXhwIjoxNTI5MTE2MTkyfQ.2O6knqriuFoEW9C2JQKRlM3D0DNnzqC7e7gpidy3pWU");
http.end();
The problem is that I don't know how to set the body of the request.
It should be JSON with a single key called "value". For instance:
{
    "value": 101
}
Does anyone know how to do it? Also, it's likely that I should use the device's IP address instead of "localhost".
Thanks in advance.
Use the ArduinoJson library here. Then you can build your HTTP body.
#include <ESP8266HTTPClient.h>
#include <ArduinoJson.h>  // ArduinoJson 5.x API

StaticJsonBuffer<300> JSONbuffer;                     //Declaring static JSON buffer
JsonObject& JSONencoder = JSONbuffer.createObject();
JSONencoder["value"] = value_var;

char JSONmessageBuffer[300];
JSONencoder.prettyPrintTo(JSONmessageBuffer, sizeof(JSONmessageBuffer));

HTTPClient http;                                      //Declare object of class HTTPClient
http.begin("API end point here");                    //Specify request destination
http.addHeader("Content-Type", "application/json");  //Specify content-type header
int httpCode = http.POST(JSONmessageBuffer);          //Send the request
String payload = http.getString();                    //Get the response payload
http.end();                                           //Close connection
Then use the above sample code to encapsulate JSON and send it to the API endpoint.
I'm using Spring 5, Netty, and Spring WebFlux to develop an API gateway. Sometimes I want the gateway to stop the request, but I also want to read the body of the request (to log it, for example) and return an error to the client.
I tried to do this in a WebFilter by subscribing to the body.
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    if (enabled) {
        logger.debug("gateway is enabled. The Request is routed.");
        return chain.filter(exchange);
    } else {
        logger.debug("gateway is disabled. A 404 error is returned.");
        exchange.getRequest().getBody().subscribe();
        exchange.getResponse().setStatusCode(HttpStatus.NOT_FOUND);
        return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().allocateBuffer(0)));
    }
}
When I do this it works when the content of the body is small, but when I have a large body, only the first element of the Flux is read, so I can't get the entire body. Any idea how to do this?
1. Add readBody() to the POST route:
builder.routes()
    .route("get_route", r -> r.path("/**")
        .and().method("GET")
        .filters(f -> f.filter(myFilter))
        .uri(myUrl))
    .route("post_route", r -> r.path("/**")
        .and().method("POST")
        .and().readBody(String.class, requestBody -> {return true;})
        .filters(f -> f.filter(myFilter))
        .uri(myUrl))
    .build();
2. Then you can get the body string in your filter:
String body = exchange.getAttribute("cachedRequestBodyObject");
Advantages:
No blocking.
No need to refill the body for further processing.
Works with Spring Boot 2.0.6.RELEASE + Spring Cloud Finchley.SR2 + Spring Cloud Gateway.
The problem here is that you are subscribing manually within the filter, which means you're disconnecting the reading of the request from the rest of the pipeline. Calling subscribe() gives you a Disposable that helps you manage the underlying Subscription.
So you need to connect the whole process as a single pipeline, a bit like this:
Flux<DataBuffer> requestBody = exchange.getRequest().getBody();
// decode the request body as a Mono or a Flux
Mono<String> decodedBody = decodeBody(requestBody);
exchange.getResponse().setStatusCode(HttpStatus.NOT_FOUND);
return decodedBody.doOnNext(s -> logger.info(s))
        .then(exchange.getResponse().setComplete());
Note that decoding the whole request body as a Mono means your gateway will have to buffer the whole request body in memory.
DataBuffer is, on purpose, a low-level type. If you'd like to decode it (i.e. implement the sample decodeBody method) as a String, you can use one of the various Decoder implementations in Spring, like StringDecoder.
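For illustration only, here is a rough Kotlin sketch of what that decodeBody step could look like with StringDecoder; the method name and the MIME type are assumptions carried over from the snippet above:

// collects the request body into a single String using Spring's StringDecoder
private fun decodeBody(body: Flux<DataBuffer>): Mono<String> =
    StringDecoder.allMimeTypes().decodeToMono(
        body,
        ResolvableType.forClass(String::class.java),
        MediaType.TEXT_PLAIN,
        null
    )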
Now because this is a rather large and complex space, you can use and/or take a look at Spring Cloud Gateway, which does just that and way more.
We have a WCF REST service hosted on IIS 7 with .NET Framework 4.5. The client is sending data in GZip compressed format with request headers:
Content-Encoding:gzip
Content-Type: application/xml
But we are getting a Bad Request response from the server if the request is in compressed format. We enabled support for compressed requests by implementing an IHttpModule that filters/modifies incoming requests. From my understanding, this is failing because WCF uses the original Content-Length (that of the compressed data) instead of that of the decompressed data. So here are my questions:
1. Is there any way we can fix this content length issue in IIS 7/.NET 4.5? My HTTP module implementation is given below:
httpApplication.Request.Filter = New GZipStream(httpApplication.Request.Filter, CompressionMode.Decompress)
2. If fixing the content length issue is not possible on the server side, is there any way I can send the original content length from the client with a compressed request? The client-side implementation is as follows:
using (Stream requeststream = serviceRequest.GetRequestStream())
{
    if (useCompression)
    {
        using (GZipStream zipStream = new GZipStream(requeststream, CompressionMode.Compress))
        {
            zipStream.Write(bytes, 0, bytes.Length);
            zipStream.Close();
            requeststream.Close();
        }
        serviceRequest.Headers.Add("Content-Encoding", "gzip");
    }
    else
    {
        requeststream.Write(bytes, 0, bytes.Length);
        requeststream.Close();
    }
}
Check if this can work for you: Compression and the Binary Encoder.
MSDN: Choosing a Message Encoder