My JAX-RS server is limiting my response size - jax-rs

I have a CXF JAX-RS server whose responses are JSON. Some of my responses are fairly large (more than 1 MB), and whenever a response exceeds roughly 60 kB, my server only transmits 32 kB of it. I want to know whether a JAX-RS server has a maximum size for JSON responses, and if so, how to increase this limit.
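For what it's worth, the JAX-RS spec itself defines no maximum response size; a cap like this usually comes from the servlet container or an intermediary, not from JAX-RS or CXF. One common way to serve large payloads without depending on a single container-side buffer is javax.ws.rs.core.StreamingOutput. A minimal sketch, with a hypothetical resource path and placeholder data:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.StreamingOutput;

@Path("/report")   // hypothetical resource path
public class ReportResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public StreamingOutput largeJson() {
        // Write the JSON directly to the response stream instead of
        // building the whole payload in memory first; the container
        // flushes it in chunks, so no single write has to fit a buffer.
        return output -> {
            output.write("[".getBytes());
            for (int i = 0; i < 100_000; i++) {           // placeholder data
                if (i > 0) output.write(",".getBytes());
                output.write(("{\"id\":" + i + "}").getBytes());
            }
            output.write("]".getBytes());
        };
    }
}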

Related

Streaming & HTTP Clients (General Industry Question)

Apologies if this question is rather basic, but do most HTTP clients support streaming requests and responses by default?
We are onboarding an API management solution (Apigee) that enforces a maximum 10 MB payload size unless streaming is enabled. Our existing API framework allows requests and responses with large JSON payloads, so I'm trying to assess the potential impact on our current API consumers and plan for change management. In general, what are some things to consider?
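In general, many modern HTTP clients can stream but buffer the whole body unless asked otherwise. A minimal sketch with Java's built-in java.net.http.HttpClient (Java 11+), where the body handler decides whether the response is buffered or streamed (the URL is a placeholder):

import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StreamingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/large-payload")) // placeholder URL
                .build();

        // ofInputStream() hands the body back as it arrives instead of
        // buffering it all in memory (contrast with ofString()).
        HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());

        try (InputStream body = response.body()) {
            byte[] chunk = new byte[8192];
            long total = 0;
            int n;
            while ((n = body.read(chunk)) != -1) {
                total += n;   // process each chunk incrementally here
            }
            System.out.println("Streamed " + total + " bytes");
        }
    }
}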

What are the extra 5 bytes in front of the gRPC request and response?

I am using TensorFlow Serving as a deep learning model server; it is a gRPC service. In order to track the server's requests and responses, there is a proxy between the server and the client. The proxy records the full requests and responses at the HTTP level.
The (request, response) tuples need to be human-readable in some way, so I need to translate the gRPC requests and responses to JSON format. Since I have the *.proto files, that doesn't look too hard. But after some tests, I found that the gRPC request and response bodies contain 5 extra (varying) bytes in front of the body proper.
// bytes in the grpc response:
\x00\x00\x00\x00c\nA\n\x07Softmax\x126\x08\x01\x12\x08\x12\x02\x08\x01\x12\x02\x08\n*(\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x12\x1e\n\x07default\x12\x02\x08\x01\x1a\x0fserving_default
// bytes in the raw .pb format:
\nA\n\x07Softmax\x126\x08\x01\x12\x08\x12\x02\x08\x01\x12\x02\x08\n*(\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x12\x1e\n\x07default\x12\x02\x08\x01\x1a\x0fserving_default
You can see the extra five bytes \x00\x00\x00\x00c there. So... what do they mean? Do all gRPC requests and responses carry this extra prefix? Or is there a better way to parse gRPC contents and translate them into a human-readable structure?
gRPC has a 5-byte header. Search for Length-Prefixed-Message in https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md.
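Concretely, per that spec the prefix is a 1-byte compressed flag followed by a 4-byte big-endian message length; in the dump above that is flag \x00 (uncompressed) and length \x00\x00\x00c, i.e. 0x63 = 99 bytes of protobuf payload. A minimal sketch of stripping the prefix in Java before handing the payload to a protobuf parser:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class GrpcFraming {
    // Strips the 5-byte gRPC length prefix and returns the raw protobuf bytes.
    static byte[] unframe(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame).order(ByteOrder.BIG_ENDIAN);
        byte compressedFlag = buf.get();   // 0 = uncompressed, 1 = compressed
        int length = buf.getInt();         // message length, big-endian
        if (compressedFlag != 0) {
            throw new IllegalStateException("compressed message: decompress first");
        }
        return Arrays.copyOfRange(frame, 5, 5 + length);
    }
}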

REST response getting truncated in WAS and IBM HTTP Server 8.5

We have built a REST web service and deployed it on WebSphere Application Server and IBM HTTP Server 8.5.
For some POST requests with fairly large responses (more than 64 KB), we are not getting the complete response data.
The application generates valid JSON, but the JSON is being truncated somewhere along the way. The same request, fired multiple times, gets truncated for only a few of those requests, seemingly at random.
Our analysis shows that whenever we get a truncated response, its size is a multiple of 32 KB; e.g. the actual response may be 105 KB, but we receive only 64 KB or 96 KB of it.
Any idea what the reason could be? Is there a configuration that can help us resolve the issue?
Thanks
Narinder
You may want to increase the size of the write buffer on the Web Container to stop it chunking the writes across multiple threads. The default size of the write buffer is 32 KB, which corresponds to the multiple you are seeing.
To change this setting:
Application servers > -serverName- > Ports > Transport Chain > HttpQueueInboundDefault
Click on the Web Container and set the Write Buffer size to an appropriate value. In most cases you want the buffer to be able to write all (or most) responses in one single write rather than multiple writes.
See also WebSphere Application Server 8.5 tuning

What is the purpose of the MaxReceivedMessageSize on the client-side?

I ran a test against a WCF server where the response from the server exceeds the MaxReceivedMessageSize property defined in the client-side binding object, resulting in a CommunicationException. I examined the request and response using Fiddler: despite exceeding MaxReceivedMessageSize, the entire response is sent to the client.
I believe I am missing the point of this behavior. As I see it, no bandwidth is saved since the data has already been received; the client application could have processed the data, but the client binding discards it before it is handed to the application.
If saving bandwidth is not the purpose of MaxReceivedMessageSize on the client side, what is it for?
The answer is simple: security.
It would indeed be better for bandwidth if your client could say to the server, "oh, by the way, don't bother sending me replies bigger than X bytes", but that is something they didn't implement :-)
And even if it were implemented, what if the server had a bug, or was intentionally misbehaving?
What if the server returned a 2 TB string? Your client would then try to allocate a 2 TB buffer to receive the response and would probably get an OutOfMemoryException. That would bring your client down.
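The same defensive idea can be sketched outside WCF. This hypothetical Java helper caps how much of a response it is willing to buffer and fails fast, rather than letting a misbehaving server exhaust client memory (the 64 MB cap is an arbitrary example, not a WCF default):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BoundedReader {
    static final int MAX_RECEIVED_MESSAGE_SIZE = 64 * 1024 * 1024; // arbitrary example cap

    // Reads at most MAX_RECEIVED_MESSAGE_SIZE bytes, failing fast if exceeded.
    static byte[] readBounded(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            if (out.size() + n > MAX_RECEIVED_MESSAGE_SIZE) {
                // Analogous to WCF's CommunicationException: reject the
                // message instead of allocating an unbounded buffer.
                throw new IOException("response exceeds configured maximum size");
            }
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }
}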

Inefficient transmission of protobuf-net serialized messages

I have a WCF service hosted on Windows Server 2008/IIS, exposing a netTcpBinding endpoint using the DataContractSerializer. This service is consumed by a Windows Forms application, which is suffering from various performance issues. In an effort to reduce the payload transferred from server to client over our corporate network, I decided to integrate the protobuf-net (version r580) serialization engine into some of my service's operations using the ProtoBehavior attribute.
Prior to integrating protobuf-net, the cumulative size of the serialized server responses was approximately 18 MB. Afterwards, it was 1.6 MB, as verified with WCF trace logs on both client and server; unfortunately, this didn't result in decreased loading times in the client application.
After digging into it further, I found that the bytes received by the client over the network, pre-protobuf vs. post-protobuf, differed by only about 1 MB, as reported by a network traffic tool. How can this be? How can payloads differing by more than 16 MB in their serialized form (comprising several messages) produce only a 1 MB difference when transmitted over the network? Could the resulting TCP stream be bloated if the underlying protobuf stream is assembled a certain way?
Additionally, I should note that the 1.6 MB protobuf-net payload comprises several response messages, one of which is approximately 1.25 MB by itself; could that be the issue? Should I work on breaking it into smaller responses? If so, what's the threshold?
I'd appreciate any input, as this has been puzzling me for a couple of weeks now. I've spent hours poring over posts relating to protobuf-net, and while it delivers on its promise of a compact serialization format, I haven't been able to realize the benefits in practice.
Thanks in advance.