My Application has 2 MessageBodyWriters:
MBW1 produces something/concrete; qs=0.6
MBW2 produces */*; qs=0.01
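For reference, here is a minimal sketch of how such a writer might be declared, with qs given as a media type parameter of @Produces. The class name, the entity handling and the something/concrete media type are placeholders; MBW2 would look the same but with @Produces("*/*;qs=0.01"):

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

// Corresponds to MBW1 above (illustrative only).
@Provider
@Produces("something/concrete;qs=0.6")
public class Mbw1 implements MessageBodyWriter<Object> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return true; // placeholder: claim to handle any entity class
    }

    @Override
    public long getSize(Object entity, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return -1; // length not known in advance
    }

    @Override
    public void writeTo(Object entity, Class<?> type, Type genericType, Annotation[] annotations,
                        MediaType mediaType, MultivaluedMap<String, Object> httpHeaders,
                        OutputStream entityStream) throws IOException {
        entityStream.write(entity.toString().getBytes(StandardCharsets.UTF_8));
    }
}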
RestEasy correctly uses MBW1 for the following Accept header values:
1. something/unsupported, */*
2. something/unsupported,*/*; q=1
3. something/unsupported;q=0.9,*/*;q=.9
4. */*
5. */*; q=0.1
However it chooses MBW2 for the following Accept header value:
6. something/unsupported,*/*; q=0.99
I would like MBW2 to be chosen only when no other MBW can provide something acceptable to the client. However, it seems that MBW2 is chosen even if the concrete type produced by MBW1 would indeed be acceptable to the client. I don't see the rationale for RestEasy returning something/concrete with headers 4 and 5 but not when the client adds an additional unsupported format (header 6). Is this an issue of the JAX-RS spec or of RestEasy?
Following section 3.8 of the JAX-RS specification, and assuming the JAX-RS resource method and class have no @Produces annotation and that both MessageBodyWriters support the class of the returned entity object, in step 2 we set
P = {"something/concrete; qs=0.6", "*/*; qs=0.01"}
and in step 4 we set
A = {"something/unsupported","*/*; q=0.99"}
by combining the values from P and A in step 5 we set
M = {"something/unsupported; qs=0.01", "something/concrete; q=0.99; qs=0.6", "*/*; q=0.99; qs=0.01"}
In step 7 M is sorted as follows, given that something/unsupported has an implicit q value of 1
M = {"something/unsupported; q=1; qs=0.01", "something/concrete; q=0.99; qs=0.6", "*/*; q=0.99; qs=0.01"}
In step 8 something/unsupported would be selected as the MediaType of the response and consequently MBW2 would be chosen.
So it seems to be an issue of the JAX-RS spec, as qs is not used to indicate quality in its own right but only as a tertiary sorting factor when the q-values are identical.
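To make the ordering concrete, here is a minimal sketch (not RestEasy's actual code) that models the three entries of M and sorts them using the spec's keys of specificity first, then q, then qs. It prints something/unsupported first, which is why MBW2 ends up handling the response:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortSketch {

    record Entry(String type, double q, double qs) {}

    // concrete type > subtype wildcard > */*
    static int specificity(String type) {
        if (type.equals("*/*")) return 0;
        if (type.endsWith("/*")) return 1;
        return 2;
    }

    public static void main(String[] args) {
        List<Entry> m = new ArrayList<>(List.of(
                new Entry("something/unsupported", 1.0, 0.01),
                new Entry("something/concrete", 0.99, 0.6),
                new Entry("*/*", 0.99, 0.01)));

        m.sort(Comparator.comparingInt((Entry e) -> specificity(e.type())).reversed()
                .thenComparing(Comparator.comparingDouble(Entry::q).reversed())
                .thenComparing(Comparator.comparingDouble(Entry::qs).reversed()));

        // something/unsupported sorts first: both concrete types tie on specificity,
        // its q of 1 beats 0.99, and qs is never reached as a tie-breaker.
        m.forEach(e -> System.out.println(e.type() + "; q=" + e.q() + "; qs=" + e.qs()));
    }
}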
Here's a proposed change to JAX-RS to allow for more flexible content negotiation: https://github.com/factsmission/jax-rs-spec/
Hello people of StackOverflow,
I am currently working on a game engine using the Vulkan graphics API. In the past I just set anti-aliasing to the maximum it could be, but today I was trying to turn it off (to improve performance on weaker systems). To do this I tried to set the MSAA sample count in my engine to VK_SAMPLE_COUNT_1_BIT; however, this produced the validation error:
Validation Error: [ VUID-VkSubpassDescription-pResolveAttachments-00848 ] Object 0: handle = 0x55aaa6e32828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xfad6c3cb | ValidateCreateRenderPass(): Subpass 0 requests multisample resolve from attachment 0 which has VK_SAMPLE_COUNT_1_BIT. The Vulkan spec states: If pResolveAttachments is not NULL, for each resolve attachment that is not VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have a sample count of VK_SAMPLE_COUNT_1_BIT (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSubpassDescription-pResolveAttachments-00848)
I can work around this problem relatively easily, so it isn't really an issue for me; however, I was wondering why exactly this limit is put in place. If I want to set the MSAA sample count to 1, why can't I?
Thanks,
sckzor
A sample count of 1 means "not a multisampled image", and if you're doing a multisample resolve, resolving from a non-multisampled image doesn't make sense. That is also why you can't use such images for other things that expect a multisampled image (you can't use an MS-style sampler or texture function on them).
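The usual workaround is simply not to fill pResolveAttachments when the sample count is 1, since a single-sampled color attachment has nothing to resolve. Below is a minimal sketch of that idea using LWJGL's Java Vulkan bindings (the helper name and parameters are illustrative; the same structure applies to a C/C++ engine):

import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkAttachmentReference;
import org.lwjgl.vulkan.VkSubpassDescription;

import static org.lwjgl.vulkan.VK10.VK_PIPELINE_BIND_POINT_GRAPHICS;
import static org.lwjgl.vulkan.VK10.VK_SAMPLE_COUNT_1_BIT;

final class SubpassSketch {

    // Build the subpass description; only request a resolve when the color
    // attachment is actually multisampled. With VK_SAMPLE_COUNT_1_BIT there is
    // nothing to resolve, which is exactly what the validation layer complains about.
    static VkSubpassDescription.Buffer buildSubpass(MemoryStack stack,
                                                    VkAttachmentReference.Buffer colorRef,
                                                    VkAttachmentReference.Buffer resolveRef,
                                                    int msaaSamples) {
        VkSubpassDescription.Buffer subpass = VkSubpassDescription.calloc(1, stack)
                .pipelineBindPoint(VK_PIPELINE_BIND_POINT_GRAPHICS)
                .colorAttachmentCount(1)
                .pColorAttachments(colorRef);
        if (msaaSamples != VK_SAMPLE_COUNT_1_BIT) {
            subpass.pResolveAttachments(resolveRef);
        }
        return subpass;
    }
}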
I've just started to implement the binary protocol API for OrientDB in C++. The OrientDB version in use is "orientdb-community-2.2.29", on Windows 10 x64 with Java 1.8. When I try to query "select * from XXXX" on the example DB, server-side exceptions are thrown and no record is serialized to the client. Here are the logs after a successful connection and query:
2017-12-03 14:14:12:561 INFO {db=Site} /0:0:0:0:0:0:0:1:2520 - Writing bytes (4+0=4 bytes): null [OChannelBinaryServer]$ANSI{green {db=Site}} Error on unmarshalling record #73:0 (java.lang.NullPointerException)
java.lang.NullPointerException
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.getRecordBytes(ONetworkProtocolBinary.java:2894)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.writeRecord(ONetworkProtocolBinary.java:2907)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.writeIdentifiable(ONetworkProtocolBinary.java:2697)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.serializeValue(ONetworkProtocolBinary.java:1639)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.command(ONetworkProtocolBinary.java:1584)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.executeRequest(ONetworkProtocolBinary.java:660)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.sessionRequest(ONetworkProtocolBinary.java:394)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.execute(ONetworkProtocolBinary.java:217)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:81)
2017-12-03 14:14:12:561 WARNI {db=Site} Cannot serialize record: XXXX#73:0{Name:[2],IDs:[1]} v3 [ONetworkProtocolBinary]
Before writing the "null" bytes, the record ID, position and record version are serialized and received correctly on the client side; querying from Studio or the console also works like a charm. I've tried to change the class property to STRING or EMBEDDEDMAP, with the same problem.
Thanks in advance for help :-)
Fortunately I found the mistake on my own: the wrong SerializationImpl was configured. The correct configuration must be ORecordSerializerBinary, not ONetworkProtocolBinary.
I am stuck at a point where I am configuring the DCM module; the current parameter I am trying to configure is DcmTimStrP2AdjustServer.
The requirement is P2CAN_SERVER_MAX = 25ms; P2STARCAN_SERVER_MAX = 5000ms;
Is DcmDspSessionP2ServerMax the same as P2CAN_SERVER_MAX? And if it is the same,
what is the need for DcmTimStrP2AdjustServer, and how do I find the best value for it? (The values should all be multiples of DcmTaskTime, which I find logical.)
DcmTaskTime = 5ms;
I am following AUTOSAR 4.0.3, using the ETAS tool to configure the parameters.
To fulfill your requirement, you need to configure DcmDspSessionP2ServerMax and DcmDspSessionP2StarServerMax respectively for each session control in the DcmDspSessionRows at Dcm/DcmConfigSet/DcmDsp/DcmDspSession/.
i.e.
DcmDspSessionP2ServerMax 25
DcmDspSessionP2StarServerMax 5000
There is no DcmTimStrP2AdjustServer; I guess you're referring to DcmTimStrP2ServerAdjust instead. DcmTimStrP2ServerAdjust and DcmTimStrP2StarServerAdjust should be configured as multiples of your DcmTaskTime (5 ms in your case, so 5 ms, 10 ms, 15 ms, ... are applicable) and are used to safeguard that the response is available on the bus before the P2 or P2* timeout is triggered. In your case you may want to set these values to the same values as in the DcmDspSessionRows if there is no other specification given, because the timeout values chosen there are already multiples of your DcmTaskTime:
DcmTimStrP2ServerAdjust 25
DcmTimStrP2StarServerAdjust 5000
The adjust value is an internal value used to account for the delay between the Dcm transmit request and the message actually being on the bus.
The definition of P2ServerMax and P2*ServerMax and their corresponding Adjust values is the same:
This parameter is used to guarantee that the diagnostic response is available on the bus before reaching P2 by adjusting the current DcmDspSessionP2ServerMax. This parameter mainly represents the software architecture dependent communication delay between the time the transmission is initiated by DCM and the time when the message is actually transmitted to the bus
I am implementing a module that supports 206 partial requests.
After reading RFC 2616, I noticed that when receiving a multi-range request, overlapping ranges such as "a-b, a-d" are not allowed.
My question is:
What happens with single-range request and overlapping bytes?
Request #1: a-b
Request #2: a-d
Do I need to ignore bytes a-b in the second request?
OR
Do I need to overwrite the bytes?
Thanks
Overwrite the bytes.
Responses do not depend semantically on any previous request, because HTTP is a stateless protocol.
Following up on this GWT-RPC question (and answer #1) regarding field size checking, I would like to know the right way to check, pre-deserialization, the maximum data size sent to the server, something like "if request data size > X then abort the request". Valuing simplicity, and based on the answer to the aforementioned question, I am inclined to believe that checking the maximum overall request size would suffice and that finer-grained checks (i.e., field-level checks) could be deferred to post-deserialization, but I am open to any best-practice suggestion.
Tech stack of interest: GWT-RPC client-server communication with Apache-Tomcat front-end web-server.
I suppose a first step would be to globally limit the size of any request (LimitRequestBody in httpd.conf and/or others?).
Are there finer-grained checks, like something that can be set per RPC request? If so, where and how? How much security value do finer-grained checks bring over one global setting?
To frame the question more specifically with an example, let's suppose we have the two following RPC request signatures on the same servlet:
public void rpc1(A a, B b) throws MyException;
public void rpc2(C c, D d) throws MyException;
Suppose I approximately know the following max sizes:
a: 10 kB
b: 40 kB
c: 1 MB
d: 1 kB
Then I expect the following max sizes:
rpc1: 50 kB
rpc2: 1 MB
In the context of this example, my questions are:
Where/how to configure the max size of any request -- i.e., 1 MB in my above example? I believe it is LimitRequestBody in httpd.conf but not 100% sure whether it is the only parameter for this purpose.
If possible, where/how to configure max size per servlet -- i.e., max size of any rpc in my servlet is 1 MB?
If possible, where/how to configure/check max size per rpc request -- i.e., max rpc1 size is 50 kB and max rpc2 size is 1 MB?
If possible, where/how to configure/check max size per rpc request argument -- i.e., a is 10 kB, b is 40 kB, c is 1 MB, and d is 1 kB. I suspect it makes practical sense to do post-deserialization, doesn't it?
For practical purposes, based on cost/benefit, what level of pre-deserialization checking is generally recommended: 1. global, 2. servlet, 3. rpc, 4. object-argument? Stated differently, what is roughly the cost-complexity on one hand and the added value on the other of each of the above pre-deserialization checks?
Thanks much in advance.
Based on what I have learned since asking the question, my own answer and strategy, until someone can show me better, is:
First line of defense and check is Apache's LimitRequestBody set in httpd.conf. It is the overall max for all rpc calls across all servlets.
Second line of defense is a servlet-level pre-deserialization check, done by overriding GWT's AbstractRemoteServiceServlet.readContent. For instance, one could do it as shown further below, I suppose. This was the heart of what I was fishing for in this question.
Then one can further check each rpc call argument post-deserialization. One could conveniently use JSR 303 validation on both the server and the client side -- see the StackOverflow and GWT references regarding the client side.
Example on how to override AbstractRemoteServiceServlet.readContent:
@Override
protected String readContent(HttpServletRequest request) throws ServletException, IOException
{
final int contentLength = request.getContentLength();
// _maxRequestSize should be large enough to be applicable to all rpc calls within this servlet.
if (contentLength > _maxRequestSize)
throw new IOException("Request too large");
final String requestPayload = super.readContent(request);
return requestPayload;
}
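For the post-deserialization argument checks mentioned above, a minimal JSR 303 sketch could look as follows. The DTO, field name and size limit are illustrative, and a Bean Validation implementation such as Hibernate Validator is assumed to be on the classpath:

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Size;

public class ValidationSketch {

    // Illustrative DTO with a 10 kB limit, roughly matching argument "a" in the example above.
    public static class A {
        @Size(max = 10 * 1024, message = "payload exceeds 10 kB")
        private final String payload;

        public A(String payload) { this.payload = payload; }
    }

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        Set<ConstraintViolation<A>> violations = validator.validate(new A("small payload"));
        if (!violations.isEmpty()) {
            // In the real rpc implementation one would typically throw MyException here.
            throw new IllegalArgumentException(violations.iterator().next().getMessage());
        }
    }
}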
See this question in case the max request size is > 2 GB.
From a security perspective, this strategy seems quite reasonable to me for controlling the size of the data users send to the server.