How do I tell dotNetRDF to request and accept data from a remote triplestore where the response is encoded using gzip?
Looking at the source code for the LoadGraph method of SparqlHttpProtocolConnector, it doesn't appear to have a mechanism for setting the Accept-Encoding header, nor do I see any logic that would process a Content-Encoding header.
I tried modifying LoadGraph to set Accept-Encoding, and the content then comes back with the right Content-Type and Content-Encoding, but the line of code that determines how to handle the response is
IRdfReader parser = MimeTypesHelper.GetParser(response.ContentType);
and GetParser doesn't have any logic that considers the Content-Encoding.
However, it seems like the pieces are present: there's certainly infrastructure in place to process a gzipped file.
Is there another way to do this that I'm missing, or would this be a new feature request?
Thanks.
You can extend SparqlHttpProtocolConnector and override the ApplyCustomRequestOptions method to set the Accept-Encoding header.
Although MimeTypesHelper does not take the Content-Encoding header of the response into account, you can use the HttpWebRequest.AutomaticDecompression property to enable transparent decompression of the response stream, so the parser only ever sees the decoded data. Again, this can be set in the ApplyCustomRequestOptions method.
So your extension class would be something like this:
public class CompressedSparqlHttpProtocolConnector : SparqlHttpProtocolConnector
{
    // Define appropriate constructors with the parameters you need e.g.
    public CompressedSparqlHttpProtocolConnector(Uri serviceUri)
        : base(serviceUri) { }

    protected override void ApplyCustomRequestOptions(HttpWebRequest request)
    {
        // Request a gzip-encoded response, allowing fallback to identity encoding
        request.Headers[HttpRequestHeader.AcceptEncoding] = "gzip;q=1.0, identity;q=0.5";
        // Enable transparent decompression of the response stream
        request.AutomaticDecompression = DecompressionMethods.GZip;
    }
}
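Usage is then the same as with the base connector. A minimal sketch, assuming a SPARQL Graph Store Protocol endpoint at a placeholder URI:

using System;
using VDS.RDF;

class Program
{
    static void Main()
    {
        // Endpoint and graph URIs are placeholders
        var connector = new CompressedSparqlHttpProtocolConnector(
            new Uri("http://example.org/sparql-graph-store"));

        var g = new Graph();
        // The response stream is transparently decompressed before parsing
        connector.LoadGraph(g, new Uri("http://example.org/graphs/example"));
        Console.WriteLine(g.Triples.Count + " triples loaded");
    }
}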
Netty version: 4.0.37
I have a requirement for a Netty server which handles both simple JSON requests and large file uploads. HttpObjectAggregator has a limit of 2 GB on the request size, so I would prefer to use the HttpUploadServer example available here.
So, I want the pipeline to conditionally change depending on the type of request coming in. If it's a POST request, and it's a Multipart type of request, I want the request to be handled by the Upload handler and I want to skip all the rest of the handlers. If not, I want it to pass through the HttpObjectAggregator and then be handled by the Default handler.
I thought of creating one single pipeline looking like this:
HttpRequestDecoder
HttpContentDecompressor
FileUploadHandler <--- My handler to handle file uploads
HttpObjectAggregator
DefaultHandler <---- My handler to handle normal requests, without file body
And inside the "FileUploadHandler", I added the if else logic like this:
private boolean uploadURL(HttpObject object) {
    HttpRequest request = (HttpRequest) object;
    boolean isMultipart = HttpPostRequestDecoder.isMultipart(request);
    if (request.getMethod().equals(HttpMethod.POST) && isMultipart) {
        // To be handled by the file upload handler
        return true;
    }
    return false;
}

public void channelRead0(ChannelHandlerContext channelHandlerContext,
        HttpObject object) throws Exception {
    if (!uploadURL(object)) {
        ReferenceCountUtil.retain(object);
        channelHandlerContext.fireChannelRead(object);
    } else {
        // Handle the File Upload
        ....
My objective was to make the UploadHandler "pass on" the message to HttpObjectAggregator IF it's anything other than a multipart POST request with a file body. However, this isn't working for a GET request: the request times out after some time for lack of a response.
I don't entirely understand why this is happening, but my guess is that HttpObjectAggregator is not receiving the initial HttpRequest object from my UploadHandler at all, and that, in turn, means it isn't delivered to the DefaultHandler either.
Is my approach wrong? Is there a different way of handling this conditional routing, outside of my Upload Handler?
Can I have any handler before HttpObjectAggregator or should all custom/user handlers come AFTER the HttpObjectAggregator?
I did this by using a Decoder before HttpObjectAggregator. The pipeline looks like:
HttpRequestDecoder
HttpContentDecompressor
RequestURLDecoder <--- New decoder to route requests.
FileUploadHandler <--- My handler to handle file uploads
HttpObjectAggregator
DefaultHandler <---- My handler to handle normal requests, without file body
The new decoder looks at the request and, if it's a multipart POST, dynamically modifies the pipeline to remove the HttpObjectAggregator and the default handler. If it's not, it removes the file upload handler instead.
(Calling list.add(ReferenceCountUtil.retain(object)) in decode() is very important: MessageToMessageDecoder releases the inbound message after decoding, so it must be retained before being passed further down the pipeline!)
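A sketch of what such a routing decoder can look like (the handler names "aggregator", "defaultHandler" and "fileUploadHandler" are assumptions and must match the names used when the pipeline is built):

import java.util.List;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.util.ReferenceCountUtil;

public class RequestURLDecoder extends MessageToMessageDecoder<HttpObject> {

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            ChannelPipeline pipeline = ctx.pipeline();
            if (request.getMethod().equals(HttpMethod.POST)
                    && HttpPostRequestDecoder.isMultipart(request)) {
                // Multipart POST: leave only the upload handler downstream
                remove(pipeline, "aggregator");
                remove(pipeline, "defaultHandler");
            } else {
                // Anything else: bypass the upload handler
                remove(pipeline, "fileUploadHandler");
            }
        }
        // Retain before forwarding: MessageToMessageDecoder releases the
        // original message once decode() returns
        out.add(ReferenceCountUtil.retain(msg));
    }

    private static void remove(ChannelPipeline pipeline, String name) {
        if (pipeline.context(name) != null) {
            pipeline.remove(name);
        }
    }
}

Since the removals permanently change the channel's pipeline, this sketch guards them with context() checks; with HTTP keep-alive, a production version would need to re-add the removed handlers (or close the connection) after each request.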
I need to process the raw request body in an ASP.NET Core MVC controller that has route parameters:
[HttpPut]
[Route("api/foo/{fooId}")]
public async Task Put(string fooId)
{
    reader.Read(Request.Body).ToList();
    await _store.Add("tm", "test", data);
}
but it seems like the model binder has already consumed the request stream by the time it gets to the controller.
If I remove the route parameters, I can then access the request stream (since the framework will no longer process the stream to look for parameters).
How can I specify route parameters and still be able to access the request body, without having to manually parse the request URI, etc.?
I have tried decorating my parameters with [FromRoute] but it had no effect.
Please note I cannot bind the request body to an object and have framework handle the binding, as I am expecting an extremely large payload that needs to be processed in chunks in a custom manner.
There are no other controllers, and no custom middleware, filters, or serializers.
I do not need to process the body several times, only once.
Storing the stream in a temporary memory or file stream is not an option; I simply want to process the request body directly.
How can I get the framework to bind parameters from the URI, query string, etc. but leave the request body to me?
Define this attribute in your code:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        var factories = context.ValueProviderFactories;
        factories.RemoveType<FormValueProviderFactory>();
        factories.RemoveType<JQueryFormValueProviderFactory>();
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}
If you're targeting .NET Core 3, you also need to add
factories.RemoveType<FormFileValueProviderFactory>();
Now decorate your action method with it:
[HttpPut]
[Route("api/foo/{fooId}")]
[DisableFormValueModelBinding]
public async Task Put(string fooId)
{
    reader.Read(Request.Body).ToList();
    await _store.Add("tm", "test", data);
}
The attribute works by removing the value providers that attempt to read the request body, leaving just those which supply values from the route or the query string.
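With form-value binding disabled, the action can then consume the request stream itself. A minimal sketch of reading the body in chunks (ProcessChunkAsync is a hypothetical placeholder for whatever custom processing you need):

[HttpPut]
[Route("api/foo/{fooId}")]
[DisableFormValueModelBinding]
public async Task Put(string fooId)
{
    var buffer = new byte[64 * 1024];
    int bytesRead;
    // Read directly from the request stream, one chunk at a time
    while ((bytesRead = await Request.Body.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        // ProcessChunkAsync is a hypothetical placeholder
        await ProcessChunkAsync(fooId, buffer, bytesRead);
    }
}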
Hat tip to @Tseng for the link Uploading large files with streaming, which defines this attribute.
As I suspected, the root cause was MVC inspecting the request body in order to try to bind route parameters. This is how model binding works by default for any route that is not parameter-less, as per the documentation.
The framework, however, only does this when the request content type is not specified, or when it is form data (multipart or URL-encoded, I assume).
By changing my request Content-Type to anything other than form data (e.g. application/json), I can get the framework to ignore the body unless it is specifically required (e.g. by a [FromBody] parameter). This is an acceptable solution for my case, since I am only interested in accepting JSON payloads with Content-Type application/json.
The implementation of DisableFormValueModelBindingAttribute in Uploading large files with streaming, pointed out by @Tseng, seems to be a better approach, however, so I will look into using that instead for completeness.
I have a new WCF service that some existing clients need to be able to communicate with.
There are some clients that are incorrectly sending the SOAP request with a Content-Type header of 'text/xml; charset=us-ascii'. At the moment, the clients themselves can't be changed.
When they send a request, they get the error message:
HTTP/1.1 415 Cannot process the message because the content type
'text/xml; charset=us-ascii' was not the expected type 'text/xml;
charset=utf-8'
Is there any way to instruct the WCF service to ignore the charset of the Content-Type and assume utf-8?
This may be helpful for you; if not, I will delete the answer. I think it can be used in your context, but I am not quite sure since there are no other details.
A WebContentTypeMapper can be used to override how the content type of an incoming message is interpreted. This is a snippet of the code used in the WCF sample set. In this example it maps the text/javascript content type to JSON and leaves everything else at the default, but you could adjust that to your needs.
If you don't have it already you can download the samples from WCF Samples
This sample is located in ..WF_WCF_Samples\WCF\Extensibility\Ajax\WebContentTypeMapper\CS
using System.ServiceModel.Channels;

namespace Microsoft.Samples.WebContentTypeMapper
{
    public class JsonContentTypeMapper : System.ServiceModel.Channels.WebContentTypeMapper
    {
        public override WebContentFormat GetMessageFormatForContentType(string contentType)
        {
            if (contentType == "text/javascript")
            {
                return WebContentFormat.Json;
            }
            else
            {
                return WebContentFormat.Default;
            }
        }
    }
}
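To plug a mapper like this in from code, assign it to the binding's ContentTypeMapper property. A minimal self-hosting sketch follows (MyService, IMyService and the base address are placeholders, not part of the sample); note that WebContentTypeMapper belongs to WCF's web HTTP (REST) programming model via WebHttpBinding, so for a pure SOAP endpoint a custom message encoder may be needed instead:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;
using Microsoft.Samples.WebContentTypeMapper;

// IMyService and MyService are hypothetical placeholders
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "echo")]
    string Echo(string value);
}

public class MyService : IMyService
{
    public string Echo(string value) { return value; }
}

class HostExample
{
    static void Main()
    {
        // Attach the custom mapper to the binding
        var binding = new WebHttpBinding
        {
            ContentTypeMapper = new JsonContentTypeMapper()
        };

        var host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8000/"));
        var endpoint = host.AddServiceEndpoint(typeof(IMyService), binding, "");
        endpoint.Behaviors.Add(new WebHttpBehavior());
        host.Open();

        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}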
I am using Restlet 2.3 to run REST API test automation.
The new feature has a custom HTTP header to pass a token to the service.
Form headers = (Form) resource.getRequestAttributes().get("org.restlet.http.headers");
if (headers == null) {
    headers = new Form();
    resource.getRequestAttributes().put("org.restlet.http.headers", headers);
}
...
headers.add(key, value);
The code works. Now, however, the custom HTTP header is defined as "Authorization". The above code does not seem to pass that header properly, and there is no ChallengeScheme involved.
I tested this scenario on SoapUI and Postman. Both work.
Does anyone know whether Restlet supports this?
In fact, you can't override standard headers like Authorization with Restlet when doing a request.
If you want to provide a security token, you could use this approach:
String pAccessToken = "some token";
ChallengeResponse challengeResponse = new ChallengeResponse(
new ChallengeScheme("", ""));
challengeResponse.setRawValue(pAccessToken);
clientResource.setChallengeResponse(challengeResponse);
This way you'll have only the token in the Authorization header (with a space at the beginning - so don't forget to trim the value).
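Put together, a minimal sketch (the resource URL and token value are placeholders):

import org.restlet.data.ChallengeResponse;
import org.restlet.data.ChallengeScheme;
import org.restlet.resource.ClientResource;

public class TokenExample {
    public static void main(String[] args) throws Exception {
        ClientResource clientResource = new ClientResource("http://example.org/api/resource");

        // Empty scheme names so that only the raw token ends up in the header
        ChallengeResponse challengeResponse = new ChallengeResponse(
                new ChallengeScheme("", ""));
        challengeResponse.setRawValue("some token");
        clientResource.setChallengeResponse(challengeResponse);

        // Sends "Authorization: <space>some token"; the server should trim the value
        System.out.println(clientResource.get().getText());
    }
}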
I am developing a RESTEasy client to connect to a 3rd-party REST service which has defined its own custom media types. A made-up example is
application/vnd.abc.thirdPartyThing-v1+json
Note the uppercase P in thirdParty.
I am using RESTEasy 3.0.11 for my client implementation. At the point where I make a POST call to the service my code looks like
Response response = target.request()
        .post(Entity.<ThirdPartyThing>entity(
                thing,
                "application/vnd.abc.thirdPartyThing-v1+json"));
but RESTEasy sends to the server
Content-Type: application/vnd.abc.thirdpartything-v1+json
This is due to RESTEasy's MediaTypeHeaderDelegate class, whose toString() method lowercases the type and subtype. That should be correct, or at least unimportant, as RFC 1341 states that Content-Type values are case-insensitive.
Unfortunately, the 3rd party service checks the Content-Type in a case-sensitive manner and so returns a 415 Unsupported Media Type error. I've tested using curl, which doesn't alter the Content-Type value, and confirmed that it is a case issue: application/vnd.abc.thirdPartyThing-v1+json works, application/vnd.abc.thirdpartything-v1+json does not.
I'm in the process of raising a ticket, but in the meantime is there any way to override RESTEasy's default behaviour and send Content-Type headers without lowercasing the value?
Thanks for reading.
I could reproduce this behavior with RESTEasy 3.0.6.Final and would not expect it. Maybe you could check their JIRA to see whether this has already been discussed, or open an issue. I once had problems on the server side because a 2.x version of RESTEasy was checking the charset attribute of the Content-Type header case-sensitively. This was also changed.
You could solve this problem with a really ugly workaround: overwrite the header again in a ClientRequestFilter.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;

public class ContentTypeFilter implements ClientRequestFilter {

    private final Map<String, String> contentTypes;

    public ContentTypeFilter() {
        contentTypes = new HashMap<>();
        // Map the lowercased value to the exact casing the server expects
        contentTypes.put("text/foo", "text/Foo");
    }

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        String contentType = requestContext.getHeaderString("Content-Type");
        if (contentTypes.containsKey(contentType)) {
            requestContext.getHeaders().putSingle("Content-Type", contentTypes.get(contentType));
        }
    }
}
Don't forget to register this filter:
Client client = ClientBuilder.newClient().register(ContentTypeFilter.class);