Setting the HTTP Accept header for JsonRestStore - dojo

I'm using JsonRestStore but would like to add a custom Accept header to it. What's the best way to go about this?
This is similar to how the dijit.layout.ContentPane allows you to affect the underlying XHR by setting ioArgs. So the question could be "what is JsonRestStore's ioArgs?"
I'm using declarative syntax, but would be glad to see both methods...
(Please note: I'm not interested in hacking my way around this by modifying the base XHR.)

Your best bet is providing a custom service to JsonRestStore. The easiest way I found to do this is to build the service from dojox.rpc.Rest. In the constructor you can provide a function that creates the request arguments for all XHR requests. E.g.
// getRequest builds the arguments for every XHR the service issues
function getRequest(id, args) {
    return {
        url: '/service/' + id,
        handleAs: 'json',
        sync: false,
        headers: {
            Accept: 'your custom Accept value'
        }
    };
}
var service = new dojox.rpc.Rest('/service/', true /*isJson*/,
    undefined /*schema*/, getRequest);
var store = new dojox.data.JsonRestStore({ service: service });
Note that this completely ignores the args parameter, which can carry sorting and range arguments for your service.
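If you do need those, here is a minimal sketch of a getRequest that keeps the custom Accept header but still honors range and sort, modeled on the default service._getRequest in dojox/rpc/Rest.js (linked below); the exact args fields used here (start, count, sort) are taken from that source and may differ in your Dojo version:

function getRequest(id, args) {
    args = args || {};
    var query = id == null ? '' : id;
    // The default implementation encodes args.sort as sort(+attr,-attr)
    if (args.sort) {
        query += (String(query).match(/\?/) ? '&' : '?') + 'sort(';
        for (var i = 0; i < args.sort.length; i++) {
            var s = args.sort[i];
            query += (i > 0 ? ',' : '') + (s.descending ? '-' : '+') +
                encodeURIComponent(s.attribute);
        }
        query += ')';
    }
    var request = {
        url: '/service/' + query,
        handleAs: 'json',
        sync: false,
        headers: {
            Accept: 'your custom Accept value'
        }
    };
    // The default implementation maps start/count to a Range header
    if (args.start >= 0 || args.count >= 0) {
        request.headers.Range = 'items=' + (args.start || 0) + '-' +
            (('count' in args && args.count != Infinity) ?
                (args.count + (args.start || 0) - 1) : '');
    }
    return request;
}

Pass this getRequest to the dojox.rpc.Rest constructor exactly as in the snippet above.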
These links will provide more information:
Use Dojo's JsonRestStore with your REST services: IBM developerWorks article with a more advanced and customizable solution
RESTful JSON + Dojo Data: Sitepen post
dojox.rpc.Rest source file (look for service._getRequest)

Related

How to pass request headers set using headers option for ServerSideEvent (If Platform is browser) in React js

I have learned that the npm package @microsoft/signalr provides an option to pass custom headers to the httpClient used to make SSE calls in JavaScript (the headers option in withUrl).
But I found a difference in the code (git code): the same custom header isn't forwarded if the request comes from a browser or WebWorker; otherwise it is forwarded (git code).
I would like to understand: is there a security reason for not forwarding the header? If so, is there a way to get it working, i.e., to set a custom header on HTTP requests when the transport type is SSE (Server-Sent Events)?
The reason is that browsers do not support sending custom headers with EventSource:
https://developer.mozilla.org/en-US/docs/Web/API/EventSource/EventSource
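For reference, a minimal sketch of why the native API cannot carry a header: the EventSource constructor's init dictionary only accepts a withCredentials flag, so there is no place to attach one.

// Native EventSource: the second argument only supports withCredentials,
// so custom values have to travel via cookies or the URL instead.
const es = new EventSource('/chat', { withCredentials: true });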
Answering my question for future readers.
I got it working for my requirement, which is to pass custom headers to all the SignalR calls irrespective of transport type, starting with the negotiate call.
I was able to send the custom header using the headers option when calling hubConnectionBuilder.withUrl(url, options) (have given a detailed answer here).
To the point:
For SSE, as mentioned by Brennan, we can't set a custom header with the native EventSource constructor, but I achieved it using an EventSource polyfill from this package (npm package).
Two points to note if you are using SignalR and want to achieve the same:
By default SignalR uses the native EventSource, but there is an EventSource property we can set in the same options parameter of withUrl.
Wrap the polyfill constructor and add the custom headers.
import { EventSourcePolyfill } from 'event-source-polyfill';

// Wrap the polyfill so every SSE request carries the custom header
function EventSourceWithCustomHeader(url, options) {
    options = options || {};
    return new EventSourcePolyfill(url, {
        ...options,
        headers: {
            ...options.headers,
            "custom-header-name": "value"
        }
    });
}

const conn = new signalR.HubConnectionBuilder()
    .withUrl("/chat", {
        // Sent with the negotiate call and non-SSE transports
        headers: {
            "custom-header-name": "value"
        },
        // Tell SignalR to use the wrapped polyfill instead of the native EventSource
        EventSource: EventSourceWithCustomHeader,
    })
    .build();

Enabling binary media types breaks Option POST call (CORS) in AWS Lambda

New to AWS..
We have a .NET Core microservice running serverless on AWS as Lambda functions.
Our controller looks like this:
[Route("api/[controller]")]
[ApiController]
public class SomeController : ControllerBase
{
[HttpGet()]
[Route("getsomedoc")]
public async Task<IActionResult> GetSomeDoc()
{
byte[] content;
//UI needs this to process the document
var contentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
contentDisposition.FileName = "File Name";
Response.Headers[HeaderNames.ContentDisposition] = contentDisposition.ToString();
return File(content, "application/octet-stream");
}
[HttpPost()]
[Route("somepost")]
public async Task<IActionResult> SomePost()
{
return null;
}
}
URLs:
{{URL}}/getsomedoc
{{URL}}/somepost
We have enabled 'Binary Media Types' (*/*) in the API Gateway settings so that getsomedoc works; otherwise it returned the byte array instead of the file.
But this breaks our 'somepost' call when the UI accesses the API using
Method: OPTIONS with Access-Control-Request-Method: POST (the CORS preflight)
When we remove the binary media type, 'somepost' starts working again.
Looking for suggestions as to why this might be happening, and what we can add/remove from the gateway to fix it.
Well we ended up resolving this in a strange way.
We added two API Gateways for the Lambda:
- binary media types enabled on one of them
- disabled on the other
Then:
getsomedoc - uses the gateway where binary media types are enabled
somepost - uses the other one
Wish there was a better way!!
I have found this same behavior with my API. While looking everywhere for some help, I found a few things that address the issue:
Basically, this bug report says the problem is having CORS enabled while also using the generic binary media type "*/*"; apparently the OPTIONS method gets confused by this. They discuss it in the context of the Serverless Framework, but it should apply to the console or other ways of interacting with AWS.
They link to a possible solution: you can modify the Integration Response of the OPTIONS method - change the Mapping Template's Content-Type to an actual binary media type, like image/jpeg. They say this allows you to leave the binary media type in Settings as "*/*". This is a little hacky, but at least it is something.
There was also this alternative suggestion in the issues section of this GitHub repo that is a little less hacky: you can set the contentHandling parameter of the OPTIONS Integration Request to "CONVERT_TO_TEXT", but you can only do this via CloudFormation or the CLI (not via the console). This is also the solution recommended by some AWS technicians.
Another possible workaround is to set up a custom Lambda function to handle the OPTIONS request; this way the API Gateway can keep the "*/*" binary media type.
Create a new Lambda function for handling OPTIONS requests:
exports.handler = async (event) => {
    // Static CORS response for the preflight request
    const response = {
        statusCode: 200,
        headers: {
            'access-control-allow-origin': '*',
            'Access-Control-Allow-Headers': 'access-control-allow-origin, content-type, access-control-allow-methods',
            'Access-Control-Allow-Methods': "GET,POST,PUT,DELETE,OPTIONS"
        },
        body: JSON.stringify("OK")
    };
    return response;
};
In your API Gateway OPTION method, change the integration type from Mock to Lambda Function.
Make sure to check 'Use Lambda proxy integration'
Select the correct region and point to the created Lambda Function
This way any OPTIONS request made from the browser will trigger the Lambda function and return the custom response.
Be aware that this solution might incur additional costs, since the OPTIONS Lambda is invoked for every preflight request.

changing meteor restivus PUT to implement upsert

I'm using Restivus with Meteor and would like to change the PUT semantics to an upsert.
// config rest endpoints
Restivus.configure({
    useAuth: false,
    prettyJson: false
});

Restivus.addCollection("sensor", {
    excludedEndpoints: ['getAll', 'deleteAll', 'delete'],
    defaultOptions: {},
});
How does one do this?
Right now, the only way to do this would be to provide a custom PUT endpoint on each collection route:
Restivus.addCollection(Sensors, {
    excludedEndpoints: ['getAll', 'deleteAll', 'delete'],
    endpoints: {
        put: function () {
            var entityIsUpdated = Sensors.upsert(this.urlParams.id, this.bodyParams);
            if (entityIsUpdated) {
                var entity = Sensors.findOne(this.urlParams.id);
                return {status: "success", data: entity};
            } else {
                return {
                    statusCode: 404,
                    body: {status: "fail", message: "Sensor not found"}
                };
            }
        }
    }
});
The goal with Restivus is to provide the best REST practices by default, and enough flexibility to allow the user to override it with custom behavior where they desire. The proper RESTful behavior of PUT is to completely replace the entity with a given ID. It should never generate a new entity (that's what POST is for). For collections, Restivus will only allow you to define a PUT on a specific entity. In your example, an endpoint is generated for PUT /api/sensors/:id. If you aren't doing the PUT by :id, then you should probably be using POST instead (there's no "right way" to do this in REST, but at least you can POST without requiring an :id).
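For illustration, here is a rough sketch of that POST-based route (it reuses the endpoints option from the PUT example above and assumes, purely as a convention for this example, that clients may send an _id in the body when they want an upsert):

Restivus.addCollection(Sensors, {
    excludedEndpoints: ['getAll', 'deleteAll', 'delete'],
    endpoints: {
        post: function () {
            // Hypothetical convention: upsert when the body carries an _id,
            // otherwise insert a brand new sensor
            var id = this.bodyParams._id;
            var doc = _.omit(this.bodyParams, '_id');
            if (id) {
                Sensors.upsert(id, {$set: doc});
            } else {
                id = Sensors.insert(doc);
            }
            return {status: "success", data: Sensors.findOne(id)};
        }
    }
});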
It sounds like what you want is a way to override the default behavior of the collections endpoints. That is extremely doable, but it would help me if you would make a feature request via the Restivus GitHub Issues so I can better track it. You can literally copy and paste your question from here. I'll make sure I add a way for you to access the collection in the context of any collection endpoints you define.
Last, but certainly not least, I noticed you are using v0.6.0, which needs to be updated to 0.6.1 immediately to fix an existing bug which prevents you from adding existing collections or using any collections created in Restivus anywhere else. That wasn't the intended behavior, and an update has been released. Check out the docs for more on that.

How to configure Stormpath as middleware in Sails.js

What is the best way to implement the following code in Sails.js v0.10.5? Should I be handling this with a policy, and if so, how? Stormpath's init() function requires the Express app as an argument. Currently, I am using the following code in config/http.js as custom middleware.
customMiddleware: function(app) {
    var stormpathMiddleware = require('express-stormpath').init(app, {
        apiKeyFile: '',
        application: '',
        secretKey: ''
    });
    app.use(stormpathMiddleware);
}
Yes, this is the preferred way to enable custom Express middleware in Sails when it does more than just handle a request (as in your case, where .init requires app). For simpler cases where you want custom middleware that just handles requests, you can add the handler to sails.config.http.middleware and add the handler's name to the sails.config.http.middleware.order array. See the commented-out defaults in config/http.js for an example using myRequestLogger.
Also note that the $custom key in the sails.config.http.middleware.order array indicates where the customMiddleware code will be executed, so you can change the order if necessary.
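For the simpler request-handler case, a sketch of what config/http.js can look like (the order list below mirrors the 0.10 defaults; keep whatever your generated file already contains and just slot your handler in):

// config/http.js
module.exports.http = {
    middleware: {
        // A plain (req, res, next) handler, like the commented-out default example
        myRequestLogger: function (req, res, next) {
            console.log("Requested ::", req.method, req.url);
            return next();
        },
        // The order array controls where each named middleware runs;
        // '$custom' marks where customMiddleware(app) is applied.
        order: [
            'startRequestTimer',
            'cookieParser',
            'session',
            'myRequestLogger',
            'bodyParser',
            'handleBodyParserError',
            'compress',
            'methodOverride',
            'poweredBy',
            '$custom',
            'router',
            'www',
            'favicon',
            '404',
            '500'
        ]
    }
};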

WCF service contract to receive file upload from ExtJs front end

I looked through existing questions but could not find an exact match. Maybe I'm missing something obvious; if so, please point me to the proper place. Here is my problem:
I have an ExtJS front-end application that needs to upload binary data to the WCF back end. On the front end I have the following code (I use the Ext.form.field.File control to let the user select a file):
// Create a dummy form in the controller after the user selected a file
var form = Ext.create('Ext.form.Panel', {
    items: [ my_file_field ]
});
form.getForm().submit({
    method: 'POST',
    url: 'myservice.url',
    ...
});
On the backend I have the following contract:
namespace MyApp
{
    [ServiceContract]
    public interface ITransferService
    {
        [OperationContract]
        [WebInvoke(UriTemplate = "UploadImage", Method = "POST")]
        void SaveImage(Stream buffer);
    }
}
It works 'fine' except for one little thing: in the stream inside SaveImage() I get not only the binary data from the file the user selected but also a bunch of multipart headers and encoded fields:
------WebKitFormBoundary26wAkvwGTnAMELFM
Content-Disposition: form-data; name="ext-gen1654"; filename="photo.png"
Content-Type: application/octet-stream
..... binary data goes here ....
------WebKitFormBoundary26wAkvwGTnAMELFM--
What am I missing? How do I change the contract of the service so I get clean binary data?
I found this post: http://antscode.blogspot.com/2009/11/parsing-multipart-form-data-in-wcf.html
Does anybody know a simpler solution? Not that the proposed parsing logic is complicated, but I was under the impression that this is a common task and there should be an easier way of doing it. I can't believe Microsoft doesn't handle standard web-page POSTs in WCF out of the box...
This should return the files attached to the current request, accessed by file key or index (note that HttpContext.Current is only available in WCF when ASP.NET compatibility mode is enabled). Hope this helps!
Stream file = HttpContext.Current.Request.Files.Get(<file_key>).InputStream;
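If changing the client is an option, another way to end up with a clean stream is to skip the multipart form submit entirely and POST the raw file bytes with an XHR. A sketch, assuming ExtJS 4+ (where the file field exposes fileInputEl) and a browser with XHR2/File support; 'myservice.url/UploadImage' stands in for your real endpoint:

// Send the selected file as the raw request body: no multipart boundaries,
// so the WCF Stream contains only the file bytes.
var file = my_file_field.fileInputEl.dom.files[0];
var xhr = new XMLHttpRequest();
xhr.open('POST', 'myservice.url/UploadImage');
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
xhr.onload = function () {
    console.log('Upload finished with status', xhr.status);
};
xhr.send(file);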