How to mock a long JSON response via the dojo request registry?

I'm trying to follow this article to mock some responses.
I'm porting mocked data from an existing mocking service. There are some really long JSON responses, such as:
"{\"Layout\":{\"Id\":\
.......
"Image1\":\"test.png\",\"Image2\":\"\",\"multi\":[\"test1\",\"test2\",\"test3\"]}}"
There are a few hundred lines in the "......". Is there an easy way of doing this? Can I load the response from a file when I register the mock response?

It should be trivial to set up a mock provider to respond with data from separate local static files. Just return the result of sending an XHR (with dojo/request/xhr) to the desired static resource.
The following example assumes you have a static JSON resource in a path relative to your mock provider module:
define([
    'require',
    'dojo/request/registry',
    'dojo/request/xhr'
], function (require, registry, xhr) {
    registry.register('/service', function (url, options) {
        // Presuming you're already passing handleAs: 'json' in options anyway,
        // you can just pass options to the xhr call as-is.
        return xhr(require.toUrl('./data/sample.json'), options);
    });
    // ...
    return registry;
});
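For reference, here is a rough sketch of how the application side might consume this; the 'app/mocks/registry' module id is a placeholder for wherever the provider module above lives, and it assumes data/sample.json actually sits next to it:

require([
    'app/mocks/registry'  // hypothetical module id for the provider defined above
], function (registry) {
    // The registry module is itself callable like dojo/request: URLs are matched
    // against the registered '/service' entry and answered from data/sample.json.
    registry('/service', { handleAs: 'json' }).then(function (data) {
        console.log(data.Layout);
    });
});

If you would rather have plain dojo/request calls routed through the registry, Dojo also lets you point requestProvider at a registry module in dojoConfig; double-check that option against the documentation for your Dojo version.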

Related

How to HTML encode a JSON response in ASP.NET Core?

I am looking into Stored Cross-site Scripting vulnerabilities that occur when the data provided by an attacker is saved on the server, and is then displayed upon subsequent requests without proper HTML escaping.
I have a .NET 5 ASP.NET Core application using MVC. The application uses jQuery and Telerik's ASP.NET Core library; both use JSON data returned from the server.
The application has several action methods that query stored data from the database and return it as a JsonResult.
For example, the following action method
[HttpGet]
[Route("items/json/{id}")]
public async Task<ActionResult> GetName([FromRoute] int id)
{
    var i = await _itemService.GetWorkItem(id);
    return Json(new
    {
        ItemName = i.Name
    });
}
and the client-side script shows the ItemName in HTML using jQuery:
$.get(url)
    .done(function (response, textStatus, jqXHR) {
        $("#itemname").html(response);
    });
Suppose a user has stored the name as <script>alert('evil');</script>; then the code above will execute the evil script on the client side.
The application is using Newtonsoft as the default serializer. By default the response does not get HTML encoded. The response from the server looks like:
{"ItemName":"\u003Cscript\u003Ealert(\u0027evil\u0027);\u003C/script\u003E"}
Setting default JsonSerializerSettings in Startup as below also does not work the same way as HTML encoding:
var serializerSettings = new JsonSerializerSettings()
{
    StringEscapeHandling = StringEscapeHandling.EscapeHtml
};
Is there any default way in ASP.NET Core (.NET 5) to handle HTML encoding during JSON serialization?
I understand that WebUtility.HtmlEncode() and the HtmlEncoder class are available and can be used to apply encoding selectively. I am looking for a solution that handles HTML encoding by default during JSON serialization.
Does the new System.Text.Json apply HTML encoding to property values by default?
UPDATE 1
The comments below suggest configuring NewtonsoftJson in Startup.cs. Note that the question is NOT how to configure Newtonsoft globally, but how to HTML-encode property values during serialization so the client (browser) won't execute the malicious script.
I have tried Newtonsoft.Json.StringEscapeHandling.EscapeHtml, which did not work. The script still executes:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews()
        .AddNewtonsoftJson((options) =>
        {
            options.SerializerSettings.StringEscapeHandling = Newtonsoft.Json.StringEscapeHandling.EscapeHtml;
        });
}
You have to use Newtonsoft.Json if you don't want to write a lot of code for each fairly simple case. This is working for me:
[HttpGet]
public async Task<ActionResult> MyTest()
{
    return new JsonResult(new
    {
        ItemName = "<script> alert('evil');</script>"
    });
}
and use response.itemName on the client side:
$("#itemname").html(response.itemName);
To use Newtonsoft.Json, change your startup code to this:
using Newtonsoft.Json.Serialization;

services.AddControllersWithViews()
    .AddNewtonsoftJson(options =>
        options.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver());
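If you also want the values HTML-encoded during the JSON serialization itself (which is what the question asks about), one possible sketch with System.Text.Json is a custom string converter. The converter name here is made up, and it encodes every string on the way out, which may be too broad for some payloads:

using System;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical converter: HTML-encodes string values while writing JSON.
public class HtmlEncodeStringConverter : JsonConverter<string>
{
    public override string Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => reader.GetString();

    public override void Write(Utf8JsonWriter writer, string value, JsonSerializerOptions options)
        => writer.WriteStringValue(HtmlEncoder.Default.Encode(value));
}

// Registered globally in Startup.ConfigureServices:
// services.AddControllersWithViews()
//     .AddJsonOptions(o => o.JsonSerializerOptions.Converters.Add(new HtmlEncodeStringConverter()));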

How to handle JWT authentication with RxDB?

I have a local RxDB database and I want to connect it with CouchDB. Everything seems to work fine except for authentication. I have no idea how to add it other than inserting credentials in the database URL:
database.tasks.sync({
    remote: `http://${username}:${pass}@127.0.0.1:5984/tododb`,
});
I would like to use JWT auth but can't find how to add a token to the sync request. I found only some solutions for PouchDB (the pouchdb-authentication plugin) but can't get them working with RxDB.
RxDB is tightly coupled with PouchDB and uses its sync implementation under the hood. To my understanding, the only way to add custom headers to a remote PouchDB instance (which is what is created for you when you pass a URL as the remote argument in sync) is to intercept the HTTP request:
var db = new PouchDB('http://example.com/dbname', {
    fetch: function (url, opts) {
        opts.headers.set('X-Some-Special-Header', 'foo');
        return PouchDB.fetch(url, opts);
    }
});
PouchDB replication documentation (sync) also states that:
The remoteDB can either be a string or a PouchDB object. If you have a fetch override on a remote database, you will want to use PouchDB objects instead of strings, so that the options are used.
Luckily, RxDB's RxCollection.sync() accepts not only a server URL as the remote argument, but also another RxCollection or a PouchDB instance.
RxDB even re-exports the internally used PouchDB module, so you do not have to install PouchDB as a direct dependency.
import { ..., PouchDB } from 'rxdb';
// ...
const remotePouch = new PouchDB('http://127.0.0.1:5984/tododb', {
    fetch: function (url, opts) {
        opts.headers.set('Authorization', `Bearer ${getYourJWTToken()}`);
        return PouchDB.fetch(url, opts);
    }
});

database.tasks.sync({
    remote: remotePouch,
});

Enabling binary media types breaks the OPTIONS call for POST (CORS) in AWS Lambda

New to AWS...
We have a .NET Core microservice running serverless on AWS as Lambda functions.
Our controller looks like this:
[Route("api/[controller]")]
[ApiController]
public class SomeController : ControllerBase
{
    [HttpGet()]
    [Route("getsomedoc")]
    public async Task<IActionResult> GetSomeDoc()
    {
        byte[] content;
        // UI needs this to process the document
        var contentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
        contentDisposition.FileName = "File Name";
        Response.Headers[HeaderNames.ContentDisposition] = contentDisposition.ToString();
        return File(content, "application/octet-stream");
    }

    [HttpPost()]
    [Route("somepost")]
    public async Task<IActionResult> SomePost()
    {
        return null;
    }
}
URLs:
{{URL}}/getsomedoc
{{URL}}/somepost
We have enabled 'Binary Media Types' (set to */*) in the API Gateway settings for getsomedoc to work; otherwise it was returning the byte array instead of the file.
But this breaks our 'somepost' call when the UI accesses the API using
Method: OPTIONS with Access-Control-Request-Method: POST (the CORS preflight).
When we remove the binary media type, 'somepost' starts working.
Looking for suggestions as to why this might be happening, and what we can add/remove from the gateway to fix it.
Well, we ended up resolving this in a strange way.
We added two API Gateways for the Lambda:
- binary media types enabled on one of them
- disabled on the other one
Then:
- getsomedoc uses the gateway where binary media types are enabled
- somepost uses the other one
Wish there was a better way!
I have found this same behavior with my API. While looking everywhere for some help, I found a few things that address the issue:
Basically, this bug report says the problem is having CORS enabled while also using the generic Binary Media Type "*/*". Apparently the OPTIONS method gets confused by this. They discuss this in terms of using Serverless, but it should apply to using the console or other ways of interacting with AWS.
They link to a possible solution: you can modify the Integration Response of the OPTIONS method - change the Mapping Template's Content-Type to an actual binary media type, like image/jpeg. They say this allows you to leave the binary media type in Settings as "*/*". This is a little hacky, but at least it is something.
There was also this alternate suggestion in the issues section of this GitHub repo that is a little less hacky. You can set the contentHandling parameter of the OPTIONS Integration Request to "CONVERT_TO_TEXT"... but you can only do this via CloudFormation or the CLI (not via the console). This is also the solution recommended by some AWS technicians.
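As a rough sketch of the CLI route, the call might look like this; the REST API id and resource id below are placeholders, and the patch path should be double-checked against the aws apigateway update-integration documentation for your CLI version:

aws apigateway update-integration \
    --rest-api-id abc123 \
    --resource-id def456 \
    --http-method OPTIONS \
    --patch-operations op=replace,path=/contentHandling,value=CONVERT_TO_TEXT

A fresh deployment of the stage (aws apigateway create-deployment) is still needed afterwards for the change to take effect.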
Another possible workaround is to set up a custom Lambda function to handle the OPTIONS request; this way the API Gateway can keep the "*/*" binary media type.
Create a new Lambda function for handling OPTIONS requests:
exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'access-control-allow-origin, content-type, access-control-allow-methods',
            'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS'
        },
        body: JSON.stringify("OK")
    };
    return response;
};
Then, in your API Gateway OPTIONS method:
- Change the integration type from Mock to Lambda Function.
- Make sure to check 'Use Lambda proxy integration'.
- Select the correct region and point to the created Lambda function.
This way any OPTIONS request made from the browser will trigger the Lambda function and return the custom response.
Be aware this solution might involve costs.

InversifyJS: dependency instantiation per HTTP Request

I'm using InversifyJS in a project with Express. I would like to create a connection to a Neo4j database, and this process has two objects:
The driver object - could be shared across the application and created only once
The session object - each HTTP request should create a session against the driver, whose lifecycle is the same as the HTTP request lifecycle (as soon as the request ends, the session is destroyed)
Without InversifyJS, this problem is solved using a simple algorithm:
exports.getSession = function (context) { // 'context' is the http request
    if (context.neo4jSession) {
        return context.neo4jSession;
    }
    else {
        context.neo4jSession = driver.session();
        return context.neo4jSession;
    }
};
(example: https://github.com/neo4j-examples/neo4j-movies-template/blob/master/api/neo4j/dbUtils.js#L13-L21)
To create a static dependency for the driver, I can inject a constant:
container.bind<DbDriver>("DbDriver").toConstantValue(new Neo4JDbDriver());
How can I create a dependency that is instantiated only once per request, and retrieve it from the container?
I suspect I must invoke the container in a middleware like this:
this._express.use((request, response, next) => {
    // get the container and create an instance of the Neo4JSession for the request lifecycle
    next();
});
Thanks in advance.
I see two solutions to your problem:
1. Use inRequestScope() for the DbDriver dependency (available since version 4.5.0). It will work if you use a single composition root for one HTTP request; in other words, you call container.get() only once per HTTP request. A sketch of this option follows at the end of this answer.
2. Create a child container, attach it to response.locals._container and register DbDriver as a singleton:
let appContainer = new Container();
appContainer.bind(SomeDependencySymbol).to(SomeDependencyImpl);

function injectContainerMiddleware(request, response, next) {
    let requestContainer = appContainer.createChildContainer();
    requestContainer.bind<DbDriver>("DbDriver").toConstantValue(new Neo4JDbDriver());
    response.locals._container = requestContainer;
    next();
}

express.use(injectContainerMiddleware); // insert injectContainerMiddleware before any other request handler functions
In this example you can retrieve DbDriver from response.locals._container in any request handler/middleware function registered after injectContainerMiddleware, and you will get the same instance of DbDriver.
This will work, but I'm not sure how performant it is. Additionally, I guess you may need to somehow dispose of requestContainer (unbind all dependencies and remove the reference to the parent container) after the HTTP request is done.
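For the first option, a minimal sketch of what the inRequestScope() binding might look like; the Neo4JSessionFactory class is a made-up placeholder for whatever wraps driver.session(), and the scope only maps cleanly to an HTTP request if the composition root is resolved once per request:

import "reflect-metadata";
import { Container, injectable } from "inversify";

@injectable()
class Neo4JSessionFactory {
    // hypothetical wrapper that opens a session from the shared driver
}

const container = new Container();

// A new instance per resolution "request" (one container.get() call graph),
// so it behaves per-HTTP-request only when the root is resolved once per request.
container.bind<Neo4JSessionFactory>("Neo4JSession")
    .to(Neo4JSessionFactory)
    .inRequestScope();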

Setting the HTTP Accept header for JsonRestStore

I'm using JsonRestStore but would like to add a custom Accept header to it. What's the best way to go about this?
This is similar to how the dijit.layout.ContentPane allows you to affect the underlying XHR by setting ioArgs. So the question could be "what is JsonRestStore's ioArgs?"
I'm using declarative syntax, but would gladly see both methods...
(Please note: I'm not interested in hacking my way around this by modifying the base XHR.)
Your best bet is providing a custom service to JsonRestStore. The easiest way I found of doing this is building the service from dojox.rpc.Rest. In the constructor you can provide a function that creates the request arguments for all XHR requests, e.g.:
function getRequest(id, args) {
    return {
        url: '/service/' + id,
        handleAs: 'json',
        sync: false,
        headers: {
            Accept: 'your custom header'
        }
    };
}

var service = new dojox.rpc.Rest('/service/', true /*isJson*/,
    undefined /*schema*/, getRequest);

var store = new dojox.data.JsonRestStore({ service: service });
This completely ignores the args parameter, which can include sorting and range arguments for your service; a sketch of one way to forward them follows below.
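As a rough sketch only, getRequest could also forward a paging range via the HTTP Range header that JsonRestStore-style services conventionally use; the args.start/args.count property names are assumptions, so compare them with service._getRequest in the dojox.rpc.Rest source before relying on this:

function getRequest(id, args) {
    args = args || {};
    var headers = {
        Accept: 'your custom header'
    };
    // Paging is conventionally expressed as a Range header such as "items=0-24".
    // 'start' and 'count' are assumed property names; verify against dojox.rpc.Rest.
    if (typeof args.start === 'number') {
        var end = args.count ? (args.start + args.count - 1) : '';
        headers.Range = 'items=' + args.start + '-' + end;
    }
    return {
        url: '/service/' + id,
        handleAs: 'json',
        sync: false,
        headers: headers
    };
}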
These links will provide more information:
Use Dojo's JsonRestStore with your REST services: IBM developerWorks article with a more advanced and customizable solution
RESTful JSON + Dojo Data: Sitepen post
dojox.rpc.Rest source file (look for service._getRequest)