CSRF failure in custom mongoose pre-hook (Keystone.js)

I'm using Keystone's LocalFile type to handle image uploads. Similar to the Cloudinary autoCleanup option, I want to be able to delete the uploaded file itself, in addition to the corresponding Mongo entry, when deleting entries through the Admin UI.
In this case, I want to delete an "Album" and its corresponding album cover.
var fs = require('fs');
Album.schema.pre('remove', function (next) {
    var cover = this._original.album_cover;
    fs.unlink(cover.path + '/' + cover.filename, function (err) {
        if (err) return next(err); // surface unlink errors instead of swallowing them
        console.log('deleted');
        next(); // continue the remove once the file is gone
    });
});
I get "CSRF failure" when using the fs module. I thought all CSRF protection was handled internally with Keystone.
Anyone know of a better solution to this?

Took a 10-minute break, came back, and it seems to be working now. I also found this, which seems to be the explanation:
"Moreover double check your session timeout. In my dev settings the session duration is set to 3 minutes. So, if I end up editing something for more than that time, Keystone will return a CSRF error on save because the new session (generate in the meantime) invalidates the old token."
https://github.com/keystonejs/keystone/issues/1330
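For anyone else hitting this: Keystone's session handling sits on top of Express, so the knob the quote describes is the session cookie's maxAge. A minimal sketch in plain express-session terms (not Keystone-specific config; the option names below are express-session's):
var session = require('express-session');
// A short maxAge like the quoted 3 minutes means a new session (and CSRF
// token) is generated mid-edit, invalidating the token the form was
// rendered with. Raising it in development avoids the spurious failures.
app.use(session({
    secret: 'changeme',                // placeholder secret
    resave: false,
    saveUninitialized: false,
    cookie: { maxAge: 60 * 60 * 1000 } // one hour instead of 3 minutes
}));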

Related

Intermittent error from Rally 'Not authorized to perform action: Invalid key' for POST request in Chrome extension

I developed a Chrome extension using Rally's WSAPI v2.0, and it basically does the following things:
1. get the user and project, and store them
2. get the current iteration every time
3. send a POST request to create a work item
For the third step, I sometimes get the error ["Not authorized to perform action: Invalid key"], and have since the end of last month.
[updated] The error can be reproduced every time if I log in to the Rally website via SSO before using the extension to send requests via API key.
What's the best practice for sending subsequent requests via API key in my extension, given that I can't control end users' habits?
I did see some similar posts, but none of them was helpful... and in case it helps:
- I'm adding ZSESSIONID: apikey in my request header, instead of a user/password, to authenticate, so I believe no security token is needed (https://comm.support.ca.com/kb/api-key-and-oauth-client-faq/kb000011568)
- the URL starts with https://rally1.rallydev.com/slm/webservice/v2.0/
- the issue is fixed after clearing cookies for https://rally1.rallydev.com/, but somehow it reappears some time later
I checked the cookies when the issue was reproduced, and found one named ZSESSIONID whose value had become something other than the API key. Not sure if that matters though...
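Given that, one workaround would be to have the extension delete that cookie before each request, so a stale SSO session can't shadow the API key. A sketch using the chrome.cookies API (this assumes the "cookies" permission and a host permission for rally1.rallydev.com in the manifest):
chrome.cookies.remove({
    url: 'https://rally1.rallydev.com/',
    name: 'ZSESSIONID'
}, function () {
    // the stale cookie is gone; safe to fire the API-key request from here
});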
Code for the request:
function initXHR(method, url, apikey, cbFunc) {
    let httpRequest = new XMLHttpRequest();
    ...
    httpRequest.open(method, url);
    httpRequest.setRequestHeader('Content-Type', 'application/json');
    httpRequest.setRequestHeader('Accept', 'application/json');
    httpRequest.setRequestHeader('ZSESSIONID', apikey);
    httpRequest.onreadystatechange = function() {
        ...
    };
    return httpRequest;
}
...
usReq = initXHR('POST', baseURL + 'hierarchicalrequirement/create', apikey, function(){...});
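The request is then fired with a JSON body. For illustration, a hypothetical payload following the WSAPI convention of wrapping the fields in an object keyed by the artifact type:
usReq.send(JSON.stringify({
    HierarchicalRequirement: { Name: 'New story' } // hypothetical fields
}));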
Does anyone have any ideas or suggestions? Thanks a million!
I've seen this error when the API key had both read-only and full-access grants configured. I would start by making sure your key only has the full-access grant.

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
    $('#fine-uploader').fineUploaderS3({
        request: {
            endpoint: "http://mybucket.s3.amazonaws.com",
            accessKey: "changeme"
        },
        signature: {
            endpoint: "endpoint.php"
        },
        uploadSuccess: {
            endpoint: "success.html"
        },
        template: 'qq-template'
    });
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that the "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where do I generate a key pair in S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should have created two distinct IAM users: one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
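To make the mapping concrete, here is a sketch of the client config with the roles spelled out. CLIENT_PUBLIC_KEY is a placeholder for the restricted client-side IAM user's access key; its matching secret key belongs only on the server, inside endpoint.php:
$('#fine-uploader').fineUploaderS3({
    request: {
        endpoint: 'http://mybucket.s3.amazonaws.com',
        accessKey: 'CLIENT_PUBLIC_KEY' // access key ID only, never a secret key
    },
    signature: {
        endpoint: 'endpoint.php' // server-side signing with the client user's secret key
    }
});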

DocPad: show error/success message on contact form

I added a route in my docpad.coffee file to handle form submissions, which I validate using the express-validator middleware. Now, depending on the validation, I want to redirect users to the same contact page but display either a success message when validation succeeds (at which point I'll send an email), or the error messages.
I didn't manage to pass the validation messages to the template to display them. I tried almost all combinations of dynamic: true/false, res.locals = validationMessages, res.sessions = validationMessages, and res.templateData = validationMessages, with no success.
Furthermore, adding dynamic: true made changes to the content not appear at all, whatever refresh strategy I used (private mode, cleaning the cache, relaunching DocPad, refreshing without cache, etc.). I should probably file a bug about it.
How can this be done?
I'm using DocPad 6.53.0 (latest to date) and node 0.10.15 on OS X 10.8.4.
I cheated on this one a bit by appending a hash to the redirect URL (e.g. "www.mywebsite.com/#messagesent"). I then use client-side JavaScript to read the hash and show the appropriate message. Something like this:
if (location.hash == "#messagesent") {
    $('#message-sent').show();
    setTimeout(function () {
        $('#message-sent').fadeOut(1000);
    }, 1000);
}
Not quite what you were asking though :)
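For completeness, the server side of this trick is just a redirect whose hash depends on the validation result. A sketch in plain Express terms (in docpad.coffee this would be written in CoffeeScript; the route path and the valid flag are hypothetical):
app.post('/contact', function (req, res) {
    // ...run the express-validator checks here...
    var valid = true; // hypothetical validation result
    res.redirect(valid ? '/contact#messagesent' : '/contact#messagefailed');
});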

ASP.NET Web API - Reading querystring/formdata before each request

For reasons outlined here, I need to review a set of values from the querystring or formdata before each request (so I can perform some authentication). The keys are the same each time and should be present in each request; however, they will be located in the querystring for GET requests, and in the formdata for POST and others.
As this is for authentication purposes, it needs to run before the request; at the moment I am using a MessageHandler.
I can work out whether I should be reading the querystring or formdata based on the method, and when it's a GET I can read the querystring fine using Request.GetQueryNameValuePairs(); however, the problem is reading the formdata when it's a POST.
I can get the formdata using Request.Content.ReadAsFormDataAsync(); however, formdata can only be read once, and when I read it here it is no longer available to the request (i.e. my controller actions get null models).
What is the most appropriate way to consistently and non-intrusively read the querystring and/or formdata from a request before it gets to the request logic?
Regarding your question of which place would be better: in this case I believe AuthorizationFilters to be better than a message handler, but either way the problem is related to reading the body multiple times.
After doing Request.Content.ReadAsFormDataAsync() in your message handler, can you try doing the following?
Stream requestBufferedStream = Request.Content.ReadAsStreamAsync().Result;
requestBufferedStream.Position = 0; // reset to 0: ReadAsFormDataAsync read the entire stream and left the position at the end, so no bytes would be read during parameter binding and you would see null values
Note: whether a request's content can be read once or multiple times depends on the host's buffer policy. By default, the host's buffer policy is Buffered, in which case you can reset the position back to 0. However, if you explicitly make the policy Streamed, then you cannot reset it back to 0.
What about using ActionFilterAttributes?
This code worked well for me:
public HttpResponseMessage AddEditCheck(Check check)
{
    var request = ((System.Web.HttpContextWrapper)Request.Properties.ToList<KeyValuePair<string, object>>().First().Value).Request;
    var i = request.Form["txtCheckDate"];
    return Request.CreateResponse(HttpStatusCode.OK);
}

ExtJs 4 Store's AJAX proxy is not called on Store add — what is missing?

I have a Grid, a Store and Model for its data, and an AJAX proxy for the Store pointing to my self-written PHP back end. The PHP back end writes to a log each time it is called.
The system works OK for Read, Update and Delete calls. However, now I need to add a new record to the Store, which I do this way:
(here, some new data were generated...)
var newEntry = Ext.ModelManager.create({
    id: id,
    title: title,
    url: '/php/' + fname,
    minithumb: '/php/' + small,
    thumb: '/php/' + thumb
}, 'MyApp.model.fileListModel');
var store = Ext.getCmp('currGallery').getStore();
store.add(newEntry);
store.sync();
I have the new line appearing in the Grid.
But with or without the sync() call, no calls go to my PHP back end. It does, however, read one more time. The Store has autoSync: true set and does a great job of updating data automatically when I edit an existing line in the Grid.
What am I missing?
Try not setting the id when creating the new record.
In fact I was missing a
newEntry.phantom = true;
flag. After I set it before adding to the store, the Store and its Proxy started sending data to the server.
Maybe the ID solution also works, I don't know.
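For context, the two answers are related: in ExtJS 4 a record created with an explicit id is treated as already persisted (its phantom flag stays false), so sync() has nothing to create. A sketch of the fix in place:
var newEntry = Ext.ModelManager.create({
    title: title // omitting the id would also leave the record a phantom
}, 'MyApp.model.fileListModel');
newEntry.phantom = true; // mark as never-saved so sync() sends a create
store.add(newEntry);
store.sync(); // now issues the create call to the PHP back end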