I'm attempting to use the ExtJS 7.2.0 Modern toolkit File Field to upload multiple files.
var txtSiteCode = Ext.create('Ext.field.Text', {
        label: 'Site Code',
        required: true,
        name: 'site_code',
        responsiveConfig: {
            'width < 800': {
                labelAlign: 'top'
            },
            'width >= 800': {
                labelAlign: 'left'
            }
        }
    }),
    fldFile = Ext.create('Ext.field.File', {
        label: 'Images',
        name: 'files',
        multiple: true,
        accept: 'image',
        responsiveConfig: {
            'width < 800': {
                labelAlign: 'top'
            },
            'width >= 800': {
                labelAlign: 'left'
            }
        }
    }),
    form = Ext.create('Ext.form.Panel', {
        url: 'data/test.php',
        method: 'post',
        enctype: 'multipart/form-data',
        items: [txtSiteCode, fldFile]
    });
When the form is submitted, I only receive 1 file on the server in the $_FILES array. However, judging from the size of the request payload, it appears that all files should in fact be available.
FILE COUNT: 1, referer:
[Thu Aug 20 09:05:09.715473 2020] [php7:notice] [pid 12749] Array
(
    [name] => FBK90084_Primary_14_PE4 (2).jpg
    [type] => image/jpeg
    [tmp_name] => /tmp/phpxjd0pK
    [error] => 0
    [size] => 2973487
)
The above is a count of the $_FILES array size and a dump of the $_FILES array itself. This was the result of 3 jpg files being selected in the file picker.
Status: 200 OK
Version: HTTP/1.1
Transferred: 5.67 MB (0 B size)
Referrer Policy: no-referrer-when-downgrade
Above is the server response summary from the browser; as you can see, the transferred size is 5.67 MB (two ~3 MB files were selected in this case).
Any idea why the $_FILES array is not showing me all files submitted?
UPDATE: Something to note: if I manually set the name property of the rendered file input to include '[]' at the end (via devtools), I can get multiple files to upload to the server. The issue then seems to be that the form is not treating the file input as a multiple upload with the proper 'name[]' convention. Ext strips non-alphanumeric characters from name strings, so I cannot set the field's name attribute to 'files[]' in code. I'm guessing I can probably override this? Just unsure how.
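One idea I'm considering is the rough, untested sketch below, which would just patch the rendered DOM the same way I did in devtools (the 'painted' event and the selector timing are guesses on my part):
form.on('painted', function () {
    // patch the underlying <input type="file"> to use the '[]' suffix so PHP
    // builds $_FILES['files'] as an array (mirrors the manual devtools edit)
    var input = document.querySelector('input[type="file"][name="files"]');
    if (input) {
        input.setAttribute('name', 'files[]');
    }
});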
If you are in doubt about whether the files are really being sent as you expect, I suggest separating the problem into two parts: the frontend (Ext JS) and the backend (your PHP script receiving the files). You can verify whether the frontend (Sencha Ext JS) really sends multiple files by using a change listener inside the file field config.
Ext.create('Ext.field.File', {
    label: 'Images',
    name: 'files',
    multiple: true,
    accept: 'image',
    responsiveConfig: {
        'width < 800': {
            labelAlign: 'top'
        },
        'width >= 800': {
            labelAlign: 'left'
        }
    },
    listeners: {
        change: function (field, value) {
            var files = field.getFiles(); // get all selected files
            files.forEach(function (file) {
                console.log(file); // check your console; each selected file object is logged
            });
        }
    }
});
Another way to check whether the files are sent as multiple form params ('name[]') or as a single form param ('name') that contains multiple files is to look at the network tool in your browser after you submit to the backend; it will show you the answer.
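If you also want to check the backend on its own, independent of the Ext form submit, a quick sketch like this (reusing getFiles() from above and the data/test.php URL from your form config) sends the files with the 'name[]' convention so you can confirm PHP sees multiple entries:
var fd = new FormData();
Array.from(fldFile.getFiles()).forEach(function (file) {
    fd.append('files[]', file); // one entry per file, using the '[]' convention
});
fetch('data/test.php', { method: 'POST', body: fd })
    .then(function (res) { return res.text(); })
    .then(function (text) { console.log(text); });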
Related
I'm trying to upload files with a DTO object, but I get this error:
[Nest] 8296 - 11/29/2021, 1:05:08 AM [ExceptionsHandler] request entity too large +378162ms
PayloadTooLargeError: request entity too large
Here is my code:
@Post()
@UseInterceptors(
  FileFieldsInterceptor([
    { name: 'picture', maxCount: 1 },
    { name: 'audio', maxCount: 1 },
  ]),
)
create(@UploadedFiles() files, @Body() dto: CreateTrackDto) {
  console.log('files', files);
  return this.trackService.create(dto, '', '');
}
I've tried uploading files without the DTO object and it works fine, but when I added the second param as a DTO object from the body, I got this error.
I also tried setting limits for uploaded files in the main.ts file like this:
// app.use(json({ limit: '50mb' }));
// app.use(urlencoded({ extended: true, limit: '50mb' }));
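For context, those commented lines sit in my bootstrap roughly like this (a sketch of a standard Nest main.ts; json and urlencoded are the express body parsers):
import { NestFactory } from '@nestjs/core';
import { json, urlencoded } from 'express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.use(json({ limit: '50mb' }));
  app.use(urlencoded({ extended: true, limit: '50mb' }));
  await app.listen(3000);
}
bootstrap();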
I also tried setting limits in FileFieldsInterceptor via the localOptions object, but I got the same error.
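To be concrete, by limits in the interceptor's localOptions I mean something along these lines (a sketch; the size value is just a placeholder):
@Post()
@UseInterceptors(
  FileFieldsInterceptor(
    [
      { name: 'picture', maxCount: 1 },
      { name: 'audio', maxCount: 1 },
    ],
    {
      // the second argument is passed through to multer
      limits: { fileSize: 50 * 1024 * 1024 },
    },
  ),
)
create(@UploadedFiles() files, @Body() dto: CreateTrackDto) {
  return this.trackService.create(dto, '', '');
}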
Does anyone know how to fix it?
"#nestjs/common": "^7.6.15",
"#nestjs/core": "^7.6.15",
"#nestjs/mongoose": "^9.0.1",
"#nestjs/platform-express": "^7.6.15"
"#types/multer": "^1.4.7"
windows 10
I'm working on server-side speech to text transcription, and I was able to get a post endpoint working for smaller files, but anything larger than ~7Mb throws an error that doesn't make much sense to me.
I know this code works for small files, and that the callback URL is registered correctly. I've tried throwing the "readableTranscriptStream" creation in a setTimeout() function, just to make sure it wasn't an issue of the full file not being passed to the createJob call. I've also tried just passing req.file.buffer as an argument for the audio param. I also know it isn't an issue of the encoding of the files being incorrect, as I've used the same audio file, slowly increasing the length and size of the file until it threw this error, and it worked until I hit about 7.2Mb, or ~3 min of audio in a .wav 16-bit encoded file.
I've also tried this with fs.createReadStream('./local_test.wav') as an argument and gotten the same error back, though when I tried that, the _requestBodyLength field in the error was 10485760, and the _requestBodyBuffers was an array of objects. I realize this 10485760 is the same as the maxBodyLength, but the docs for the API say that "You can pass a maximum of 1 GB and a minimum of 100 bytes of audio with a request", and the test audio was, again, ~7.2 Mb.
const speechToText = new SpeechToTextV1({
username: process.env.wastonUsername,
password: process.env.watsonPassword,
url: 'https://stream.watsonplatform.net/speech-to-text/api/'
});
const storage = multer.memoryStorage();
const upload = multer({ storage: storage, limits: { fields: 1, fileSize: 209715200, files: 1, parts: 2 } });
upload.single('track')(req,res, (err) => {
req.setTimeout(0);
if (err) {
console.log(err);
return res.status(400).json({ message: err })
}
const registerCallbackParams = {
callback_url: <my_callback_url>,
user_secret: "Test"
};
const readableTranscriptStream = new Readable();
readableTranscriptStream.push(req.file.buffer);
readableTranscriptStream.push(null);
const createJobParams = {
audio: readableTranscriptStream,
callback_url: <my_callback_url>,
content_type: req.file.mimetype,
events:"recognitions.completed_with_results",
inactivity_timeout: -1
};
speechToText.createJob(createJobParams)
.then(recognitionJob => {
console.log(recognitionJob);
})
.catch(err => {
console.log('error:', err);
});
})
The error I'm getting back is :
error:{
Error: Response not received. Body of error is HTTP ClientRequest object
at formatError (/app/node_modules/ibm-cloud-sdk-core/lib/requestwrapper.js:111:17 )
at /app/node_modules/ibm-cloud-sdk-core/lib/requestwrapper.js:259:19 at process._tickCallback (internal/process/next_tick.js:68:7 )
message:'Response not received. Body of error is HTTP ClientRequest object',
body:Writable {
_writableState:WritableState {
objectMode:false,
highWaterMark:16384,
finalCalled:false,
needDrain:false,
ending:false,
ended:false,
finished:false,
destroyed:false,
decodeStrings:true,
defaultEncoding:'utf8',
length:0,
writing:false,
corked:0,
sync:true,
bufferProcessing:false,
onwrite:[
Function:bound onwrite
],
writecb:null,
writelen:0,
bufferedRequest:null,
lastBufferedRequest:null,
pendingcb:0,
prefinished:false,
errorEmitted:false,
emitClose:true,
bufferedRequestCount:0,
corkedRequestsFree:[
Object
]
},
writable:true,
_events:[
Object:null prototype
] {
response:[
Function:handleResponse
],
error:[
Function:handleRequestError
]
},
_eventsCount:2,
_maxListeners:undefined,
_options:{
maxRedirects:21,
maxBodyLength:10485760,
protocol:'https:',
path:'/speech-to-text/api/v1/recognitions?callback_url=<my_callback_url>&events=recognitions.completed_with_results&inactivity_timeout=-1',
method:'post',
headers:[
Object
],
agent:[
Agent
],
auth:undefined,
hostname:'stream.watsonplatform.net',
port:null,
nativeProtocols:[
Object
],
pathname:'/speech-to-text/api/v1/recognitions',
search:'?callback_url=<my_callback_url>&events=recognitions.completed_with_results&inactivity_timeout=-1'
},
_ended:false,
_ending:true,
_redirectCount:0,
_redirects:[
],
_requestBodyLength:0,
_requestBodyBuffers:[
],
_onNativeResponse:[
Function
],
_currentRequest:ClientRequest {
_events:[
Object
],
_eventsCount:6,
_maxListeners:undefined,
output:[
],
outputEncodings:[
],
outputCallbacks:[
],
outputSize:0,
writable:true,
_last:true,
chunkedEncoding:false,
shouldKeepAlive:false,
useChunkedEncodingByDefault:true,
sendDate:false,
_removedConnection:false,
_removedContLen:false,
_removedTE:false,
_contentLength:null,
_hasBody:true,
_trailer:'',
finished:false,
_headerSent:false,
socket:null,
connection:null,
_header:null,
_onPendingData:[
Function:noopPendingOutput
],
agent:[
Agent
],
socketPath:undefined,
timeout:undefined,
method:'POST',
path:'/speech-to-text/api/v1/recognitions?callback_url=<my_callback_url>&events=recognitions.completed_with_results&inactivity_timeout=-1',
_ended:false,
res:null,
aborted:1558070725953,
timeoutCb:null,
upgradeOrConnect:false,
parser:null,
maxHeadersCount:null,
_redirectable:[
Circular
],
[
Symbol(isCorked)
]:false,
[
Symbol(outHeadersKey)
]:[
Object
]
},
_currentUrl:'https://stream.watsonplatform.net/speech-to-text/api/v1/recognitions?callback_url=<my_callback_url>&events=recognitions.completed_with_results&inactivity_timeout=-1'
}
}
Try adding maxContentLength: Infinity as an option when instantiating SpeechToText
const speechToText = new SpeechToTextV1({
username: 'user',
password: 'pass',
version: '2019-01-01',
maxContentLength: Infinity,
});
The limit is 1 GB; please make sure you are using chunked transfer encoding in the submission, as that is a typical cause of errors when feeding large files. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
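Putting both suggestions together, a rough sketch could look like the following (the module path, file path and content type are assumptions, not verified against your exact SDK version):
const fs = require('fs');
// module path assumed; newer SDK versions use 'ibm-watson/speech-to-text/v1'
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

const speechToText = new SpeechToTextV1({
  username: process.env.wastonUsername,
  password: process.env.watsonPassword,
  url: 'https://stream.watsonplatform.net/speech-to-text/api/',
  maxContentLength: Infinity // lifts the 10485760-byte default visible in the error's _options
});

speechToText.createJob({
  // streaming the audio instead of buffering it lets the HTTP client use
  // chunked transfer encoding for the request body
  audio: fs.createReadStream('./local_test.wav'),
  content_type: 'audio/wav',
  callback_url: '<my_callback_url>',
  events: 'recognitions.completed_with_results',
  inactivity_timeout: -1
})
  .then(recognitionJob => console.log(recognitionJob))
  .catch(err => console.log('error:', err));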
I'm using Heroku, which means I have to upload multiple large files to S3 directly. I'm using Rails 3.2.11 and Ruby 1.9.3. I do not wish to use the carrierwave or paperclip gems, or really change much at this point; I just need to get what I have working.
Before trying to move to S3, if I ran my app locally, I could upload multiple large files to the local file system. When I ran it on Heroku, small files uploaded but large ones failed. Hence the switch to S3.
I tried several tweaks, and also the link below, but it's just too much of a change to what I already have working with the local server's file system (and on Heroku as well, except that Heroku just can't handle large files).
Tried: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails
I've tried some of the other examples here on Stack Overflow but they are too much of a change for what works locally, and well, I don't grasp everything they are doing.
Now, what happens when I do try to upload images?
It's as if the file upload works: the preview images are successfully created, but nothing is ever uploaded to Amazon S3, and I don't receive any kind of error messages (like an S3 authentication failure or anything; nothing).
What do I need to change in order to get the files over to my S3 storage, and what can I write out to the console to detect problems, if any, connecting to my S3?
My form:
<%= form_for @status do |f| %>
{A FEW HTML FIELDS USED FOR A DESCRIPTION OF THE FILES - NOT IMPORTANT FOR THE QUESTION}
File:<input id="fileupload" multiple="multiple" name="image"
type="file" data-form-data = <%= @s3_direct_post.fields %>
data-url= <%= @s3_direct_post.url %>
data-host =<%= URI.parse(@s3_direct_post.url).host %> >
<%= link_to 'submit', "#", :id => 'submit', :remote => true %>
<% end %>
My jquery is:
....
$('#fileupload').fileupload({
formData: {
batch: createUUID(),
authenticity_token:$('meta[name="csrf-token"]').attr('content')
},
dataType: 'json',
acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
maxFileSize: 5000000, // 5 MB
previewMaxWidth: 400,
previewMaxHeight: 400,
previewCrop: true,
add: function (e, data) {
tmpImg.src = URL.createObjectURL(data.files[0]) ; // create image preview
$('#'+ fn + '_inner' ).append(tmpImg);
...
My controller:
def index
#it's in the index just to simplify getting it working
@s3_direct_post = S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
The element that is generated for the form is (via Inspect Element):
<input id="fileupload" multiple="multiple" name="image"
data-form-data="{"key"=>"uploads/34a64607-8d1b-4704-806b-159ecc47745e/${filename}"," "success_action_status"="
>"201"," "acl"=">"public-read"," "policy"=">"[encryped stuff - no need to post]","
"x-amz-credential"=">"
[AWS access key]/[some number]/us-east-1/s3/aws4_request"
," "x-amz-algorithm"=">"AWS4-HMAC-SHA256"
," "x-amz-date"=">"20150924T234656Z"
," "x-amz-signature"=">"[some encrypted stuff]"}"
data-url="https://nunyabizness.s3.amazonaws.com" data-host="nunyabizness.s3.amazonaws.com" type="file">
Help!
With S3 there actually are no easy out-of-the-box solutions for uploading files, because Amazon is a rather complex instrument.
I had a similar issue back in the day and spent two weeks trying to figure out how S3 works, and now use a working solution for uploading files onto S3. I can tell you the solution that works for me; I never tried the one proposed by Heroku.
My plugin of choice is Plupload, since it is the only component I actually managed to get working (apart from simple direct S3 uploads via XHR), and it offers percentage indicators and in-browser image resizing, which I find completely mandatory for production applications, where some users have 20 MB images that they want to upload as their avatar.
Some basics in S3:
Step 1
The Amazon bucket needs the correct configuration in its CORS file to allow external uploads in the first place. The Heroku tutorial already told you how to put the configuration in the right place.
http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
Step 2
Policy data is needed, otherwise your client will not be able to access the corresponding bucket file. I find generating policies to be better done via Ajax calls, so that, for example, an admin gets the ability to upload files into the folders of different users.
In my example, cancan is used to manage security for the given user and figaro is used to manage ENV variables.
def aws_policy_image
user = User.find_by_id(params[:user_id])
authorize! :upload_image, current_user
options = {}
bucket = Rails.configuration.bucket
access_key_id = ENV["AWS_ACCESS_KEY_ID"]
secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
options[:key] ||= "users/" + params[:user_id] # folder on AWS to store file in
options[:acl] ||= 'private'
options[:expiration_date] ||= 10.hours.from_now.utc.iso8601
options[:max_filesize] ||= 10.megabytes
options[:content_type] ||= 'image/' # Videos would be binary/octet-stream
options[:filter_title] ||= 'Images'
options[:filter_extentions] ||= 'jpg,jpeg,gif,png,bmp'
policy = Base64.encode64(
"{'expiration': '#{options[:expiration_date]}',
'conditions': [
{'x-amz-server-side-encryption': 'AES256'},
{'bucket': '#{bucket}'},
{'acl': '#{options[:acl]}'},
{'success_action_status': '201'},
['content-length-range', 0, #{options[:max_filesize]}],
['starts-with', '$key', '#{options[:key]}'],
['starts-with', '$Content-Type', ''],
['starts-with', '$name', ''],
['starts-with', '$Filename', '']
]
}").gsub(/\n|\r/, '')
signature = Base64.encode64(
OpenSSL::HMAC.digest(
OpenSSL::Digest::Digest.new('sha1'),
secret_access_key, policy)).gsub("\n", "")
render :json => {:access_key_id => access_key_id, :policy => policy, :signature => signature, :bucket => bucket}
end
I went as far as putting this method into the application controller, although you could find a better place for it.
The path to this function should be added to the routes, of course.
Step 3
Frontend: get Plupload (http://www.plupload.com/) and make a link to act as the upload button:
<a id="upload_button" href="#">Upload</a>
Make a script that configures the Plupload initialization.
function Plupload(config_x, access_key_id, policy, signature, bucket) {
var $this = this;
$this.config = $.extend({
key: 'error',
acl: 'private',
content_type: '',
filter_title: 'Images',
filter_extentions: 'jpg,jpeg,gif,png,bmp',
select_button: "upload_button",
multi_selection: true,
callback: function (params) {
},
add_files_callback: function (up, files) {
},
complete_callback: function (params) {
}
}, config_x);
$this.params = {
runtimes: 'html5',
browse_button: $this.config.select_button,
max_file_size: $this.config.max_file_size,
url: 'https://' + bucket + '.s3.amazonaws.com/',
flash_swf_url: '/assets/plupload/js/Moxie.swf',
silverlight_xap_url: '/assets/plupload/js/Moxie.xap',
init: {
FilesRemoved: function (up, files) {
/*if (up.files.length < 1) {
$('#' + config.select_button).fadeIn('slow');
}*/
}
},
multi_selection: $this.config.multi_selection,
multipart: true,
// resize: {width: 1000, height: 1000}, // currently causes "blob" problem
multipart_params: {
'acl': $this.config.acl,
'Content-Type': $this.config.content_type,
'success_action_status': '201',
'AWSAccessKeyId': access_key_id,
'x-amz-server-side-encryption': "AES256",
'policy': policy,
'signature': signature
},
// Resize images on clientside if we can
resize: {
preserve_headers: false, // (!)
width: 1200,
height: 1200,
quality: 70
},
filters: [
{
title: $this.config.filter_title,
extensions: $this.config.filter_extentions
}
],
file_data_name: 'file'
};
$this.uploader = new plupload.Uploader($this.params);
$this.uploader.init();
$this.uploader.bind('UploadProgress', function (up, file) {
$('#' + file.id + ' .percent').text(file.percent + '%');
});
// before upload
$this.uploader.bind('BeforeUpload', function (up, file) {
// optional: regen the filename, otherwise the user will upload image.jpg that will overwrite each other
var extension = file.name.split('.').pop();
var file_name = extension + "_" + (+new Date);
up.settings.multipart_params.key = $this.config.key + '/' + file_name + '.' + extension;
up.settings.multipart_params.Filename = $this.config.key + '/' + file_name + '.' + extension;
file.name = file_name + '.' + extension;
});
// shows error object in the browser console (for now)
$this.uploader.bind('Error', function (up, error) {
console.log('Expand the error object below to see the error. Use WireShark to debug.');
alert_x(".validation-error", error.message);
});
// files added
$this.uploader.bind('FilesAdded', function (up, files) {
$this.config.add_files_callback(up, files, $this.uploader);
// p(uploader);
// uploader.start();
});
// when file gets uploaded
$this.uploader.bind('FileUploaded', function (up, file) {
$this.config.callback(file);
up.refresh();
});
// when all files are uploaded
$this.uploader.bind('UploadComplete', function (up, file) {
$this.config.complete_callback(file);
up.refresh();
});
}
Plupload.prototype.init = function () {
//
}
Step 4
The implementation of the general multi-purpose file uploader function:
ImageUploader = {
init: function (user_id, config, callback) {
$.ajax({
type: "get",
url: "/aws_policy_image",
data: {user_id: user_id},
error: function (request, status, error) {
alert(request.responseText);
},
success: function (msg) {
// set aws credentials
callback(config, msg);
}
});
},
// local functions
photo_uploader: function (user_id) {
var container = "#photos .unverified_images" // for example;
var can_render = false;
this.init(user_id,
{
select_button: "upload_photos",
callback: function (file) {
file.aws_id = file.id;
file.id = "0";
file.album_title = "userpics"; // I use this param to manage photo directory
file.user_id = user_id;
//console.log(file);
[** your ajax code here that saves the image object in the database via file variable you get here **]
});
},
add_files_callback: function (up, files, uploader) {
$.each(files, function (index, value) {
// do something like adding a progress bar html
});
uploader.start();
},
complete_callback: function (files) {
can_render = true;
}
}, function (config, msg) {
config.key = "users/" + user_id;
// Most important part:
window.photo_uploader = new Plupload(config, msg.access_key_id, msg.policy, msg.signature, msg.bucket);
});
}
};
The can_render variable is useful so that you can make the application re-render the page only when the uploader is actually done.
And to make the button work from somewhere else call:
ImageUploader.photo_uploader(user_id);
And the button will act as a Plupload uploader button.
What is important is that the policy is made in a way so that no one can upload a photo into someone else's directory.
It would be great to have a version that does the same not via Ajax callbacks but with web hooks; this is something I want to do in the future.
Again, this is not a perfect solution, but something that, in my experience, works well enough for the purpose of uploading images and videos onto Amazon.
In case someone asks why I have this complex object-oriented structure of uploader objects: the reason is that my application has many different kinds of uploaders that behave differently, and they need an initializer with common behavior. The way I did it, I can write an initializer for, say, videos, with a minimal amount of code, that will do similar things to the existing image uploader.
I'm in the process of writing a node wrapper for RavenDB.
I'm using version 3 but as there are no HTTP docs for it, I've been relying on the 2.0 and 2.5 docs.
In regards to single document operations, I've used this doc page successfully for PUTs, DELETEs and multiple PATCHes to individual documents.
Similarly, I've used this doc page successfully for multiple PUTs and DELETEs of several documents in one HTTP call, but the docs are a bit vague about PATCHing multiple documents in one call.
Under the "Batching Requests" heading, it clearly states it's possible:
Request batching in RavenDB is handled using the '/bulk_docs' endpoint, which accepts an array of operations to execute. The format for the operations is:
method - PUT, PATCH or DELETE.
...
For PUTs, I POST to /bulk_docs:
[
{
Method: 'PUT',
Key: 'users/1',
Document: { username: 'dummy' },
Metadata: { 'Raven-Entity-Type': 'Users' }
},
...
]
For DELETEs, I POST to /bulk_docs:
[
{
Method: 'DELETE',
Key: 'users/1'
},
...
]
For PATCHs, I've tried POSTing the following without any luck:
[
{
Method: 'PATCH',
Key: 'users/1',
Document: {
Type: 'Set',
Name:'username',
Value: 'new-username'
}
},
...
]
and
[
{
Method: 'PATCH',
Key: 'users/1',
Type: 'Set',
Name:'username',
Value: 'new-username'
},
...
]
All I'm getting back is a 500 - Internal Server Error, and without any examples of PATCHing multiple documents on that docs page I'm kind of stuck...
Any help would be appreciated :)
The structure for PATCH is:
[
{
Method: 'PATCH',
Key: 'users/1',
Patches: [{
Type: 'Set',
Name:'username',
Value: 'new-username'
}]
},
...
]
The full structure can be seen here:
https://github.com/ayende/ravendb/blob/master/Raven.Abstractions/Commands/PatchCommandData.cs#L72
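Since you're writing a Node wrapper, a minimal sketch of sending that batch might look like this (the server URL is only a placeholder for your RavenDB instance and database):
const commands = [
  {
    Method: 'PATCH',
    Key: 'users/1',
    Patches: [
      { Type: 'Set', Name: 'username', Value: 'new-username' }
    ]
  }
];

fetch('http://localhost:8080/bulk_docs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(commands)
})
  .then(res => res.json())
  .then(result => console.log(result))
  .catch(err => console.error(err));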
Is there a way to output the JSON string read by my store in Sencha Touch 2?
My store is not reading the records, so I'm trying to see where it went wrong.
My store is defined as follows:
Ext.define("NotesApp.store.Online", {
extend: "Ext.data.Store",
config: {
model: 'NotesApp.model.Note',
storeId: 'Online',
proxy: {
type: 'jsonp',
url: 'http://xxxxxx.com/qa.php',
reader: {
type: 'json',
rootProperty: 'results'
}
},
autoLoad: false,
listeners: {
load: function() {
console.log("updating");
// Clear proxy from offline store
Ext.getStore('Notes').getProxy().clear();
console.log("updating1");
// Loop through records and fill the offline store
this.each(function(record) {
console.log("updating2");
Ext.getStore('Notes').add(record.data);
});
// Sync the offline store
Ext.getStore('Notes').sync();
console.log("updating3");
// Remove data from online store
this.removeAll();
console.log("updated");
}
},
fields: [
{
name: 'id'
},
{
name: 'dateCreated'
},
{
name: 'question'
},
{
name: 'answer'
},
{
name: 'type'
},
{
name: 'author'
}
]
}
});
You may get all the data returned by the server through the proxy, like this:
store.getProxy().getReader().rawData
You can get all the data (javascript objects) returned by the server through the proxy as lasaro suggests:
store.getProxy().getReader().rawData
To get the JSON string of the raw data (the reader should be a JSON reader) you can do:
Ext.encode(store.getProxy().getReader().rawData)
//or if you don't like 'shorthands':
Ext.JSON.encode(store.getProxy().getReader().rawData)
You can also get it by handling the store load event:
// add this in the store config
listeners: {
load: function(store, records, successful, operation, eOpts) {
operation.getResponse().responseText
}
}
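Putting that together, inside the listener you would actually log it, for example (a sketch combining both approaches):
listeners: {
    load: function (store, records, successful, operation, eOpts) {
        // the raw response text exactly as the server returned it
        console.log(operation.getResponse().responseText);
        // the data the reader extracted, re-encoded as a JSON string
        console.log(Ext.encode(store.getProxy().getReader().rawData));
    }
}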
As far as I know, there's no way to explicitly observe your response results if you are using a configured proxy (it's obviously easy if you manually send an Ext.Ajax.request or Ext.JsonP.request).
However, you can still watch your results from your browser's developer tools.
For Google Chrome:
When you start your application, wait until the request has completed, then switch to the Network tab. The highlighted link on the left-side panel is the API URL from which I fetched data. On the right panel, choose Response; the response result will appear there. If you see nothing, it's likely that you've triggered a bad request.
Hope this helps.
Your response JSON should be in the following format for an Ajax request:
{results:[{"id":"1", "name":"note 1"},{"id":"2", "name":"note 2"},{"id":"3", "name":"note 3"}]}
id and name are properties of your model Note.
For JSONP, on the server side, read the value of the 'callback' request parameter; it contains the name of the callback method. Then wrap your result string in a call to that method name and write the response.
The JSON string should then be in the following format:
callbackmethod({results:[{"id":"1", "name":"note 1"},{"id":"2", "name":"note 2"},{"id":"3", "name":"note 3"}]});
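Your qa.php endpoint is PHP, but to illustrate the same callback wrapping in JavaScript, a minimal Node/Express sketch would be:
const express = require('express');
const app = express();

app.get('/qa', (req, res) => {
  const data = { results: [{ id: '1', name: 'note 1' }, { id: '2', name: 'note 2' }] };
  const callback = req.query.callback; // callback method name sent by the JSONP proxy
  res.type('text/javascript');
  res.send(callback + '(' + JSON.stringify(data) + ');');
});

app.listen(3000);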