I'm successfully using yajl-objc along with ASIHTTPRequest in an iPhone project that does network access and pulls down and parses JSON data. ASIHTTPRequest allows gzipped HTTP responses by default, which is great, but I'm using the streaming parser ability of YAJL and it rightfully chokes on gzipped data. I can wait until the HTTP request has finished then un-gzip and parse the response, but I'm going for speed here and would like to parse the gzipped data as it downloads.
Is it possible to un-gzip data on the fly, parse the JSON within, then forget about that chunk of gzipped data?
If this last part could be solved, this setup seems like it would make for a great system:
YAJL is one of the fastest JSON parsers around
ASIHTTPRequest is easy and asynchronous
Response bodies could be gzipped, saving on-the-wire traffic
JSON could be parsed without loading the whole tree into constrained device memory
Any guidance would be greatly appreciated!
YES, it can be done: see this thread on the ASIHTTPRequest mailing list: http://groups.google.com/group/asihttprequest/browse_thread/thread/ee2e44379b181439/7699dd200780cd32#7699dd200780cd32
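For the record, here is a minimal sketch of the idea using zlib directly, assuming the delegate is handed the raw gzipped bytes as they arrive, that a z_stream ivar (_strm) was initialized once with inflateInit2(&_strm, 16 + MAX_WBITS) for gzip framing, and that _jsonParser is a yajl-objc YAJLParser fed via its incremental parse: method (error handling elided for brevity):

    #import <zlib.h>

    // Each network chunk is inflated immediately and the decompressed bytes
    // are handed straight to YAJL's incremental parser; nothing accumulates.
    - (void)request:(ASIHTTPRequest *)request didReceiveData:(NSData *)chunk
    {
        _strm.next_in = (Bytef *)[chunk bytes];
        _strm.avail_in = (uInt)[chunk length];

        int status;
        do {
            uint8_t out[16384];
            _strm.next_out = out;
            _strm.avail_out = sizeof(out);
            status = inflate(&_strm, Z_SYNC_FLUSH);
            if (status == Z_OK || status == Z_STREAM_END) {
                NSUInteger produced = sizeof(out) - _strm.avail_out;
                if (produced > 0) {
                    [_jsonParser parse:[NSData dataWithBytes:out length:produced]];
                }
            }
        } while (status == Z_OK && _strm.avail_out == 0);

        if (status == Z_STREAM_END) inflateEnd(&_strm);
    }

Once a chunk has been inflated and parsed it simply goes out of scope, so memory use stays bounded by the inflate buffer rather than the size of the response.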
I have run into an issue with using API Gateway as a proxy to S3 (for custom authentication), in that it does not handle binary data well (which is a known issue).
I'm usually uploading either .gz or .Z (Unix compress utility) files. As far as I understand it, the data is not maintained due to encoding issues. I can't seem to figure out a way to decode the data back to binary.
Original leading bytes: \x1f\x8b\x08\x08\xb99\xbeW\x00\x03
After passing through API GW: ��9�W�
... Followed by filename and the rest of the data.
One way of getting around this is to set the Content-Encoding header of the PUT request to API GW to 'gzip'. This seems to force API GW to decompress the file before forwarding it to S3.
The same does not work for .Z files created with the Unix compress utility, for which the Content-Encoding would be 'compress'.
Does anyone have any insight into what is happening to the data, to help shed some light on my issue? Also, does anyone know of any possible workarounds to keep my data intact while passing through API GW (or to decode it once it's in S3)?
Obviously I could just access the S3 API directly (or have API GW return a pre-signed URL for accessing the S3 API), but there are a few reasons why I don't want to do that.
I should mention that I don't understand very much at all about encoding - sorry if there are some obvious answers to some of my questions.
It's not exactly an "encoding issue" -- it's the fact that API Gateway just doesn't support binary data ("yet")... so it's going to potentially corrupt binary data, depending on the specifics of the data in question.
Uploading as Content-Encoding: gzip probably triggers decoding in a front-end component that is capable of dealing with binary data (gzip, after all, is a standard encoding and is binary) before passing the request body to the core infrastructure... but you will almost certainly find that this is a workaround that does not consistently deliver correct results, depending on the specific payload. The fact that it works at all seems more like a bug than a feature.
For now, the only consistently viable option is base64-encoding your payload, which increases its size on the wire by 33% (base64 encoding produces 4 bytes of output for every 3 bytes of input), so it's not much of a solution. Base64 + gzip with the appropriate Content-Encoding: gzip should also work, which seems quite a silly suggestion (converting a compressed file into base64, then gzipping the result to try to reduce its size on the wire) but should be consistent with what API Gateway can currently deliver.
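To illustrate the base64 route with a client-side sketch (in Objective-C, to match the rest of this page; the endpoint URL is hypothetical, and the integration behind API Gateway must base64-decode the body before writing it to S3):

    // Base64-encode the already-gzipped file so the request body is plain
    // ASCII text, which API Gateway passes through without corruption.
    NSData *gzData = [NSData dataWithContentsOfFile:@"/tmp/archive.gz"];
    NSData *body = [[gzData base64EncodedStringWithOptions:0]
                    dataUsingEncoding:NSASCIIStringEncoding];

    NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"https://abc123.execute-api.us-east-1.amazonaws.com/prod/upload"]];
    req.HTTPMethod = @"PUT";
    [req setValue:@"text/plain" forHTTPHeaderField:@"Content-Type"];
    req.HTTPBody = body;
    // Send with NSURLSession / NSURLConnection as usual.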
I'm using spray-can and spray-routing to support a REST service that includes an operation to upload files. This operation accepts multipart/form-data and the formFields directive works well.
When I try to use spray's request-chunking support, though, I find that the formFields directive no longer works, because the whole multipart message is chunked, not just the single part that is large.
Has anyone got any advice on how the handle large multipart messages in spray-can?
All I can think of right now is to put the whole request data in a temp file and to use something like org.apache.commons.fileupload.MultipartStream to parse the request.
I have been writing iPhone applications for some time now, sending data to servers and receiving data (via HTTP), without thinking too much about it. Mostly I am theoretically familiar with the process, but the part I am not so familiar with is the HTTP multipart request. I know its basic structure, but the core of it eludes me.
It seems that whenever I am sending something other than plain text (like photos or music), I have to use a multipart request. Can someone briefly explain why it is used and what its advantages are?
If I use it, why is it a better way to send photos?
An HTTP multipart request is an HTTP request that HTTP clients construct to send files and data over to an HTTP Server. It is commonly used by browsers and HTTP clients to upload files to the server.
What it looks like
See Multipart Content-Type
See multipart/form-data
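To make that concrete, here is a minimal hand-written example of a multipart/form-data request uploading one text field and one photo (the boundary value is arbitrary; the server uses it to split the body back into parts):

    POST /upload HTTP/1.1
    Host: example.com
    Content-Type: multipart/form-data; boundary=XyZzY

    --XyZzY
    Content-Disposition: form-data; name="caption"

    Holiday photo
    --XyZzY
    Content-Disposition: form-data; name="photo"; filename="photo.jpg"
    Content-Type: image/jpeg

    (raw binary JPEG bytes)
    --XyZzY--

Each part carries its own headers, so the raw binary photo travels alongside the plain-text caption with no escaping or re-encoding of the image bytes.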
As the official specification says, "one or more different sets of data are combined in a single body". So when photos and music are handled as multipart messages as mentioned in the question, probably there is some plain text metadata associated as well, thus making the request containing different types of data (binary, text), which implies the usage of multipart.
I have found an excellent and relatively short explanation here.
A multipart request is a REST request containing several packed REST requests inside its entity.
I'm working in Xcode 4.3.2, building an app for iOS 5.
I've decided to use SBJson to parse streams of data from our server. I've verified that I'm receiving a valid JSON response from the server. My question concerns the design behind the classes SBJsonStreamParser and the SBJsonParser.
It appears that in SBJsonParser the method "objectWithData" takes the data received from the JSON response and uses the SBJsonStreamParserAccumulator to append the stream of data into a single JSON document. Once the data stream is gathered into one object, it is then parsed by the "parse" method in SBJsonStreamParser.
I've run into several issues when requesting larger JSON documents. The sizes of the responses seem reasonable (specifically, a 9.4 KB response). It appears that SBJsonStreamParser breaks when given a data stream greater than a certain size. The parser succeeds when the response is small (~3 KB) but fails when the response is larger (~10 KB).
I used NSLog to verify that in both cases, pulling a small and a large stream, the methods successfully receive the full JSON document; it looks like [{"id": .... 123}]. I'm convinced that the issue is that the data stream is too long.
I'm wondering if I'm using SBJson incorrectly, or is this simply a limitation of the parser? Is there anything I can configure so that SBJsonStreamParser doesn't throw an error for larger (but reasonable) data streams and continues to parse the full response?
Thanks in advance!
Actually you have the workings of objectWithData: backwards. SBJsonStreamParserAccumulator is used to accumulate the parsed output, not the unparsed data stream.
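If you want genuinely incremental parsing rather than objectWithData:, you drive SBJsonStreamParser yourself through a delegate. A sketch from memory of the SBJson 3.x API follows; check the headers in your version, as names may differ slightly:

    // Setup, in a class that adopts SBJsonStreamParserDelegate:
    self.parser = [[SBJsonStreamParser alloc] init];
    self.parser.delegate = self;

    // Feed each chunk to the parser as it arrives from the connection:
    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        if ([self.parser parse:data] == SBJsonStreamParserError)
            NSLog(@"JSON parse error: %@", self.parser.error);
    }

    // Delegate callbacks fire once a complete top-level document is parsed:
    - (void)parser:(SBJsonStreamParser *)parser foundArray:(NSArray *)array { /* use it */ }
    - (void)parser:(SBJsonStreamParser *)parser foundObject:(NSDictionary *)dict { /* use it */ }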
I have a requirement where a user can upload files present in the app to SharePoint from within the same app.
I tried using SharePoint's CopyIntoItems method (http://schemas.microsoft.com/sharepoint/soap/CopyIntoItems), but it needs the file as a base64-encoded string embedded in the body of the SOAP request. My code crashed on the device when I tried to convert even a 30 MB file to a base64-encoded string, while the same code ran just fine on the simulator.
Is there any alternative way (file streaming, etc.) to upload files to SharePoint? I may have to upload files of up to 500 MB. Is there a more efficient library for converting NSData into a base64-encoded string for large files?
Should I read the file in chunks, convert each chunk into a base64-encoded string, and upload once the complete file has been converted? Any other approaches?
First off, your code probably crashed because it ran out of memory. I would do a loop that reads chunks, converts each one, and pushes it to an open socket. This probably means you need to go to a lower level than NSURLConnection; I have searched for NSURLConnection with chunked upload without much success.
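A sketch of that chunked conversion, using the NSData base64 API available since iOS 7 (older projects used an equivalent category method); outputStream here stands in for whatever open socket or stream you write to. Reading multiples of 3 bytes is the important detail: each chunk then encodes with no '=' padding, so the pieces concatenate into one valid base64 string:

    NSFileHandle *input = [NSFileHandle fileHandleForReadingAtPath:filePath];
    NSUInteger chunkSize = 3 * 64 * 1024; // multiple of 3: no padding per chunk

    NSData *chunk;
    while ((chunk = [input readDataOfLength:chunkSize]).length > 0) {
        NSData *encoded = [chunk base64EncodedDataWithOptions:0];
        // Push the encoded chunk out, then let both buffers go out of
        // scope; only one chunk is ever held in memory at a time.
        NSUInteger written = 0;
        while (written < encoded.length) {
            NSInteger n = [outputStream write:(const uint8_t *)encoded.bytes + written
                                    maxLength:encoded.length - written];
            if (n <= 0) break; // handle the stream error appropriately
            written += (NSUInteger)n;
        }
    }
    [input closeFile];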
Some seem to suggest using ASIHTTPRequest, but looking at its homepage it appears abandoned by the developer, so I can't recommend it.
AFNetworking looks really good: it has blocks support, and I can see from the example on its front page how it could work for you. Look at the streaming request example. Basically, create an NSInputStream that you push chunked data to, and use it in an AFURLConnectionOperation.
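Library aside, the Foundation-level shape of a streamed upload looks like this (the endpoint URL is hypothetical); the point is that the body is an NSInputStream read from disk, so a 500 MB file never sits in RAM all at once:

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"https://sharepoint.example.com/upload"]];
    request.HTTPMethod = @"POST";
    [request setValue:@"application/octet-stream" forHTTPHeaderField:@"Content-Type"];
    // The connection pulls from this stream piece by piece instead of
    // holding the whole body in memory:
    request.HTTPBodyStream = [NSInputStream inputStreamWithFileAtPath:filePath];
    // Hand the request to NSURLConnection, or wrap it in an AFNetworking operation.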