Objective-C/Cocoa Chunked File Upload

I'm relatively new to Objective-C/Cocoa development. I'm currently working on a Mac application where I need to upload a file to a web server using HTTP PUT requests. I'd like to break the file into several chunks and stream it to the server, rather than reading the whole file into memory and uploading it in one go.
I have come across several third-party libraries (e.g. ASIHTTPRequest, AFNetworking) which support this functionality out of the box. However, I'd like to manage without third-party code for the time being, due to several constraints of the project.
Any assistance in this regard is greatly appreciated. Thanks in advance :)

If you are just uploading a file, without a multipart MIME wrapper, then I believe you can set up an input stream directly on an NSMutableURLRequest. Getting an NSInputStream for a file on disk is easy, using +[NSInputStream inputStreamWithFileAtPath:]. I've not done this exact thing myself, but I think it will work.
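As a minimal sketch of that approach (the URL, file path, and delegate wiring are placeholders; error handling is omitted):

    // Stream a file to the server with HTTP PUT, without loading it all into memory.
    NSURL *url = [NSURL URLWithString:@"http://example.com/upload/myfile.bin"];
    NSString *filePath = @"/path/to/myfile.bin";

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"PUT"];

    // The URL loading system pulls data from this stream as it needs it.
    [request setHTTPBodyStream:[NSInputStream inputStreamWithFileAtPath:filePath]];

    // Content-Length is not inferred from a body stream, so set it explicitly.
    NSDictionary *attrs = [[NSFileManager defaultManager] attributesOfItemAtPath:filePath
                                                                           error:NULL];
    [request setValue:[NSString stringWithFormat:@"%llu", [attrs fileSize]]
       forHTTPHeaderField:@"Content-Length"];

    // self is assumed to implement the NSURLConnection delegate methods.
    NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                                  delegate:self];

That sends the file as a single streamed PUT. If you want true fixed-size chunks (one PUT per chunk, with an offset parameter your server understands), you can read slices with NSFileHandle's readDataOfLength: instead of handing the request the whole stream.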
If you do end up needing a multipart MIME wrapper, then I'd recommend using a library. It is a total pain to get right, and there are quirks to deal with depending on which OS version you are running.

Related

Somehow send command-line commands on Windows externally and get back the response

Problem: Need to convert local HTML (with local images etc.) to PDF from an AIX box running UniVerse 11.2.5 with System Builder.
Current solution: FTP the HTML file over to a Windows server, which converts in batches and sends the e-mail to the destination.
Proposed solution: Do everything on the AIX box, from converting HTML to PDF to sending the e-mail.
Current problem: Unable to find a way to convert local HTML to PDF on the AIX box. I have tried many different approaches, from installing Python 3 onward, but to no avail.
The only really difficult part of the process is getting the HTML to render into a format that properly lays your HTML out into pages suitable for printing. There is a fair amount of magic between an HTTP GET and clicking Print in a browser window that needs to be accounted for.
I was trying to accomplish something similar many moons ago on AIX, but ran into a skill-level/time wall because I would essentially have had to create a headless browser to render the HTML. It looks like there are now some utilities you might be able to leverage. I found this recently updated question on Super User that actually got me somewhat excited, especially since I don't use AIX anymore, so precompiled binaries and well-understood, easily attainable dependencies are something I can actually have in my life.
https://superuser.com/questions/280552/how-can-i-render-a-website-as-an-image-from-the-shell
Good Luck.
There seem to be several questions rolled into this one item.
Converting HTML to PDF is, in principle, just data manipulation that you could do in BASIC, but writing such code would be a large task. The option you use, sending it to another system, is valid, but it puts more points of failure into the system. I would think you could find code to do it on the AIX box.
Rocket plans on getting MV Python to work on AIX; this will make converting HTML to PDF much easier, since there are a lot of open-source modules.
As for my suggestion of using sockets, that would be if you intend to send the HTML to a service that takes it and returns the PDF document.
i.e.: Is there a web service for converting HTML to PDF?
Once you have the PDF document, you can either store it in a UniVerse type-19 file, or base64-encode it and store it in a UniVerse hashed file.
Hope this helps,
Mike

Send very large file (>> 2 GB) via browser

I have a task to do. I need to build a WCF service that allows a client to import a file into a database using the server back end. To do this, I need to communicate to the server the settings, the events needed to start and configure the import, and, most importantly, the file to import. Now the problem is that these files can be extremely large (much bigger than 2 GB), so it's not possible to send them via browser as they are. The only thing that comes to my mind is to split these files into chunks and send them to the server one by one.
I also have another requirement: I need to be 100% sure that these files are not corrupted, so I also need to implement some sort of policy for error detection and, possibly, recovery.
Do you know if there is some API or DLL that can help me achieve my goals, or is it better to write the code myself? And in that case, what would be the optimal size of the chunks?
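The split-and-verify part is simple whatever the stack. Here is a rough sketch of the idea (in Objective-C, like the rest of this page, but it ports directly; the 4 MB chunk size is an arbitrary placeholder and sendChunk is a hypothetical stand-in for your actual transport):

    #import <Foundation/Foundation.h>
    #import <CommonCrypto/CommonDigest.h>

    static const NSUInteger kChunkSize = 4 * 1024 * 1024; // placeholder size

    void uploadInChunks(NSString *path) {
        NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:path];
        NSData *chunk;
        NSUInteger index = 0;
        while ((chunk = [handle readDataOfLength:kChunkSize]) && [chunk length] > 0) {
            // Hash each chunk so the receiver can verify it independently
            // and re-request only the chunks that arrive corrupted.
            unsigned char digest[CC_MD5_DIGEST_LENGTH];
            CC_MD5([chunk bytes], (CC_LONG)[chunk length], digest);
            // sendChunk(index, chunk, digest); // hypothetical transport call
            index++;
        }
        [handle closeFile];
    }

As for chunk size, there is no single optimal value: it's a trade-off between per-request overhead (favoring larger chunks) and the cost of re-sending a corrupted chunk (favoring smaller ones). A few megabytes is a common starting point.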

Win8 Store app: accessing local storage

I am developing a Win8 Store app which allows users to download different types of files from an online learning platform and store them locally. I am also considering a feature to help users organize these downloaded files by placing them in different folders (based on course name, etc.).
I was using the Documents Library previously. But for every type of file that the user could download, I need to add a file type association, which does not make a lot of sense, since my app wouldn't really be the one to open such files. So which local storage should my app use?
Many thanks in advance.
Kaizhi
Storage access for Windows Store apps is quite restrictive, especially the DocumentsLibrary.
As you have noticed, you need to declare a file type association for every file type you want to read from or write to the DocumentsLibrary. This means your app needs to handle file activations for these types in a meaningful way, which your app probably should not do.
But even if you jump through this hoop, there is another one that is not documented on the MSDN page for the DocumentsLibrary, but "hidden" in a lengthy page about app capability declarations: according to the current rules, you are not allowed to use the DocumentsLibrary for anything but offline access to SkyDrive! Bummer...
So what's left?
You can use SkyDrive or another cloud storage service to put files in a well-known place (which might or might not be somewhere on the hard disk). This is probably both overkill and undesirable in your case.
Or you save the files in the local app storage, provide your own in-app file browser and open the files with their default app. Seems viable to me.
Or, maybe, you can do something with share contracts or other contracts. I don't know much about these yet, but I doubt that they are helpful in your situation.
And that's it...
(Based on my current experience. No guarantee of correctness or completeness.)

Dropbox API - updating file

I'm new to the Dropbox API and I'm trying to figure out if I can manage large file (1-2 GB) updates without having to copy over the whole file.
Something similar to the initial chunked upload. I'd like a way to say: here's a chunk of this file, starting at offset XXXX, and here's the new content. And send only 100 KB or 1 MB, but not the whole file over!
I'm surprised nobody needs something like that, since once you upload a large file, which is pretty easy to do with the chunked upload, one would still have an issue if the file needed updating. Especially if the updates are small!
Anyway, all feedback is appreciated!
That's very much incremental sync. There's a free library that does this, by the name of OpenVCDiff. But when it comes to the Dropbox API, I don't know - I found your post when I was searching for Dropbox here on SO.
The Dropbox API doesn't support any sort of incremental update.

iPad - how should I distribute offline web content for use by a UIWebView in an application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user-selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad (see the unzip sketch after this list).
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
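For option 1, note that iOS has no public built-in unzip API, so a small third-party unzipper is the usual route. A minimal sketch, assuming a library such as SSZipArchive (file and folder names are placeholders):

    #import "SSZipArchive.h" // third-party unzip library (assumption)

    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                          NSUserDomainMask, YES) objectAtIndex:0];
    NSString *zipPath = [docs stringByAppendingPathComponent:@"bundle1.zip"];
    NSString *destination = [docs stringByAppendingPathComponent:@"bundle1"];
    [SSZipArchive unzipFileAtPath:zipPath toDestination:destination];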
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is content creation, and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creating tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third options. If the content is created by you (like levels, etc.), then simply store it on the server, download it as a ZIP, and store it locally. Use Core Data to store an index of the things you've downloaded, like the path of the folder each item is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content; they are meant to hold structured data. And even if they can - I'd not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow too large. If you barely scan through the data, use the file system directly - it may prove faster and cheaper.
The index in Core Data could store all relevant data. You will have very easy and very quick access. Opening a piece of content will load it from the file system, which is quick, cheap, and doesn't strain the index.
Why would you do that? Redirecting your web view to a file:// URL will have the same effect, won't it? (See the sketch after this answer.)
Should be answered by now.
If you don't have control over the content, then use the same approach as above, but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
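Regarding the file:// approach: loading locally stored content into a UIWebView is a one-liner once you know the file's path. A minimal sketch (folder and file names are placeholders):

    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                          NSUserDomainMask, YES) objectAtIndex:0];
    NSString *indexPath = [docs stringByAppendingPathComponent:@"bundle1/index.html"];
    [self.webView loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:indexPath]]];

Relative references (images, CSS, scripts) resolve against the file URL, so keeping the bundle's folder structure intact makes the same HTML work both online and offline.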
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to all bundles. When downloading, download and parse the XML first, then download each resource one by one. This will spare you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the folder structure of the bundle there. This way the content will work both online and offline without changes.
With a little effort, you could even keep track of file versions by including version numbers in the XML file for each resource, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again.
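For illustration, such a manifest might look like this (the element and attribute names here are made up, not a standard format):

    <bundle name="course-101">
      <resource path="index.html" version="3"/>
      <resource path="img/figure1.png" version="1"/>
      <resource path="css/style.css" version="2"/>
    </bundle>

On each sync, compare the downloaded manifest's version numbers against the locally stored copy and fetch only the resources whose numbers changed.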