Given 2 RenderPasses, A and B, and an attachment X accessed by both, if A does a .storeOp=store on X on its last subpass, and B does a .loadOp=load on X on its first subpass, can B read from X as an input attachment?
Furthermore, I can think of 3 ways of reading attachment data from a previous RenderPass.
Using a sampler.
(Potentially) as an input attachment.
As a storage image.
Are there any other ways?
Once a render pass instance has concluded, all attachments cease to be attachments. They're just regular images from that point forward. The contents of the image are governed by the render pass's store operation. Once that store operation is done (subject to the correct use of dependencies), the image holds the data the store operation produced.
So there is no such thing as an attachment "from a previous RenderPass". There is merely an image and its data. How that image got its data (again, subject to the correct use of dependencies) is irrelevant to how you're going to use it now. The data is there, and it can be accessed in any way that any image can be accessed, subject only to the restrictions you choose to impose.
So if an image has some data, and you use it as an attachment with a load operation of load, the attachment will contain whatever data the image held before it became an attachment, regardless of how that data got there. That's how load operations work.
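To make that concrete, here is a minimal illustrative sketch of the two attachment descriptions involved, written as plain Python dictionaries that mirror the relevant VkAttachmentDescription fields (the enum names are real Vulkan values, but the dict form is purely illustration, not a real binding):

# Illustrative only: plain dicts mirroring VkAttachmentDescription fields.
# Render pass A writes X and keeps the result; render pass B reloads it.
attachment_x_in_pass_a = {
    "format": "VK_FORMAT_R8G8B8A8_UNORM",
    "loadOp": "VK_ATTACHMENT_LOAD_OP_CLEAR",    # A initializes X itself
    "storeOp": "VK_ATTACHMENT_STORE_OP_STORE",  # keep X's data after A ends
    "finalLayout": "VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL",
}
attachment_x_in_pass_b = {
    "format": "VK_FORMAT_R8G8B8A8A_UNORM".replace("A_U", "_U"),  # same format as in A
    "loadOp": "VK_ATTACHMENT_LOAD_OP_LOAD",     # pick up whatever data X holds
    "storeOp": "VK_ATTACHMENT_STORE_OP_STORE",
    "initialLayout": "VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL",
}
# An external subpass dependency (or pipeline barrier) between A and B is
# still required so that A's store is made available and visible to B's load.

With that dependency in place, B can read X as an input attachment, a sampled image, or a storage image alike, because by that point X is just an image with data.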
I understand that in order to upload a file to Amazon S3 using Multipart, the instructions are here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html
How do I go about replacing the bytes (say, in the range 4-1523) of an uploaded file? Do I need to use Multipart Upload to achieve this, or do I fire a REST call with the range specified in the HTTP header?
Appreciate any advice.
Objects in S3 are immutable.
If it's a small object, you'll need to upload the entire object again.
If it's an object over 5MB in size, then there is a workaround that allows you to "patch" a file, using a modified approach to the multipart upload API.
Background:
As you know, a multipart upload allows you to upload a file in "parts," with a minimum part size of 5 MB and a maximum part count of 10,000.
However, a multipart "upload" doesn't mean you have to "upload" all the data again if some or all of it already exists in S3 and you can address it.
PUT part/copy allows you to "upload" the individual parts by specifying octet ranges in an existing object. Or more than one object.
Since uploads are atomic, the "existing object" can be the object you're in the process of overwriting: it remains unharmed and in place until you actually complete the multipart upload.
But there appears to be nothing stopping you from using the copy capability to provide the data for the parts you want to leave the same, avoiding the actual upload, and then using a normal PUT part request to upload the parts whose content you want to change.
So, while not a byte-range patch with granularity down to a single octet, this can be useful for emulating an in-place modification of a large file. Examples of valid "parts" would be replacing a minimum 5 MB chunk, on a 5 MB boundary, for files smaller than 50 GB, or replacing a minimum 500 MB chunk on a 500 MB boundary for objects up to 5 TB, with the minimum part size varying between those two extremes, because of the requirement that a multipart upload have no more than 10,000 parts. The catch is that a part must start at an appropriate offset, and you need to replace the whole part.
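As a concrete illustration of a single "copy part" call, here is a minimal sketch using boto3, the Python AWS SDK (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "large-object.bin"  # hypothetical names

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)

# "Upload" part 1 by copying the first 5 MB of the existing object,
# rather than re-sending those bytes over the wire. The source can be
# the very object this multipart upload will eventually overwrite.
part = s3.upload_part_copy(
    Bucket=bucket, Key=key,
    UploadId=mpu["UploadId"],
    PartNumber=1,
    CopySource={"Bucket": bucket, "Key": key},
    CopySourceRange="bytes=0-5242879",  # inclusive octet range, 5 MB
)
print(part["CopyPartResult"]["ETag"])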
Michael's answer explains the background of the issue well. Here are the actual steps to be performed to achieve this, in case you're wondering (sketched in code after the list):
1. List object parts using ListParts
2. Identify the part that has been modified
3. Start a multipart upload
4. Copy the unchanged parts using UploadPartCopy
5. Upload the modified part
6. Finish the upload to save the modification
Skip step 2 if you already know which part has to be changed.
Tip: each part has an ETag, which is the MD5 hash of that part. This can be used to verify whether a particular part has changed.
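Putting the steps together, a sketch in boto3 (Python) might look like the following. It assumes a roughly 15 MB object split into three 5 MB parts, of which only part 2 changes; all names and sizes are hypothetical:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "large-object.bin"  # hypothetical names
PART = 5 * 1024 * 1024                         # 5 MB part size

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
uid = mpu["UploadId"]
parts = []

# Part 1: unchanged -- copy it in place from the existing object.
r = s3.upload_part_copy(Bucket=bucket, Key=key, UploadId=uid, PartNumber=1,
                        CopySource={"Bucket": bucket, "Key": key},
                        CopySourceRange=f"bytes=0-{PART - 1}")
parts.append({"PartNumber": 1, "ETag": r["CopyPartResult"]["ETag"]})

# Part 2: modified -- upload the new 5 MB of content.
with open("new-part-2.bin", "rb") as f:  # hypothetical local file
    r = s3.upload_part(Bucket=bucket, Key=key, UploadId=uid,
                       PartNumber=2, Body=f.read())
parts.append({"PartNumber": 2, "ETag": r["ETag"]})

# Part 3: unchanged -- copy the remainder of the existing object.
r = s3.upload_part_copy(Bucket=bucket, Key=key, UploadId=uid, PartNumber=3,
                        CopySource={"Bucket": bucket, "Key": key},
                        CopySourceRange=f"bytes={2 * PART}-{3 * PART - 1}")
parts.append({"PartNumber": 3, "ETag": r["CopyPartResult"]["ETag"]})

# Completing the upload atomically replaces the object.
s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=uid,
                             MultipartUpload={"Parts": parts})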
The multipart upload overview documentation has, in the Multipart Upload Listings section, the following warning:
Note
Only use the returned listing for verification. You should not use the result of this listing when sending a complete multipart upload request. Instead, maintain your own list of the part numbers you specified when uploading parts and the corresponding ETag values that Amazon S3 returns.
Why?
Why I ask: Let's say I want to support resuming an upload that is interrupted. Doing so means knowing what remains to be uploaded, and therefore what already was uploaded. Knowing this is simpler if I may disregard the above warning. S3 is persisting the list of already-uploaded parts. I can obtain it from List Parts.
Whereas if I heed that warning, instead I'd need to intercept break or kill signals and persist the uploaded parts list locally. Although that's feasible, it seems silly to do this if S3 already has the list.
Furthermore, the warning says to use List Parts "only for verification". OK. Let's say I persist my own list, and compare it to List Parts. If they do not match, what am I going to do? I'm going to believe List Parts -- if S3 doesn't think it has a part, of course I'm going to upload it again. Therefore if List Parts is the ultimate authority, why not simply use it in the first place, and use it alone?
If they do not match, what am I going to do? I'm going to believe List Parts -- if S3 doesn't think it has a part, of course I'm going to upload it again.
You're missing the point of the warning.
It's not so much about whether parts were received. It's about whether they were received intact.
When you complete a multipart upload, you have to send a list of the parts and their etags. The etags are the hex md5sum of each part.
The lazy and careless way to complete a multipart upload would be to blindly submit the etags of the parts by just reading them from the "list" operation.
That is what they are warning against.
The correct way is to use your locally-created list, based on what you think S3 should have received and what you think the etag of each part should be, calculated from the local file.
If you are resuming an upload that was interrupted, you should go back and compare the parts already uploaded (by re-reading and re-checksumming the parts of the local file) against the checksums S3 has calculated for the parts already stored (as returned by the list operation). Then either resend any incorrect or missing parts, or abandon the upload, because the local file may have changed if one or more parts doesn't match your local calculation.
Additionally, in the interest of data integrity, you should send the MD5 of each part with the individual part upload, base64-encoded, in a Content-MD5 header, since this causes S3 to refuse to accept a part that was corrupted in any way during the upload.
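A short boto3 (Python) sketch of this verify-locally approach, where the bucket, key, and upload id are placeholders: compute each part's MD5 yourself, send it as Content-MD5 so S3 rejects corrupted uploads, record your own ETag list, and use List Parts only to cross-check when resuming.

import base64
import hashlib
import boto3

s3 = boto3.client("s3")
# Hypothetical names: adjust to your bucket, key, and in-progress upload.
bucket, key, uid = "my-bucket", "large-object.bin", "existing-upload-id"

def upload_verified_part(data, part_number):
    # Send our own MD5 so S3 rejects the part if it arrives corrupted.
    md5 = hashlib.md5(data)
    r = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=uid, PartNumber=part_number,
        Body=data,
        ContentMD5=base64.b64encode(md5.digest()).decode("ascii"),
    )
    # A part's ETag is the hex MD5 of the part. Record OUR locally
    # computed value rather than trusting a later listing.
    expected = '"%s"' % md5.hexdigest()
    assert r["ETag"] == expected, "S3 stored something other than we sent"
    return {"PartNumber": part_number, "ETag": expected}

# When resuming, use List Parts only to cross-check: re-read and
# re-checksum the local file's parts, and re-send any part whose local
# MD5 differs from what S3 reports (or abandon if the file changed).
stored = {p["PartNumber"]: p["ETag"]
          for p in s3.list_parts(Bucket=bucket, Key=key,
                                 UploadId=uid).get("Parts", [])}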
I am trying to save my NSData using writeImageDataToSavedPhotosAlbum.
My NSData size is 49894, and I saved it using writeImageDataToSavedPhotosAlbum. If I then read the saved image's raw data bytes back using ALAssetsLibrary, I get a size of 52161.
I expected both to be the same. Can somebody tell me what is going wrong?
The link below also does not provide a proper solution.
saving image using writeImageDataToSavedPhotosAlbum modifies the actual image data
You cannot and should not rely on the size: firstly because you don't know what the private implementation does, and secondly because the image data is supplied with metadata (and if you don't supply metadata, a number of default values will be applied).
You can check what the metadata contains for the resulting asset and see how it differs from the metadata you supplied.
If you need to save exactly the same bytes, and/or you aren't saving valid image files, then you should not use the photo library. Instead, save the data to disk in your app sandbox, either in the Documents or Library directory.
I have a JSON feed which contains URLs for images. I am using NSURLConnection to download the JSON feed and extract the URLs. I want to download all the images asynchronously. I subclassed UIImage and sent that class a URL, which it downloads one image at a time, in an asynchronous manner.
First, is that a good way to do it? Second, I'd like to show four images at a time. Shouldn't I download every set of four together instead of downloading one by one?
My second concern is that I have two NSURLConnections. That's probably bad. Should I use the very same NSURLConnection to download the JSON feed and at the same time get the images?
I am trying to display four images at a time, with a next button that displays the next four on the next line.
I am not sure UIImage is meant to be subclassed. For good design practice, have a look at the TopPaid sample code from Apple, which shows how to properly download a feed and then asynchronously download images. Take a close look at the IconDownloader class from that project, which handles image downloading and notifies its delegate when it's finished.
As far as I know, there is no problem with having multiple NSURLConnections at a time. You might run into trouble if the number of connections becomes very large, because this could saturate the number of open file descriptors allowed on the iPhone, or, more likely, trigger a memory warning. In your case, with only 2 connections, you don't have any problem.
I am trying to establish the best practice for handling the creation of child objects when the parent object is incomplete or doesn't yet exist in a web application. I want to handle this in a stateless way, so in-memory objects are out.
For example, say we have a bug tracking application.
A Bug has a title and a description (both required) and any number of attachments. So the "Bug" is the parent object with a list of "Attachment" children.
So you present a page with a title input, a description input, and a file input to add an attachment. People then add attachments, but we haven't created the parent Bug yet.
How do you handle persisting the added attachments?
Obviously we have to keep track of the attachments added, but at this point we haven't persisted the parent "Bug" object to attach the "Attachment" to.
Create the incomplete bug and define a process for handling incompleteness. (Wait X minutes/hours/days, then delete it/email someone/accept it as it is.) Even without a title or description, knowing that a problem occurred and the information in the attachment is potentially useful. The attachment may include a full description, but the user just put it somewhere other than you'd intended. Or it may only contain scattered data points which are meaningless on their own, but could corroborate another user's report.
I would normally just create the Bug object and its child, the Attachment object, within the same HTTP response after the user has submitted the form.
If I'm reading you right, the user input consists of a single form with the aforementioned fields for bug title, description, and the attached file. After the user fills out these fields (including the selection of a file to upload), then clicks on Submit, your application receives all three of these pieces of data simultaneously, as POST variables in the HTTP request. (The attachment is just another bit of POST data, as described in RFC 1867.)
From your application's end, depending on what kind of framework you are using, you will probably be given a filename pointing to the location of the uploaded file in some suitable temporary directory. E.g., with Perl's CGI module, you can do:
use strict;
use warnings;
use CGI;

my $query = CGI->new;
print "Bug title: " . $query->param("title") . "\n";
print "Description: " . $query->param("description") . "\n";
# param() on a file-upload field returns the client-side filename;
# tmpFileName() returns the server-side temporary file CGI.pm created.
print "Path to uploaded attachment: " . $query->tmpFileName(scalar $query->param("attachment")) . "\n";
to obtain the uploaded file's temporary location (the file data sent through the form by your user is automatically saved in a temporary file for your convenience), along with its metadata. Since you have access to both the textual field data and the file attachment simultaneously, you can create your Bug and Attachment objects in whatever order you please, without needing to persist any incomplete data across HTTP responses.
Or am I not understanding you here?
In this case I would consider storing the attachments in some type of temporary storage, be it Session State, a temp directory on the file system, or perhaps a database. Once the bug has been saved, I would then copy the attachments to their actual place.
Be careful with session state, though: if you let users upload large attachments, you could run into memory issues, depending on your environment.
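As a rough sketch of the temp-directory variant, using Python/Flask purely for illustration (the route names, paths, and save_bug helper are all hypothetical): stage each upload under a per-draft id, then promote the whole folder once the Bug row exists.

import os
import shutil
import uuid
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
STAGING = "/tmp/bug-attachments"   # hypothetical staging area
FINAL = "/var/app/attachments"     # hypothetical permanent store

def save_bug(title, description):
    # Stub: persist the Bug row and return its new id (hypothetical).
    return uuid.uuid4().hex

@app.post("/attachments/<draft_id>")
def stage_attachment(draft_id):
    # Attachments arrive before the Bug exists; park them under a draft id.
    folder = os.path.join(STAGING, draft_id)
    os.makedirs(folder, exist_ok=True)
    f = request.files["attachment"]
    f.save(os.path.join(folder, secure_filename(f.filename)))
    return "", 204

@app.post("/bugs")
def create_bug():
    # The Bug is persisted first; then staged attachments are promoted.
    bug_id = save_bug(request.form["title"], request.form["description"])
    src = os.path.join(STAGING, request.form["draft_id"])
    if os.path.isdir(src):
        shutil.move(src, os.path.join(FINAL, bug_id))
    return "", 201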
One approach I saw before: as soon as the user opens the new bug form, a new bug is generated in your database. Depending on your app, this may or may not be a good thing. If you're collecting data from users, for example, this is useful, as you get some intelligence even if they fail to enter the data and leave your site. You still know they started the process, along with whatever else you collected, like the user agent, etc.