Using Microsoft.Azure.Storage.DataMovement to set metadata on large file uploads - azure-storage

I am using Microsoft.Azure.Storage.DataMovement to upload large files to a storage account from an ASP.NET Web Forms 4.5 application. I don't see a way to set the metadata of the uploaded blob without getting a blob reference first. What is the best way to do it using this class? Blobcloudoptions has a metadata option, but TransferManager.UploadAsync(....) does not.
Any pointer is appreciated.
Regards
Sunny

Actually, the TransferManager.UploadAsync() method does not support setting metadata directly. You can go through the source code for more details.
Instead, we need to get a reference to the blob first, then set the metadata, and then upload it via the TransferManager.UploadAsync() method, like below:
//other code, e.g. getting the blobContainer reference
//get a blob reference
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob.txt");
//set metadata here, before starting the transfer
destBlob.Metadata.Add("xxx", "yyy");
//other code
//then upload the local file to that blob (sourceFilePath is the path of the local file)
await TransferManager.UploadAsync(sourceFilePath, destBlob);
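For completeness, here is a fuller sketch of how that can be wired together end to end. Treat it as a sketch only: the connection string, container name, blob name and local file path are placeholders, and the Microsoft.Azure.Storage.* namespaces below are those of the newer DataMovement package (older versions live under Microsoft.WindowsAzure.Storage.*).

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.DataMovement;

public static async Task UploadLargeFileWithMetadataAsync(string connectionString, string sourceFilePath)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer blobContainer = client.GetContainerReference("mycontainer");
    await blobContainer.CreateIfNotExistsAsync();

    // get the blob reference and set the metadata before the transfer starts
    CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob.txt");
    destBlob.Metadata.Add("xxx", "yyy");

    // optional: tune parallelism and watch progress
    TransferManager.Configurations.ParallelOperations = 16;
    SingleTransferContext context = new SingleTransferContext
    {
        ProgressHandler = new Progress<TransferStatus>(
            p => Console.WriteLine("Bytes uploaded: " + p.BytesTransferred))
    };

    await TransferManager.UploadAsync(sourceFilePath, destBlob, new UploadOptions(), context, CancellationToken.None);
}

Since this is an async method, call it with await from an async page event (or wait on it carefully) in the Web Forms application.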

Related

How to check whether Azure Blob Storage upload was successful?

I'm using an Azure SAS URL to upload a file to a blob storage:
var blockBlob = new Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob(new System.Uri(sasUrl));
blockBlob.UploadFromFile(filePath);
The file exists on my disk, and the URL should be correct since it is automatically retrieved from the Windows Store Ingestion API (and, if I slightly change one character in the URL's signature part, the upload fails with HTTP 403).
However, when checking
var blobs = blockBlob.Container.ListBlobs();
the result is Count = 0, so I'm wondering whether the upload was successful. Unfortunately, the UploadFromFile method (like the UploadFromStream method) has no return type, so I'm not sure how to retrieve the upload's result.
If I try to connect to the SAS URL using Azure Storage Explorer, listing
blob containers fails with the error "Authentication Error. Signature fields not well formed". I tried URL escaping the URL's signature part since that seems to be the reason for that error in some similar cases, but that doesn't solve the problem.
Is there any way to check the status of a blob upload? Has anybody an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) can not be connected to using Azure Explorer?
Please examine the sp field of your SAS. It shows the operations you are authorized to perform on the blob. For example, sp=rw means you can read the blob and write content to it using this SAS; sp=w means you can only write content to the blob using this SAS.
If you have the read right, you can copy the SAS URL to the browser address bar. The browser will download or show the blob content for you.
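As a quick illustration (a sketch only; the SAS URL below is a made-up placeholder), you can pull the sp value out of the query string and check the granted permissions in code:

using System;
using System.Web; // HttpUtility is available in an ASP.NET project

var sasUrl = new Uri("https://myaccount.blob.core.windows.net/mycontainer/myblob.txt?sv=2015-04-05&sp=rw&sig=...");
string permissions = HttpUtility.ParseQueryString(sasUrl.Query)["sp"] ?? "";

Console.WriteLine("Can read:  " + permissions.Contains("r"));
Console.WriteLine("Can write: " + permissions.Contains("w"));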
Is there any way to check the status of a blob upload?
If no exception is thrown from your code, the blob has been uploaded successfully; otherwise, an exception will be thrown.
try
{
blockBlob.UploadFromFile(filePath);
}
catch(Exception ex)
{
//upload failed; inspect or log ex here
}
You can also confirm it using any web debugging proxy tool (e.g. Fiddler) to capture the response message from the storage server. A 201 Created status code will be returned if the blob has been uploaded successfully.
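If you prefer to read that status code from the client library instead of a proxy, one option (a sketch, not part of the original answer; it assumes the same blockBlob and filePath as above) is to pass an OperationContext into the upload and inspect its last result:

var operationContext = new Microsoft.WindowsAzure.Storage.OperationContext();
blockBlob.UploadFromFile(filePath, accessCondition: null, options: null, operationContext: operationContext);

// 201 Created means the final upload request succeeded
Console.WriteLine("Upload returned HTTP " + operationContext.LastResult.HttpStatusCode);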
Has anybody an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) can not be connected to using Azure Explorer?
Azure Storage Explorer only allows us to connect to a storage account using a SAS, or to attach a storage service (blob container, queue, or table) using a SAS. It doesn't allow us to connect to an individual blob item using a SAS.
In the case of a synchronous upload, we can use the exception-based approach and also cross-check blockBlob.Properties.Length. Before the file is uploaded it is -1, and after the upload completes it becomes the size of the uploaded file.
So we can add a check on the blob length, which tells us the state of the upload.
try
{
blockBlob.UploadFromFile(filePath);
if (blockBlob.Properties.Length >= 0)
{
// File uploaded successfully
// You can take any action here.
}
}
catch(Exception ex)
{
//upload failed; inspect or log ex here
}
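Another option (a sketch, not from the original answers) is to ask the service itself after the upload, which avoids relying on client-side state:

// round-trips to the storage service and refreshes blockBlob.Properties
blockBlob.FetchAttributes();
Console.WriteLine("Blob size on server: " + blockBlob.Properties.Length);

// or simply check existence
bool uploaded = blockBlob.Exists();
Console.WriteLine("Blob exists on server: " + uploaded);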

S3 Copy Object with new metadata

I am trying to set the Cache-Control header on all our existing files in the S3 storage by executing a copy to the exact same key but with new metadata. This is supported by the S3 API through the x-amz-metadata-directive: REPLACE header. In the documentation of the S3 API compatibility at https://docs.developer.swisscom.com/service-offerings/dynamic.html#s3-api, the Object Copy method is listed as neither supported nor unsupported.
The copy itself works fine (to another key), but the option to set new metadata does not seem to work with either copying to the same or a different key. Is this not supported by the ATMOS s3-compatible API and/or is there any other way to update the metadata without having to read all the content and write it back to the storage?
I am currently using the Amazon Java SDK (v. 1.10.75.1) to make the calls.
UPDATE:
After some more testing it seems that the issue I am having is more specific. The copy works, and I can successfully change other metadata like Content-Disposition or Content-Type. Only the Cache-Control header is ignored.
As requested here is the code I am using to make the call:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, sharedsecret);
AmazonS3 amazonS3 = new AmazonS3Client(awsCreds);
amazonS3.setEndpoint(endPoint);

// fetch the existing metadata, change only Cache-Control, then copy the object onto itself
ObjectMetadata metadata = amazonS3.getObjectMetadata(bucketName, storageKey).clone();
metadata.setCacheControl("private, max-age=31536000");
CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucketName, storageKey, bucketName, storageKey)
        .withNewObjectMetadata(metadata);
amazonS3.copyObject(copyObjectRequest);
Maybe the Cache-Control header on the PUT (Copy) request to the API is dropped somewhere on the way?
According to the latest ATMOS Programmer's Guide, version 2.3.0, Tables 11 and 12, nothing specifies whether COPY of objects is supported or unsupported.
I've been working with ATMOS for quite some time, and what I believe is that the S3 copy function is somehow internally translated to a sequence of commands using ATMOS object versioning (page 76). So they might translate the Amazon copy operation to "create a version" and then "delete or truncate the old referenced object". Maybe I'm totally wrong (since I don't work for EMC :-)) and they handle it in a different way... but that's how I read the native ATMOS API's documentation.
What you could try to do:
Use the native ATMOS API (which is a bit painful, yes, I know), and then, create a version of the original object (page 76), update the metadata of such version (User Metadata, page 12), and then restore the version to the top-level object (page 131). After that, check if the metadata will be properly returned in the S3 API.
That's my 2 cents. If you decide to try such a solution, post here whether it worked.

Update globalChannelMap in Mirth Connect

I inherited a Mirth Connect (v2.2.1) instance and am learning how it works. I'm now learning how globalChannelMap variables work, and I'm stumped by a misbehaving filter on a source connector.
In theory I can edit a csv text file in the Mirth Connect folders directory to update the globalChannelMap that is called by the filter.
But in practice the csv file is updated yet the source connector filter continues to call a prior globalChannelMap for the txt file. What step am I missing to update the globalChannelMap? Is there a simple way to output the current contents of a globalChannelMap?
You may need to redeploy. If you're seeing that you're using an old global channel map (using calKno's method), it means you need to redeploy the channel.
Channels need to be redeployed any time their code content is changed, be it an internal library (such as a code template), a transformer, or a global channel map.
You can get the map at the beginning of your filter and update it at the end or wherever it makes sense.
//get map
var map = globalChannelMap.get('mapName');
//log map value
logger.info('This is your map content: '+map);
//update map value
globalChannelMap.put('mapName', value);

AWS ObjectMetaData

I read this post: Changing Meta data (last modified) on an S3 object
I am programming in VB.Net. When I try to use this:
Dim iMetADatA As New ObjectMetadata
VS informs me that "ObjectMetadata" is not defined.
The class I am doing this in imports all the AWS namespaces in the header:
Imports Amazon
Imports Amazon.S3
Imports Amazon.S3.Model
Imports Amazon.S3.Transfer
I want to modify (i.e. "copy"/"replace") the meta data date and time after a file is uploaded.
Make sure that you are using the data types specified in the official API spec here: http://docs.aws.amazon.com/sdkfornet/latest/apidocs/html/N_Amazon_S3.htm.
When uploading any object to S3, there is no need to update the metadata (date and time, etc.); the framework updates it automatically. Just as GetObjectMetadataRequest() lets you read the updated object metadata in Java, you can retrieve it the same way here.
For more info please go through the following link: http://docs.aws.amazon.com/AmazonS3/latest/dev/RetrievingObjectUsingNetSDK.html
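If what you actually want is to replace the user metadata on an object that is already in S3 (the copy/replace approach from the linked post), a copy of the object onto itself with the REPLACE metadata directive does that. Here is a minimal sketch against a current AWS SDK for .NET, shown in C# (the same Amazon.S3.Model types are available from VB.Net); the bucket, key and metadata entry are placeholders. Note that the Last-Modified date itself is always set by S3 and cannot be written directly; the in-place copy merely refreshes it.

using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static async Task ReplaceMetadataAsync(IAmazonS3 client, string bucket, string key)
{
    var request = new CopyObjectRequest
    {
        SourceBucket = bucket,
        SourceKey = key,
        DestinationBucket = bucket,
        DestinationKey = key,
        // REPLACE tells S3 to use the metadata on this request instead of copying the old metadata
        MetadataDirective = S3MetadataDirective.REPLACE
    };
    request.Metadata.Add("processed-on", "2016-06-01"); // hypothetical custom metadata entry

    await client.CopyObjectAsync(request);
}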

Save web pages with all their images, CSS, and other resources

I need to download a few web pages for later use in my application, and I can't find an easy way to accomplish this task. I would prefer a solution where I don't need to parse the HTML to get the URLs of the images and other resources, but instead have these downloaded somehow automatically.
OK guys, here is my solution:
created my own cache object, derived from NSURLCache
added to it a "state" enum variable, with the possible states of: SAVING, LOADING, NOTHING
overrode cachedResponseForRequest: to act according to the state
SAVING: created a NSMutableDictionary to store every download request
Downloaded the file in the request to a flat file, added the path to the file to the dictionary as an object, with the URL as the key
LOADING: used this dictionary as they did in this example to load back the stored content: http://cocoawithlove.com/2010/09/substituting-local-data-for-remote.html
set my cache object as the shared cache object using [NSURLCache setSharedURLCache:myCacheObject];
After this, when I want to save something, I set the cache's state to SAVING and load the request in a UIWebView. Later, I set the state to LOADING and load the request in a UIWebView again; if I stored the request previously, my cache will load it from disk.
I think the ASIHTTPRequest framework can be useful for you; try ASIWebPageRequest and see if it supports all the features you need.