AWS ObjectMetadata - VB.NET

I read this post: Changing Meta data (last modified) on an S3 object
I am programming in VB.NET. When I try to use this:
Dim iMetADatA As New ObjectMetadata
VS informs me that "ObjectMetadata" is not defined.
The class I am doing this in has all the AWS imports at the top:
Imports Amazon
Imports Amazon.S3
Imports Amazon.S3.Model
Imports Amazon.S3.Transfer
I want to modify (i.e. "copy"/"replace") the metadata date and time after a file is uploaded.

Make sure that you are using the data types specified in the official API reference: http://docs.aws.amazon.com/sdkfornet/latest/apidocs/html/N_Amazon_S3.htm.

When you upload an object to S3, there is no need to set the date and time metadata yourself; S3 updates it automatically. You can then read the updated object metadata back, for example with GetObjectMetadataRequest() in Java.
For more information, see: http://docs.aws.amazon.com/AmazonS3/latest/dev/RetrievingObjectUsingNetSDK.html
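For reference, here is a minimal Java sketch of reading that metadata back, assuming the AWS SDK for Java v1; the bucket and key names are placeholders, not taken from the original question:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class ReadS3Metadata {
    public static void main(String[] args) {
        // placeholder names, purely for illustration
        String bucketName = "my-bucket";
        String key = "my-file.txt";

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // fetch the metadata S3 stores for the object after upload
        ObjectMetadata metadata = s3.getObjectMetadata(
                new GetObjectMetadataRequest(bucketName, key));

        // Last-Modified is maintained by S3 itself and reflects the upload time
        System.out.println("Last-Modified: " + metadata.getLastModified());
        System.out.println("Content-Length: " + metadata.getContentLength());
    }
}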

Related

Using Microsoft.Azure.Storage.DataMovement to set metadata when uploading large files

I am using Microsoft.Azure.Storage.DataMovement to upload large files to a Storage account in an ASP.NET Web Forms 4.5 application. I don't see a way to set the metadata of the uploaded blob through the upload options without getting a blob reference first. What is the best way to do it using this class? Blobcloudoptions has a metadata option, but I am uploading with TransferManager.UploadAsync(....).
Any pointer is appreciated.
Regards
Sunny
Actually, the TransferManager.UploadAsync() method does not support setting metadata directly. You can go through the source code for more details.
Instead, we need to get a reference to the blob first, then set the metadata, and then upload it via the TransferManager.UploadAsync() method, like below:
//other code
//get a blob reference
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob.txt");
//set metadata on the reference before the transfer
destBlob.Metadata.Add("xxx", "yyy");
//other code
//then upload the local file to that reference (sourceFilePath is your local file path)
await TransferManager.UploadAsync(sourceFilePath, destBlob);

Updated JSON file is not being read at runtime

Team,
I have a service to register a user with certain data, along with a unique mail id and phone number, in a JSON file that is used as the request body (for example: registerbody.json).
Before the POST call I generate a unique mail id and phone number and update those fields in the same JSON file (registerbody.json), which is in the same folder where the feature file is located. I can see that the file is updated with the required data at runtime.
I used the read() method and performed the POST request.
Surprisingly, the read() method does not pick up the updated JSON file; instead it reads the old data in registerbody.json.
Do you have any idea why it is picking up old data even though the file has been updated with the latest information?
Please assist me with this.
Karate uses the Java classpath, which is typically target/test-classes. So if you edit a file in src/test/java Karate won't see it unless it is copied. This copying is automatically done when you build / compile your code.
My suggestion is to use target/ as a temp folder, and then you can read using the file: prefix:
* def payload = read('file:some.json')
Before the POST call I generate a unique mail id and phone number and update the same JSON file (registerbody.json)
You are making a big mistake here: Karate specializes in updating JSON based on variables. I suggest you take 5 minutes and read this part of the docs VERY carefully: https://github.com/intuit/karate#reading-files
Especially the part about embedded expressions: https://github.com/intuit/karate#embedded-expressions
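As a rough sketch of that approach (the field names, values, and URL below are assumptions for illustration, not taken from the original project): keep registerbody.json unchanged, with embedded expressions such as { "email": "#(email)", "phone": "#(phone)" }, and let the feature file supply fresh values at runtime:
* def email = 'user' + java.lang.System.currentTimeMillis() + '@test.com'
* def phone = '9' + java.lang.System.currentTimeMillis()
* def payload = read('registerbody.json')
Given url 'https://example.com/register'
And request payload
When method post
Then status 200
With this approach the file on disk never has to be rewritten, so the classpath-vs-target copying issue goes away entirely.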

Update BigQuery dataset access from Java

We have a requirement to give a particular user group access to a BigQuery dataset that contains views created by Java code. I found that the datasets.patch method can help me do it, but I am not able to find documentation on what needs to be passed in the HTTP request.
You can find the complete documentation on how to update BigQuery dataset access controls in the documentation page linked. Given that you are already creating the views in your dataset programmatically, I would advise that you use the BigQuery client library, which may be more convenient than performing the API call to the datasets.patch method. In any case, if you are still interested in calling the API directly, you should provide the relevant portions of a dataset resource in the body of the request.
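For example, a patch body along these lines should work (the entries below are illustrative; note that sending the access property replaces the whole list, so include the existing entries you want to keep):
{
  "access": [
    { "role": "OWNER", "userByEmail": "existing-owner@example.com" },
    { "role": "READER", "groupByEmail": "your-group@example.com" }
  ]
}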
The first link I shared provides a good example of updating dataset access using the Java client libraries, but in short, this is what you should do:
public List<Acl> updateDatasetAccess(DatasetInfo dataset) {
    // Make a copy of the ACLs in order to modify them (adding the required group)
    List<Acl> previousACLs = dataset.getAcl();
    ArrayList<Acl> ACLs = new ArrayList<>(previousACLs);
    // For a Google group, Acl.Group could be used here instead of Acl.User
    ACLs.add(Acl.of(new Acl.User("your_group@gmail.com"), Acl.Role.READER));
    DatasetInfo.Builder builder = dataset.toBuilder();
    builder.setAcl(ACLs);
    // update() applies the new ACL list and returns the updated dataset
    return bigquery.update(builder.build()).getAcl();
}
EDIT:
The dataset object can be defined as follows:
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
Dataset dataset = bigquery.getDataset(DatasetId.of("YOUR_DATASET_NAME"));
Take into account that if you do not specify credentials when constructing the client object bigquery, the client library will look for credentials in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
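If you prefer to pass credentials explicitly instead of relying on that environment variable, a sketch along these lines should work with the google-cloud-bigquery client library (the key file path and project id are placeholders):
// assumes com.google.auth.oauth2.GoogleCredentials and the com.google.cloud.bigquery package
static BigQuery createClient() throws IOException {
    GoogleCredentials credentials = GoogleCredentials.fromStream(
            new FileInputStream("/path/to/service-account-key.json")); // placeholder path
    return BigQueryOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId("your-project-id") // placeholder project id
            .build()
            .getService();
}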

S3 Copy Object with new metadata

I am trying to set the Cache-Control header on all our existing files in the S3 storage by executing a copy to the exact same key but with new metadata. This is supported by the S3 API through the x-amz-metadata-directive: REPLACE header. In the documentation of the S3 API compatibility at https://docs.developer.swisscom.com/service-offerings/dynamic.html#s3-api the Object Copy method is listed as neither supported nor unsupported.
The copy itself works fine (to another key), but the option to set new metadata does not seem to work with either copying to the same or a different key. Is this not supported by the ATMOS s3-compatible API and/or is there any other way to update the metadata without having to read all the content and write it back to the storage?
I am currently using the Amazon Java SDK (v. 1.10.75.1) to make the calls.
UPDATE:
After some more testing it seems that the issue I am having is more specific. The copy works and I can change other metadata like Content-Disposition or Content-Type successfully. Just the Cache-Control is ignored.
As requested here is the code I am using to make the call:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, sharedsecret);
AmazonS3 amazonS3 = new AmazonS3Client(awsCreds);
amazonS3.setEndpoint(endPoint);
ObjectMetadata metadata = amazonS3.getObjectMetadata(bucketName, storageKey).clone();
metadata.setCacheControl("private, max-age=31536000");
CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucketName, storageKey, bucketName, storageKey).withNewObjectMetadata(metadata);
amazonS3.copyObject(copyObjectRequest);
Maybe the Cache-Control header on the PUT (Copy) request to the API is dropped somewhere on the way?
According to the latest ATMOS Programmer's Guide, version 2.3.0, Tables 11 and 12, there is nothing that specifies whether COPY of objects is supported or unsupported.
I've been working with ATMOS for quite some time, and what I believe is that the S3 copy function is somehow internally translated to a sequence of commands using the ATMOS object versioning (page 76). So, they might translate the Amazon copy operation to "create a version", and then, "delete or truncate the old referenced object". Maybe I'm totally wrong (since I don't work for EMC :-)) and they handle that in a different way... but, that's how I see through reading the native ATMOS API's documentation.
What you could try to do:
Use the native ATMOS API (which is a bit painful, yes, I know): create a version of the original object (page 76), update the metadata of that version (User Metadata, page 12), and then restore the version to the top-level object (page 131). After that, check whether the metadata is properly returned through the S3 API.
That's my 2 cents. If you decide to try that approach, post here whether it worked.

Uploading files using HTML5 FormData in Dojo (without using XMLHttpRequest)

I want to upload files using the FormData object (HTML5) in Dojo without using XMLHttpRequest.
I am using dojo.xhrPost to upload files.
Please post your ideas/thoughts and experience.
Thanks
Mathirajan S
Based on your comment I am assuming you do want to use XHR (which would make sense given that FormData is part of the XHR2 spec).
dojo/request/xhr (introduced in Dojo 1.8) supports passing a FormData object via the data property of the options object, so that may get you what you want.
require(["dojo/request/xhr"], function (request) {
    request.post(url, {
        data: formdataObjectHere
        // and potentially other options...
    }).then(function (response) {
        // handle the response here
    });
});
The legacy dojo/_base/xhr module does not explicitly support XHR2, but it does lean on dojo/request/xhr now, so it might end up working anyway, but no guarantees there.
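If you do stay on the legacy dojo.xhrPost API, the call you would be attempting looks roughly like this (no guarantees, as noted above, and the handler bodies are just placeholders):
dojo.xhrPost({
    url: url,
    postData: formdataObjectHere, // relies on the underlying XHR accepting a FormData body
    load: function (response) {
        // handle the response here
    },
    error: function (err) {
        // handle the error here
    }
});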
More information on dojo/request/xhr can be found in the Reference Guide.