How to save and get plain text data in an Amazon AWS S3 bucket using ASP.NET MVC? - asp.net-mvc-4

I am trying to save plain text data to an AWS S3 bucket using ASP.NET MVC. Can you help me achieve this?

Save and get data in an AWS S3 bucket in ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
// Code to save data to AWS. Note: you may get an "Access Denied" error; to fix it, check your AWS account and grant read and write rights on the bucket.
Add these namespaces from the AWSSDK.S3 NuGet package:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    // Simple object put
    PutObjectRequest request = new PutObjectRequest()
    {
        ContentBody = "put your plain text here",
        ContentType = "text/plain",
        BucketName = "put your bucket name here",
        // Use a unique key to identify your data, e.g. the primary key of the record in your database.
        Key = "1"
    };
    PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
    // Handle or log the exception here.
}
Now go to your AWS account and check the bucket; you will find the data stored under the key "1". Note: if you get any other issue, please ask a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    GetObjectRequest request = new GetObjectRequest()
    {
        BucketName = bucketName,
        // "1" because we passed 1 as the unique key when saving the data to the S3 bucket.
        Key = "1"
    };
    string vccEncryptedData;
    using (GetObjectResponse response = client.GetObject(request))
    using (StreamReader reader = new StreamReader(response.ResponseStream))
    {
        vccEncryptedData = reader.ReadToEnd();
    }
}
catch (AmazonS3Exception)
{
    throw;
}
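If it helps to see the whole thing in MVC terms, here is a minimal sketch of two controller actions that wrap the save and get snippets above. The controller name, action names, and hard-coded credential strings are assumptions for illustration; in a real app you would load credentials from configuration.
// Minimal sketch: MVC controller actions wrapping the save and get snippets above.
// Controller/action names, bucket name, and credentials are placeholders/assumptions.
using System.IO;
using System.Web.Mvc;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public class S3TextController : Controller
{
    private const string BucketName = "put your bucket name here";

    private static AmazonS3Client CreateClient()
    {
        var credentials = new Amazon.Runtime.BasicAWSCredentials("awsKey", "awsSecretKey");
        return new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    }

    [HttpPost]
    public ActionResult Save(string id, string text)
    {
        using (var client = CreateClient())
        {
            client.PutObject(new PutObjectRequest
            {
                BucketName = BucketName,
                Key = id,            // unique key, e.g. a database primary key
                ContentBody = text,
                ContentType = "text/plain"
            });
        }
        return Content("Saved");
    }

    [HttpGet]
    public ActionResult Load(string id)
    {
        using (var client = CreateClient())
        using (var response = client.GetObject(new GetObjectRequest { BucketName = BucketName, Key = id }))
        using (var reader = new StreamReader(response.ResponseStream))
        {
            return Content(reader.ReadToEnd(), "text/plain");
        }
    }
}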

Related

JMeter - How to copy files from one AWS S3 bucket to another bucket?

I have tar.zip files placed in the newbucket location on AWS S3. I have a script which will cut the file and place it in another S3 bucket. Every time, I need to upload the files from local to newbucket using a JSR223 PreProcessor. Can I copy a file in S3 from one bucket to another bucket instead?
I think the "official" way is to use the AWS CLI in general and the aws s3 sync command in particular:
aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET
The command can be kicked off either from a JSR223 Sampler or from the OS Process Sampler.
If you prefer doing this programmatically, check out the Copy an Object Using the AWS SDK for Java article; the code snippet, just in case:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

import java.io.IOException;

public class CopyObjectSingleOperation {
    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String sourceKey = "*** Source object key *** ";
        String destinationKey = "*** Destination object key ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Copy the object into a new object in the same bucket.
            CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName, sourceKey, bucketName, destinationKey);
            s3Client.copyObject(copyObjRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
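Since the original question in this thread is about C#/ASP.NET, here is a rough, hedged equivalent of the Java snippet above using the AWS SDK for .NET. The bucket names, keys, and region are placeholders, not values from the question.
// Hedged C# equivalent of the Java copy snippet above (AWS SDK for .NET).
// Bucket names, keys, and region are placeholders/assumptions.
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3CopyExample
{
    public static void CopyBetweenBuckets()
    {
        using (var client = new AmazonS3Client(RegionEndpoint.APSouth1))
        {
            var request = new CopyObjectRequest
            {
                SourceBucket = "source-bucket-name",
                SourceKey = "source/object/key",
                DestinationBucket = "destination-bucket-name",
                DestinationKey = "destination/object/key"
            };
            // Copies the object server-side; nothing is downloaded to the client.
            client.CopyObject(request);
        }
    }
}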

How to download a file from an S3 private bucket without the AWS CLI

Is it possible to download a file from AWS S3 without the AWS CLI? On my production server I need to download a config file which is in an S3 bucket.
I was thinking of having AWS Systems Manager run a script that would download the config (YAML files) from S3. But we do not want to install the AWS CLI on the production machines. How can I go about this?
You would need some sort of program to call the Amazon S3 API to retrieve the object. For example, a PowerShell script (using AWS Tools for Windows PowerShell) or a Python script that uses the AWS SDK.
You could alternatively generate an Amazon S3 pre-signed URL, which would allow a private object to be downloaded from Amazon S3 via a normal HTTPS call (eg curl). This can be done easily using the AWS SDK for Python, or you could code it yourself without using libraries (it's a bit more complex).
In all examples above, you would need to provide the script/program with a set of IAM Credentials for authenticating with AWS.
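Since the rest of this thread is in C#, the pre-signed URL option mentioned above can also be done with the AWS SDK for .NET. A minimal sketch, where the bucket name, key, and expiry time are placeholders:
// Minimal sketch: generate a pre-signed URL for a private S3 object so it can be
// downloaded with a plain HTTPS GET (e.g. curl) without the AWS CLI.
// Bucket name, key, region, and expiry time are assumptions for illustration.
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class PreSignedUrlExample
{
    public static string GetDownloadUrl()
    {
        using (var client = new AmazonS3Client(RegionEndpoint.APSouth1))
        {
            var request = new GetPreSignedUrlRequest
            {
                BucketName = "your-private-bucket",
                Key = "config/app-config.yaml",
                Verb = HttpVerb.GET,
                Expires = DateTime.UtcNow.AddMinutes(15)
            };
            return client.GetPreSignedURL(request);
        }
    }
}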
Just adding notes for any C# lovers who want to solve this problem with .NET.
First, write the (C#) code to download the private file as a string:
// Requires the Amazon.S3 and Amazon.S3.Transfer namespaces (AWSSDK.S3 NuGet package).
public string DownloadPrivateFileS3(string fileKey)
{
    string accessKey = "YOURVALUE";
    string accessSecret = "YOURVALUE";
    string bucket = "YOURVALUE";
    // The third argument is the bucket's region, e.g. RegionEndpoint.GetBySystemName("ap-south-1").
    using (var s3Client = new AmazonS3Client(accessKey, accessSecret, RegionEndpoint.GetBySystemName("YOURVALUE")))
    {
        var folderPath = "AppData/Websites/Cases";
        var fileTransferUtility = new TransferUtility(s3Client);
        using (Stream stream = fileTransferUtility.OpenStream(bucket, folderPath + "/" + fileKey))
        using (var memoryStream = new MemoryStream())
        {
            stream.CopyTo(memoryStream);
            var response = memoryStream.ToArray();
            return Convert.ToBase64String(response);
        }
    }
}
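For the AJAX call below to reach this method over HTTP, it needs to be exposed as an MVC action. A hedged sketch of one way to wire it up; the controller name is an assumption, and the action parameter is named fileName to match the AJAX URL below:
// Hypothetical controller exposing the S3 download as an MVC action for the AJAX call below.
// The controller name and the fileName parameter name are assumptions.
using System.Web.Mvc;

public class HomeController : Controller
{
    [HttpGet]
    public ContentResult DownloadPrivateFileS3(string fileName)
    {
        // Returns the Base64 string to the browser as plain text.
        string base64 = DownloadFileAsBase64(fileName);
        return Content(base64, "text/plain");
    }

    // Stand-in for the DownloadPrivateFileS3(string fileKey) method shown above;
    // paste its body here (renamed to avoid clashing with the action name).
    private string DownloadFileAsBase64(string fileKey)
    {
        throw new System.NotImplementedException();
    }
}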
Second, write the jQuery code to download the Base64 string as a file:
function downloadPrivateFile(fileName) {
    $.ajax({
        url: 'DownloadPrivateFileS3?fileName=' + fileName,
        success: function (result) {
            // result is the Base64 string returned by the C# method above.
            var link = document.createElement('a');
            link.download = fileName;
            link.href = "data:application/octet-stream;base64," + result;
            document.body.appendChild(link);
            link.click();
            document.body.removeChild(link);
        }
    });
}
Call the downloadPrivateFile method from anywhere in your HTML/C#/jQuery.
Happy coding.

'Error 403: The account for the specified project has been disabled' while accessing the Google Cloud Storage API from Java

I created multiple accounts, and every time $1 was charged to my credit card. I was then able to create a bucket in https://console.cloud.google.com/, but after I started accessing the bucket from my Java code as below, the account got blocked. I tried multiple times.
Java code:
Creating the credentials:
HttpTransport httpTransport = new NetHttpTransport();
JsonFactory jsonFactory = new JacksonFactory();
List<String> scopes = new ArrayList<String>();
scopes.add(StorageScopes.DEVSTORAGE_FULL_CONTROL);

Credential credential = new GoogleCredential.Builder()
        .setTransport(httpTransport)
        .setJsonFactory(jsonFactory)
        .setServiceAccountId(
                propsReaderUtil.getValue(ACCOUNT_ID_PROPERTY))
        .setServiceAccountPrivateKeyFromP12File(
                new File(getClass().getClassLoader().getResource(propsReaderUtil.getValue(
                        PRIVATE_KEY_PATH_PROPERTY)).getFile()))
        .setServiceAccountScopes(scopes).build();

storage = new Storage.Builder(httpTransport, jsonFactory, credential)
        .setApplicationName(propsReaderUtil.getValue(APPLICATION_NAME_PROPERTY))
        .build();
Uploading the stream:
Storage storage = getStorage();
StorageObject object = new StorageObject();
object.setBucket(bucketName);

InputStream stream = file.getInputStream();
try {
    String contentType = URLConnection.guessContentTypeFromStream(stream);
    InputStreamContent content = new InputStreamContent(contentType, stream);
    Storage.Objects.Insert insert = storage.objects().insert(bucketName, null, content);
    insert.setName(file.getName());
    insert.execute();
} finally {
    stream.close();
}
Please let me know if I am doing something wrong, or suggest the best way to do this.
Any suggestions are appreciated.
Thanks in advance.
Error 403 is an example of an error response you receive if you try to list the buckets of a non-existent project or one in which you don't have permission to list buckets.
The account associated with the project that owns the bucket or object has been disabled. Check the Google Cloud Platform Console to see if there is a problem with billing, and if not, contact account support.
More information can be found in HTTP Status and Error Codes.

Google BigQuery Service Account Credentials using JSON file in C# application

While creating a service account for Google BigQuery, there are two key file types: 1. P12 key file, 2. JSON key file.
I am able to connect to Google BigQuery with service account credentials using the P12 key file with the following code.
string serviceAccountEmail = "XXXX@developer.gserviceaccount.com";
var certificate = new X509Certificate2(@"FileName.p12", "Secret Key", X509KeyStorageFlags.Exportable);
ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(serviceAccountEmail)
    {
        Scopes = new[] { BigqueryService.Scope.Bigquery, BigqueryService.Scope.BigqueryInsertdata, BigqueryService.Scope.CloudPlatform, BigqueryService.Scope.DevstorageFullControl }
    }.FromCertificate(certificate));

BigqueryService Service = new BigqueryService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
    ApplicationName = "PROJECT NAME"
});
Now I am trying to connect with service account credentials using the JSON file type, but I could not figure out the proper syntax for creating the credential.
How can we connect to Google BigQuery with service account credentials using a JSON file?
Thanks,
I found the link below, which indicates that service account authentication using a JSON file is not yet supported in the Google BigQuery API client for C#, so I would like to close the question.
https://github.com/google/google-api-dotnet-client/issues/533
It is now possible (I used v 1.13.1.0 of Google APIs).
GoogleCredential credential;
using (Stream stream = new FileStream(@"C:\mykey.json", FileMode.Open, FileAccess.Read, FileShare.Read))
{
    credential = GoogleCredential.FromStream(stream);
}

string[] scopes = new string[] {
    BigqueryService.Scope.Bigquery,
    BigqueryService.Scope.CloudPlatform,
};
credential = credential.CreateScoped(scopes);

BaseClientService.Initializer initializer = new BaseClientService.Initializer()
{
    HttpClientInitializer = (IConfigurableHttpClientInitializer)credential,
    ApplicationName = "My Application",
    GZipEnabled = true,
};
BigqueryService service = new BigqueryService(initializer);
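As a small, hedged usage sketch: once the BigqueryService above is built from the JSON credential, a query can be run against it roughly like this. The project id and SQL here are placeholders, not values from the question.
// Hedged usage sketch for the BigqueryService created above (Google.Apis.Bigquery.v2).
// "my-project-id" and the SQL string are placeholders/assumptions.
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;

public static class BigQueryUsageExample
{
    public static void RunQuery(BigqueryService service)
    {
        var queryRequest = new QueryRequest
        {
            Query = "SELECT 1 AS answer"   // placeholder query
        };

        // Execute the query synchronously and print each cell of each row.
        QueryResponse response = service.Jobs.Query(queryRequest, "my-project-id").Execute();
        foreach (TableRow row in response.Rows)
        {
            foreach (TableCell cell in row.F)
            {
                System.Console.Write(cell.V + "\t");
            }
            System.Console.WriteLine();
        }
    }
}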

Amazon S3 Response DotNetZip MVC

My application (MVC) needs to download, zip and return one or many files from Amazon S3. I am using the .NET SDK and GetObject to receive the files, and want to use DotNetZip to then zip them up and return the generated zip file as a file stream result for the user to download.
Can anyone suggest the most efficient way of doing this? I am seeing OutOfMemory exceptions when downloading large files from S3; they could be up to 1 GB in size, for example.
My code so far:
using (
    var client = AWSClientFactory.CreateAmazonS3Client(
        "apikey",
        "apisecret",
        new AmazonS3Config { RegionEndpoint = RegionEndpoint.EUWest1 })
)
{
    foreach (var file in files)
    {
        var request = new GetObjectRequest { BucketName = "bucketname", Key = file };
        using (var response = client.GetObject(request))
        {
        }
    }
}
If I copy the response into a memory stream and add that to the zip, it all works OK (on small files), but with large files I assume I cannot store the entire thing in memory?
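One way to avoid buffering whole objects in memory is to let DotNetZip pull each file on demand while the archive is written straight to the response stream. This is a hedged sketch rather than a definitive answer: the bucket name, keys, and credentials are placeholders, and it assumes DotNetZip's WriteDelegate overload of AddEntry, which fetches each S3 object only when that entry is written.
// Hedged sketch: stream S3 objects through DotNetZip into the MVC response without
// holding entire files in memory. Bucket, keys, and credentials are placeholders.
using System.Collections.Generic;
using System.Web.Mvc;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Ionic.Zip;

public class ZipDownloadController : Controller
{
    public void DownloadZip(IEnumerable<string> files)
    {
        Response.ContentType = "application/zip";
        Response.AddHeader("Content-Disposition", "attachment; filename=download.zip");

        using (var client = new AmazonS3Client("apikey", "apisecret",
            new AmazonS3Config { RegionEndpoint = RegionEndpoint.EUWest1 }))
        using (var zip = new ZipFile())
        {
            foreach (var file in files)
            {
                var key = file; // avoid modified-closure issues inside the delegate
                // The WriteDelegate runs when the entry is written, so the S3 object
                // is fetched and copied straight into the zip output at that point.
                zip.AddEntry(key, (entryName, entryStream) =>
                {
                    var request = new GetObjectRequest { BucketName = "bucketname", Key = key };
                    using (var response = client.GetObject(request))
                    {
                        response.ResponseStream.CopyTo(entryStream);
                    }
                });
            }

            // Write the archive directly to the HTTP response as it is built.
            zip.Save(Response.OutputStream);
        }
    }
}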